I find it difficult to think about this issue in a principled way. Not using standard marketing tactics is not costless. We used the language we used because it was the most effective. Using different language would have caused a decrease in EAG attendees and a decrease in the total value of the conference.
For the sake of argument, let’s assume that some kinds of off-putting language are more effective at getting people to attend. How would you model the tradeoff between generating extra value at EA Global on the one hand and the harm of off-putting language on the other?
Suppose the off-putting but more effective language causes an additional 100 people to attend EA Global. Suppose also that in expectation a marginal EAG attendee is worth $4,200 in donations to effective charities. Would you be willing to use off-putting but more effective language if it generated $420,000 in donations to effective charities? If not, is there a different number where you’d be willing to switch?
This discussion is really valuable by the way and I appreciate the time you’ve both put into it.
My take is that there’s a trade-off here between what’s most effective for short-term value (getting more attendees at EA Global) and what’s most effective for building a powerful, well-supported long-term brand. We have better data on what’s effective for the short-term value, because the feedback loops are tighter. This could mean that it should get more weight (because we actually know what we’re doing there), but there’s a danger that it means we swing too far towards it. The off-putting messaging could, in lots of low-grade ways, lower a whole lot of people’s opinions of EA/CEA/EAG. For a movement that trades so much on intellectual leadership, this worries me.
Here’s one guesstimate. Take it with a vast amount of salt; I don’t even really believe the framework I’m using. I just want to show how you might get going with these comparisons:
Damage to brand = 0.1% of brand value; brand value = ~$1B; so I’d be willing to switch if generating >$1M.
[Also of course it’s not binary. We can probably look for compromise solutions which get a lot of the marketing value and a lot of the long-term brand value]
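To make the comparison concrete, here is a quick sketch that puts the two sides next to each other, using only the illustrative numbers from this thread (the $4,200-per-attendee and 0.1%-of-$1B figures are guesstimates from the discussion, not measurements):

```python
# Break-even sketch for off-putting-but-effective marketing language.
# All inputs are the illustrative numbers from this thread, not real data.

extra_attendees = 100           # marginal attendees gained from the more effective language
value_per_attendee = 4_200      # expected donations per marginal EAG attendee ($)
brand_value = 1_000_000_000     # guesstimated total brand value ($)
brand_damage_fraction = 0.001   # 0.1% of brand value lost, per the framing above

marketing_gain = extra_attendees * value_per_attendee
brand_cost = brand_value * brand_damage_fraction

print(f"gain from extra attendees: ${marketing_gain:,}")   # $420,000
print(f"estimated brand damage:    ${brand_cost:,.0f}")    # $1,000,000
print("worth it?", marketing_gain > brand_cost)            # False
```

Under these particular inputs, $420,000 of extra donations falls short of the ~$1M break-even point, so the off-putting language would lose; with a smaller brand-damage fraction the conclusion flips.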
There’s even a case that at the margin we should prefer more consolidation over more growth (of the EA community generally and EAG specifically), in which case it would be good to have emails which are differentially attractive to people (like Howie and Kit) who are, or could become, high-value community members, rather than differentially off-putting to them.
I think a stronger argument can be made in favour of the chosen marketing methods. It would probably conclude with something like ‘the huge value of a small number of extra links formed between otherwise-disjoint groups outweighed the minor weakening of cooperation standards across the community’.
Owen’s comment shows that the numbers can be big on the other side too, but valuing brands is a notoriously hard problem. In the hope that people refer back to this discussion when considering future strategies, here is an explicit estimate of one component of the value of avoiding minor harm to trust, for this specific case. It works by assuming that anyone put off from CEA simply shifts collaboration from one organisation to another, causing efficiency loss from wasting comparative advantages, not total loss. It also recognises that I made an unusually large update, and the average will be much smaller. Bracketed items are multiplied together to give a single item on the next line.
(present value of a GWWC pledge, $73,292 x number of pledges next year, 856) x size of CEA compared to GWWC proxied by headcount, 2.45 x (my unusually large update to engagement with CEA, 30% x perceived relative strength of other affected people’s reactions, 17.5%) x relative advantage of CEA over competition, 17% x proportion of people with negative reactions, 48%
= (value realised by GWWC next year, $62,737,952 x size of CEA compared to GWWC proxied by headcount, 2.45) x (average affected person’s shift from CEA to elsewhere, 5.25% x relative advantage of CEA over competition, 17%) x proportion of people with negative reactions, 48%
= value realised by CEA next year, $153,707,982 x (inefficiency from one affected person’s shift, 0.89% x proportion of people with negative reactions, 48%)
= value realised by CEA next year, $153,707,982 x proportion of CEA value lost, 0.43%
= value of one year of CEA minor reputation preservation, ≈$658,485 (carrying unrounded intermediates through)
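A quick way to sanity-check the chain is to multiply the raw inputs straight through; note that rounding the intermediate percentages can shift the final figure by a few percent:

```python
# Reproducing the reputation-cost chain above from its stated inputs.
pledge_value = 73_292        # present value of a GWWC pledge ($)
pledges_next_year = 856
cea_vs_gwwc = 2.45           # CEA size relative to GWWC, proxied by headcount
my_update = 0.30             # unusually large update to engagement with CEA
others_relative = 0.175      # perceived relative strength of others' reactions
cea_advantage = 0.17         # relative advantage of CEA over competition
affected_share = 0.48        # proportion of people with negative reactions

gwwc_value = pledge_value * pledges_next_year           # 62,737,952
cea_value = gwwc_value * cea_vs_gwwc                    # ~153,707,982
avg_shift = my_update * others_relative                 # 5.25%
inefficiency = avg_shift * cea_advantage                # ~0.89%
value_lost = cea_value * inefficiency * affected_share  # ~658,485

print(f"estimated annual cost of minor reputation harm: ${value_lost:,.0f}")
```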
This model does not incorporate the effects EAG marketing can have on other EA organisations’ reputations (I suspect large), the value of not putting people off the movement entirely (unsure), or the effort required to clean up one’s reputation in the unlikely case that lasting harm is incurred (low in expectation?). To handle overoptimisation, I have tried to keep inputs conservative rather than discounting explicitly.
My guess after public and private discussion is that the approach which captures the most total value would be something like aggressive marketing (including pushing known EAs hard to tell their friends, slightly-more-than-comfortable numbers of chaser emails to applicants, and focussing almost entirely on the positives of attending) while avoiding anyone feeling deliberately misled. Obviously CEA is better placed to make this call, and I hope the broad discussion will help guide future decisions.
I realized I never indicated what I thought after the discussion. I now endorse the position Kit suggests:
My guess after public and private discussion is that the approach which captures the most total value would be something like aggressive marketing (including pushing known EAs hard to tell their friends, slightly-more-than-comfortable numbers of chaser emails to applicants, and focussing almost entirely on the positives of attending) while avoiding anyone feeling deliberately misled. Obviously CEA is better placed to make this call, and I hope the broad discussion will help guide future decisions.
I’m not sure if this discussion has changed your view on using deceptive marketing for EA Global, but if it has, what do you plan to do to avoid it happening in future work by EA Outreach?
Also, it’s easy for EAs with mainly consequentialist ethics to justify deception and non-transparency for the greater good, without considering consequences like the ones discussed here about trust and cooperation. Would it be worth EAO attempting to prevent future deception by promoting the idea that we should be honest and transparent in our communications?
I agree with this. One frame is that the marginal move towards ‘effective marketing tactics’ is also what captures the marginal attendees. This seems like it could be tested against data in the long term: attendance at a conference vs. being active in the community X months later.
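That test needs very little machinery. A minimal sketch, assuming a hypothetical attendee list tagged by marketing channel and by whether each person was still active a year later (all field names and data here are invented for illustration):

```python
# Hypothetical data: how each attendee heard about EAG, and whether they
# were still active in the community 12 months later. Entirely made up.
attendees = [
    {"channel": "aggressive_email", "active_12mo": False},
    {"channel": "aggressive_email", "active_12mo": True},
    {"channel": "friend_referral",  "active_12mo": True},
    {"channel": "friend_referral",  "active_12mo": True},
    {"channel": "aggressive_email", "active_12mo": False},
]

def retention_by_channel(rows):
    """Share of attendees from each marketing channel still active later."""
    totals, active = {}, {}
    for r in rows:
        ch = r["channel"]
        totals[ch] = totals.get(ch, 0) + 1
        active[ch] = active.get(ch, 0) + r["active_12mo"]  # bools sum as 0/1
    return {ch: active[ch] / totals[ch] for ch in totals}

print(retention_by_channel(attendees))
```

If the channels that capture the most marginal attendees also show the worst retention, that would be direct evidence for the short-term/long-term trade-off discussed above.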
Thanks for the very valuable discussion!
Thanks for this. Very helpful.