I’ll admit that I was one of the people who saw this here on the EA Forum first and was disappointed, but chose not to say anything out of a desire to not rock the boat. But now that I see others are concerned, I will echo my concerns too and magnify them here—I don’t feel like this handbook represents EA as I understand it.
By page count, AI makes up 45.7% of the cause sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn’t count toward that figure), more than half of the article was dedicated to why we might not choose that cause area, with much of that space also focused on the far future of humanity. I’d find it hard for anyone to read this and not take away that the community consensus is that AI risk is clearly the most important thing to focus on.
I feel like I get it. I recognize that CEA and 80K have a right to have strong opinions about cause prioritization. I also recognize that they’ve worked hard to become such a strong central pillar of EA as they have. I also recognize that a lot of people that CEA and 80K are familiar with agree with them. But now I can’t personally help but feel like CEA is using their position of relative strength to essentially dominate the conversation and claim it is the community consensus.
I agree the definition of “EA” here is itself the area of concern. It’s very easy for any of us to define “EA” as we see it and then naturally make claims about the preferences of the community. But this would be very clearly circular. I’d be tempted to defer to the EA Survey.
AI was the top cause for only 16% of respondents in the EA Survey. Even among those employed full-time at a non-profit (maybe a proxy for full-time EAs), it was the top priority for 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.
But naturally I’d be biased toward using these results, and I’m definitely sympathetic to the idea that EA should be considered more narrowly, or we should weight the opinions of people working on it full-time more heavily. So I’m unsure. Even my opinions here are circular, by my own admission.
But I think if we’re going to be claiming, in a community space, to speak about the community, we should be more thoughtful about whose opinions we’re including and excluding. It seems pretty inexpensive to re-weight the handbook so it emphasizes AI risk just as much without being as clearly jarring about it (e.g., dedicating three chapters instead of one, or slanting so clearly toward AI risk throughout the “reasons not to prioritize this cause” sections).
Based on this, and the general sentiment, I’d echo Scott Weathers’ comment on the Facebook group that it’s pretty disingenuous to represent CEA’s views as the views of the entire community writ large, however you want to define that. I agree, and I would have preferred it be called “CEA’s Guide to Effective Altruism” or something similar.
It’s very easy for any of us to define “EA” as we see it and then naturally make claims about the preferences of the community. But this would be very clearly circular. I’d be tempted to defer to the EA Survey. AI was the top cause for only 16% of respondents in the EA Survey. Even among those employed full-time at a non-profit (maybe a proxy for full-time EAs), it was the top priority for 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.
As noted in the Facebook discussion, it seems unlikely that full-time non-profit employment is a good proxy for ‘full-time EAs’ (i.e. those working full time at an EA organisation—E2Gers would be one of a few groups who should also be considered ‘full-time EAs’ in the broader sense of the term).
For this group, one could stipulate that every group which posts updates to the EA newsletter is an EA group (I looked at the last half-dozen or so newsletters, so any group which didn’t have an update in that window is excluded, but this is likely a minor omission). Toting up a headcount of staff at each (I didn’t correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns—all of these decisions could be challenged) and recording the prevailing focus of the org gives something like this:
80,000 Hours (7 people) - Far future
ACE (17 people) - Animals
CEA (15 people) - Far future
CSER (11 people) - Far future
CFI (10 people) - Far future (I only included their researchers)
FHI (17 people) - Far future
FRI (5 people) - Far future
GiveWell (20 people) - Global poverty
Open Phil (21 people) - Far future (mostly)
SI (3 people) - Animals
CFAR (11 people) - Far future
Rethink Charity (11 people) - Global poverty
WASR (3 people) - Animals
REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
FLI (6 people) - Far future
MIRI (17 people) - Far future
TYLCS (11 people) - Global poverty
Totting this up, I get roughly two thirds of people (66%) working at orgs which focus on the far future, 22% at global poverty orgs, and 12% at animal orgs. Although it is hard to work out how much of the far-future share is AI specifically, I’m pretty sure it is the majority, so 45% AI wouldn’t be wildly off-kilter if we thought the EA Handbook should represent the balance of ‘full time’ attention.
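For anyone who wants to check the arithmetic, here is a minimal sketch (in Python, not from the original comment) that reproduces those proportions from the headcounts listed above. The cause classifications are simply the ones given in the list, with Open Phil counted under the far future per the ‘(mostly)’ note.

```python
# Reproduce the headcount tally above. Org groupings follow the list as given;
# all the caveats about who is counted (no FTE correction, no advisors or
# interns, newsletter-based inclusion) still apply.

headcounts = {
    "Far future": {
        "80,000 Hours": 7, "CEA": 15, "CSER": 11, "CFI": 10, "FHI": 17,
        "FRI": 5, "Open Phil": 21, "CFAR": 11, "REG": 4, "FLI": 6, "MIRI": 17,
    },
    "Global poverty": {"GiveWell": 20, "Rethink Charity": 11, "TYLCS": 11},
    "Animals": {"ACE": 17, "SI": 3, "WASR": 3},
}

totals = {cause: sum(orgs.values()) for cause, orgs in headcounts.items()}
grand_total = sum(totals.values())  # 189 people across the listed orgs

for cause, n in totals.items():
    print(f"{cause}: {n} people ({n / grand_total:.0%})")
# Far future: 124 people (66%)
# Global poverty: 42 people (22%)
# Animals: 23 people (12%)
```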
I doubt this should be the relevant metric for how to divvy up space in the EA Handbook. It also seems unclear what role considerations of representation should play in selecting content, or, if they do play a role, which community is the key one to proportionately represent.
Yet I think I’d be surprised if it wasn’t the case that, among those working ‘in’ EA, the majority work on the far future and a plurality work on AI. It also agrees with my impression that those most involved in the EA community skew strongly towards the far-future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.
‘full-time EAs’ (i.e. those working full time at an EA organisation—E2Gers would be one of a few groups who should also be considered ‘full-time EAs’ in the broader sense of the term).
I think this methodology is pretty suspicious. There are more ways to be a full-time EA (FTEA) than working at an EA org, or even E2Ging. Suppose someone spends their time working on, say, poverty out of a desire to do the most good, and thus works at a development NGO or for a government. Neither development NGOs nor governments will count as ‘EA orgs’ on your definition because they won’t be posting updates to the EA newsletter. Why would they? The EA community has very little comparative advantage in solving poverty, so what would be the point of, say, Oxfam or DFID sending update reports to the EA newsletter? It would frankly be bizarre for a government department to update the EA community. We might say “ah, but people who work on poverty aren’t really EAs”, but that would just beg the question.
I think your list undercounts the number of animal-focused EAs. For example, it excludes Sentience Politics, which provided updates through the EA newsletter in September 2016, January 2017, and July 2017. It also excludes the Good Food Institute, an organization which describes itself as “founded to apply the principles of effective altruism (EA) to change our food system.” While GFI does not provide updates through the EA newsletter, its job openings are mentioned in the December 2017, January 2018, and March 2018 newsletters. Additionally, it excludes organizations like the Humane League which, while not explicitly EA, have been described as having a “largely utilitarian worldview.” Though the Humane League does not provide updates through the EA newsletter, its job openings are mentioned in the April 2017, February 2018, and March 2018 newsletters.
Perhaps the argument for excluding GFI and the Humane League (while including direct work organizations in the long term future space) is that relatively few people in direct work animal organizations identify as EAs (while most people in direct work long term future organizations identify as EA). If this is the reason, I think it’d be good for someone to provide evidence for it. Also, if the idea behind this method of counting is to look at the revealed preference of EAs, then I think people earning to give have to be included, especially since earning to give appears to be more useful for farm animal welfare than for long term future causes.
(Most of the above also applies to global health organizations.)
I picked the ‘updates’ criterion purely in the interests of time (it was easier to skim), because it gives some sense of which orgs are considered ‘EA orgs’ rather than ‘orgs doing EA work’ (a distinction which I accept is imprecise: would a GW top charity ‘count’?), and because I (forlornly) hoped that pointing to a method, however brief, would forestall suspicion of cherry-picking.
I meant the quick-and-dirty data gathering to be more an indicative sample than a census. I’d therefore expect a significant margin of error (but not so significant as to change the bottom line). Other relevant candidate groups are also left out: BERI, Charity Science, Founders Pledge, ?ALLFED. I’d expect there are more.
I think that while this headcount is not a good metric for how to allocate space in the EA Handbook, it is quite a valuable overview in itself!
Just as a caveat, the numbers should not be directly compared to numbers from the EA Survey, as the latter also included cause prioritization, rationality, meta, politics, and more.
(Using such categories, some organizations would end up classified in different boxes.)
(Copying across some comments I made on Facebook which are relevant to this.)
Thanks for the passionate feedback, everyone. Whilst I don’t agree with all of the comments, I’m sorry for the mistakes I made. Since several of the comments above raise similar points, I’ll try to give general replies in some main-thread comments. I’ll also be reaching out to some of the people in the thread above to try to work out the best way forward.
My understanding is that the main worry that people have is about calling it the Effective Altruism Handbook vs. CEA’s Guide to Effective Altruism or similar. For the reasons given in my reply to Scott above, I think that calling it the EA Handbook is not a significant change from before: unless we ask Ryan to take down the old handbook, then whatever happens, there will be a CEA-selected resource called the EA Handbook. For reasons given above and below, I think that the new version of the Handbook is better than the old. I think that there is some value in explicitly replacing the old version for this reason, and since “EA Handbook” is a cleaner name. However, I do also get people’s worries about this being taken to represent the EA community as a whole. For that reason, I will make sure that the title page and introduction make clear that this is a project of CEA, and I will make clear in the introduction that others in the community would have selected different essays.
My preferred approach would then be to engage with people who have expressed concern, and see if there are changes we can make that alleviate their concerns (such as those we already plan to make based on Scott’s comment). If it appears that we can alleviate most of those concerns whilst retaining the value of the Handbook from CEA’s perspective, it might be best to call it the Centre for Effective Altruism’s EA Handbook. Otherwise, we would rebrand. I’d be interested to hear in comments whether there are specific changes (articles to add/take away/design things) that would reassure you about this being called the EA Handbook.
In this comment I’ll reply to some of the more object-level criticisms. I want to apologize for how this seemed to others, but also give a clearer sense of our intentions. I think that it might seem that CEA has tried merely to push AI safety as the only thing to work on. We don’t think that, and that wasn’t our intention. Obviously, poorly realized intentions are still a problem, but I want to reassure people about CEA’s approach to these issues.
First, re there not being enough discussion of portfolios/comparative advantage, this is mentioned in two of the articles (“Prospecting for Gold” and “What Does (and Doesn’t) AI Mean for Effective Altruism?”). However, I think that we could have emphasised this more, and I will see if it’s possible to include a full article on coordination and comparative advantage.
Second, I’d like to apologise for the way the animal and global health articles came across. Those articles were commissioned at the same time as the long-term future article, and they share a common structure: What’s the case for this cause? What are some common concerns about that cause? Why might you choose not to support this cause? The intention was to show how many assumptions underlie a decision to focus on any cause, and to map out some of the debate between the different cause areas, rather than to illicitly push the long-term future. It looks like this didn’t come across, sorry. We didn’t initially commission sub-cause profiles on government, AI and biosecurity, which explains why those more specific articles follow a different structure (mostly talks given at EA Global).
Third, I want to explain some of the reasoning behind including several articles on AI. AI risk is a more unusual area, which is more susceptible to misinterpretation than global health or animal welfare. Partly for this reason, we thought that it was sensible to include several articles on this topic, with the intention that this would provide more needed background and convey more of the nuance of the idea. I will talk with some of the commenters above to discuss if it makes sense to do some sort of merge so that AI dominates the contents page less.
What about the possibility that the Centre for Effective Altruism represents the community by editing the EA Handbook to reflect what the community values, even where that diverges from what the CEA itself concludes; that the CEA leaves out of the EA Handbook the evaluations on which it currently diverges from the community; and that it’s still called the ‘EA Handbook’ instead of ‘CEA’s Guide to EA’? Obviously this wouldn’t carry EA forward with what the CEA thinks is maximum fidelity, but it’s clear many think the CEA is trying to spread the EA message without fidelity, while acting as though they’re the only actor in the movement others can trust to carry that message. That looks not only hypocritical, but also undermines faith in the CEA.
Altering the handbook so it’s more of a compromise between multiple actors in EA would redeem the reputation of the CEA. Without that, the CEA can’t carry EA forward with fidelity at all, because the rest of the movement wouldn’t cooperate with them. In the meantime, the CEA and everyone else can hammer out what we think is the most good here on the EA Forum. If broader conclusions are drawn which line up with the CEA’s evaluation, based on a consensus that the CEA’s perspective has the best arguments behind it, that can be included in the next edition of the EA Handbook. Again, from the CEA’s perspective, that might seem like deliberately compromising the fidelity of EA in the short term to appease others. But again, from the perspective of the CEA’s current critics, they’re criticizing the 2nd edition of the EA Handbook because they perceive themselves as protecting the fidelity of EA from the Centre for Effective Altruism.

This could solve other contentious issues in EA, such as the consideration of both s-risks and x-risks from AI. The EA Handbook could be published as close to identically as possible in multiple languages, which would prevent the CEA from selling EA one way in English and the EAF selling it another way in German, and so avoid creating more trust issues which would down the road just become sources of conflict, not unlike the criticism the EA Handbook, 2nd edition, is receiving now. Ultimately, this would be the CEA making a relatively short-term compromise to ensure the long-term fidelity of EA, by demonstrating themselves to be a delegate and representative agency the EA community can still have confidence in.
Thanks for the comments Evan. First, I want to apologize for not seeking broader consultation earlier. This was clearly a mistake.
My plan now is to do as you suggest: talk to other actors in EA and get their feedback on what to include etc. Obviously any compromise is going to leave some unhappy—different groups do just favour different presentations of EA, so it seems unlikely to me that we will get a fully independent presentation that will please everyone. I also worry that democracy is not well suited to editorial decisions, and that the “electorate” of EA is ill-defined. If the full compromise approach fails, I think it would be best to release a CEA-branded resource which incorporates most of the feedback above. This option also seems to me to be cooperative, and to avoid harm to the fidelity of EA’s message, but I might be missing something.
Thanks for responding, Max. I agree that consulting some key actors without going through a democratic process makes sense. I appreciate you being able to respond to and incorporate all the feedback you’re receiving so quickly.
I find it so interesting that people on the EA Facebook page have been a lot more generally critical about the content than people here on the EA Forum—here it’s all just typos and formatting issues.