I find it difficult to evaluate CEA, especially after the reorganization, though I found it difficult beforehand as well.
The most significant reason is that I feel CEA has been exceedingly slow to embrace metrics for many of its activities; as an example, I'll speak to outreach.
Big-picture metrics: I would have expected that one of CEA's very first activities, years ago when EA Outreach was established, would be to measure participation in the EA community: gathering statistics on the number of people donating, the sizes of donations, the number who self-identify as EAs, the percentage who become EAs after exposure to different organizations or media, the number and size of chapters, the number who leave EA, and so on.
Obviously, some of these are difficult to measure, and others involve making assumptions, gaining access to properties other organizations run, or gathering data yourselves, but I would expect to see a concerted effort to do so, at least in part. The community has embraced Fermi estimates where appropriate, and these metrics could be estimated with far more information than Fermi estimates often require; a minimal sketch of the kind of estimate I mean appears below.
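To make this concrete, here is a minimal Python sketch of the kind of Fermi estimate I have in mind, for the number of self-identified EAs. Every input below is an illustrative placeholder I made up, not a real figure:

```python
# Hypothetical Fermi estimate of the number of self-identified EAs.
# All inputs are illustrative placeholders, not real data.

chapters = 300                 # assumed number of local EA chapters
members_per_chapter = 15       # assumed average active members per chapter
unaffiliated_multiplier = 2.5  # assumed EAs outside chapters per chapter member

chapter_members = chapters * members_per_chapter
total_estimate = chapter_members * (1 + unaffiliated_multiplier)

print(f"Chapter members: ~{chapter_members:,}")
print(f"Estimated self-identified EAs: ~{total_estimate:,.0f}")
```

The point is not the specific numbers but that even this crude structure, refreshed with real chapter counts and survey data, would beat having no estimate at all.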
So a few years in, I find it a bit mind-blowing that I'm unaware of any attempt to do this by the only organization that has had teams dedicated specifically to the improvement and growth of the movement. Were these statistics gathered, we'd be much better able to evaluate the outreach activities of CEA, which are now central to its purpose as an organization.
With regard to metrics on specific CEA activities, I've also been disappointed by the seeming lack of measurement (though this may be a transparency issue; more on that later). For example, there have been repeated instances where outreach has actively turned people off, in ways that I'm told have been expressed to CEA. Multiple friends who applied to the Pareto Fellowship felt it was run quite unprofessionally, and potential speakers at EA Global mentioned they'd found some of the movement's actions immature. In each instance, I'm aware of them becoming significantly less engaged as a result.
At times concerns such as these have been acknowledged, but given the level of my (admittedly highly anecdotal) exposure to them, it feels like they have mostly not been examined to see whether they occur at a magnitude that should give pause. It would be nice to see them fully acknowledged through quantification, so we could understand whether they come from a small minority (which of course still matters) or are actually of great concern. Quantification could involve, for example, gathering feedback on the process from everyone who applied to the Pareto Fellowship or EA Global, or everyone who considered applying; a sketch of how such feedback could be summarized follows this paragraph. I do believe some satisfaction measurements for EAGx and EA Global came out recently; I was glad to see those, and I hope they are seen as starting points rather than as the bulk of CEA's growth in measurement.
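As an illustration of how even simple survey feedback could be quantified, here is a minimal Python sketch that puts a confidence interval around a dissatisfaction rate. The applicant counts are hypothetical placeholders, not real data:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical survey: 12 of 80 applicants report a negative experience.
dissatisfied, surveyed = 12, 80
lo, hi = wilson_interval(dissatisfied, surveyed)
print(f"Dissatisfied: {dissatisfied / surveyed:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```

(The Wilson interval behaves better than the naive normal approximation at the small sample sizes this kind of survey would likely involve.) Something this simple would already tell us whether complaints like the ones I've heard are a rounding error or a real pattern.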
Another area where quantification could be helpful is the relative prominence of various communication vehicles. The cause prioritization tool, for example, is shared quite prominently, but has its success been measured? Have any alternatives been considered? Measuring and sharing this could benefit both CEA's decision making and the community's understanding of what works best in their own outreach activities; a toy comparison is sketched below.
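A toy Python sketch of the comparison I mean; the vehicle names and all figures below are hypothetical placeholders:

```python
# Hypothetical comparison of outreach vehicles by conversion rate
# (share of people reached who take some further engagement action).
# All names and numbers are illustrative placeholders.

vehicles = {
    "cause prioritization tool": {"reached": 5000, "engaged": 250},
    "newsletter":                {"reached": 3000, "engaged": 210},
}

for name, v in sorted(vehicles.items(),
                      key=lambda kv: kv[1]["engaged"] / kv[1]["reached"],
                      reverse=True):
    rate = v["engaged"] / v["reached"]
    print(f"{name}: {rate:.1%} ({v['engaged']}/{v['reached']})")
```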
The second most significant reason I find CEA tough to evaluate, which is interconnected with much of what I said above, is that I feel transparency, especially around decision making, is lacking. Other EA organizations better document why they pursue much of what they do, but CEA too often feels like a collection of projects without central filtering or direction. I believe the reorganization may have been intended to address a similar feeling, but post-reorganization projects such as EA Concepts have likewise seemed to come out of nowhere and without justification of their resourcing. It would be quite helpful to better understand the set of projects CEA considers and how its decision making leads to what we observe. So many of us have been exposed to the book giveaway: what was the decision making behind it? Should such a proactive action make us update toward believing CEA has found a quite effective promotion vehicle, or was it a trial to determine the effects of distribution?
CEA has taken initial steps toward improvement with the monthly updates, and I'd like to see them greatly expanded to specifically address decision making.
Could CEA speak to its planned approach to growing measurement and transparency moving forward?
I have many additional strong feelings and beliefs in favor of CEA as a donation target, have had many strong anecdotal experiences, and have a few beliefs that give me great pause as well. But I think measurement and transparency could do a great deal toward putting those in proper context.
Hey Josh,
As a preliminary matter, I assume you read the fundraising document linked in this post, but for those reading this comment who haven’t, I think it’s a good indication of the level of transparency and self-evaluation we intend to have going forward. I also think it addresses some of the concerns you raise.
I agree with much of what you say, but as you note, I think we've already taken steps toward correcting many of these problems. Regarding metrics on the effective altruism community, you are correct that we need to do more here, and we intend to. Before the reorganization, this responsibility didn't fall squarely within any team's jurisdiction, which was part of the problem. (For example, Giving What We Can collected a lot of this data for a subset of the effective altruism community.) This is a priority for us.
Regarding measurement of CEA's activities: internally, we test and measure everything (particularly with respect to community and outreach activities). We measure user engagement with our content (including the cause prioritization tool), the newsletter, Doing Good Better, Facebook marketing, etc., trying to identify where we can most cost-effectively get people deeply engaged. As we recently did with EAG and EAGx, we'll periodically share our findings with the effective altruism community. We will soon share our review of the Pareto Fellowship, for example.
Regarding transparency, our monthly updates, project evaluations (e.g., for EAG and EAGx, and the forthcoming evaluation of the Pareto Fellowship), and the fundraising document linked in this post are indicative of the approach we intend to take going forward. Creating all of this content is costly, and so while I agree that transparency is important, it’s not trivially true that more is always better. We’re trying to strike the right balance and will be very interested in others’ views about whether we’ve succeeded.
Lastly, regarding centralized decision-making, that was the primary purpose of the reorganization. As we note in the fundraising document, we're still in the process of evaluating current projects. I don't think the EA Concepts project cuts against this: it was simply an output of the research team, put together in a few weeks, rather than a new project like Giving What We Can or the Pareto Fellowship (the confusion may stem from using "project" in different ways). Whether we invest much more in it going forward will depend on the reception and use of this minimum version.
Regards, Michael
Would CEA be open to taking extra funding to specifically cover the cost of hiring someone new whose role would be to collect the data and generate the content in question?