Because of this, I don’t think it really makes sense to aggregate data over all cause areas. The inclusion criteria are likely to draw pretty arbitrary lines, and respondents will obviously tend to want to see more resources go to the causes they’re working on, and will differ significantly in other ways by cause area. If the proportion of people working on a given cause doesn’t match the proportion of EA funding people would like to see go to that cause, that’s interesting, but we still can’t take much away from it.
It seems weird to me that DeepMind and the Good Food Institute are on this list, but not, say, the Against Malaria Foundation, GiveDirectly, Giving What We Can, J-PAL, IPA, or the Humane League.
As stated, some orgs are small and so were not named, but still responded. Maybe a breakdown by cause area for all respondents would be more useful, given the data you already have?
What criteria were used to decide which orgs/individuals should be invited? Should we consider leaders at EA-recommended orgs, or at orgs doing cost-effective work in EA cause areas that aren’t specifically EA-aligned (e.g. the Gates Foundation), too? (This was a concern raised about the representativeness of the EA handbook. https://forum.effectivealtruism.org/posts/MQWAsdm8MSzYNkD9X/announcing-the-effective-altruism-handbook-2nd-edition#KR2uKZqSmno7ANTQJ)
A small team of CEA staffers (I was not one of them) selected an initial invite list (58 people). At present, we see Leaders Forum as an event focused on movement building and coordination. We focus on inviting people who play a role in trying to shape the overall direction of the EA movement (whatever cause area they focus on), rather than people who mostly focus on direct research within a particular cause area. As you’d imagine, this distinction can be somewhat fuzzy, but that’s the mindset with which CEA approaches invites (though other factors can play a role).
To give a specific example, while the Against Malaria Foundation is an important charity for people in EA who want to support global health, I’m not aware of any AMF staffers who have both a strong interest in the EA movement as a whole and some relevant movement-building experience. I don’t think that, say, Rob Mather (AMF’s CEO), or a representative from the Gates Foundation, would get much value from the vast majority of conversations/sessions at the event.
I should also note that the event has gotten a bit smaller over time. The first Leaders Forum (2016) had ~100 invitees and 62 attendees and wasn’t as focused on any particular topic. The next year, we shifted to a stronger focus on movement-building in particular (including community health, movement strategy, and risks to EA), which naturally led to a smaller, more focused invite list.
As with any other CEA program, Leaders Forum may continue to change over time; we aren’t yet sure how many people we’ll invite next year.
Because of this, I don’t think it really makes sense to aggregate data over all cause areas.
I mostly agree! For several reasons, I wouldn’t put much stock in the cause-area data. Most participants likely arrived at their answers very quickly, and the numbers are of course dependent on the backgrounds of the people who both (a) were invited and (b) took the time to respond. However, because we did conduct the survey, it felt appropriate to share what information came out of it, even if the value of that information is limited.
I do, however, think it’s good to have this information to check whether certain “extreme” conditions are present — for example, it would have been surprising and notable if wild animal welfare had wound up with a median score of “0”, as that would seem to imply that most attendees think the cause doesn’t matter at all.
As stated, some orgs are small and so were not named, but still responded. Maybe a breakdown by cause area for all respondents would be more useful, given the data you already have?
Given the limited utility of the prioritization data, I don’t know how much more helpful a cause-area breakdown would be. (Also, many if not most respondents currently work on more than one of the areas mentioned, but not necessarily with an even split between areas — any number I came up with would be fairly subjective.)
It seems weird to me that DeepMind and the Good Food Institute are on this list, but not, say, the Against Malaria Foundation, GiveDirectly, Giving What We Can, J-PAL, IPA, or the Humane League.
In addition to what I noted above about the types of attendees we aimed for, the list of respondent organizations doesn’t perfectly match who we invited; quite a few other organizations had invitees who didn’t fill out the survey. Giving What We Can (which is a project of CEA and was represented by CEA staff) did have representatives there, however.
As for organizations like DeepMind or GFI: While some of the orgs on the list are focused on a single narrow area, the employees we invited often had backgrounds in EA movement building and (in some cases) direct experience in other cause areas. (One invitee has run at least three major EA-aligned projects in three different areas.)
This wasn’t necessarily the case for every attendee (as I mentioned, we considered factors other than community-building experience), but it’s an important reason that the org list looks the way it does.
At present, we see Leaders Forum as an event focused on movement building and coordination. We focus on inviting people who play a role in trying to shape the overall direction of the EA movement (whatever cause area they focus on), rather than people who mostly focus on direct research within a particular cause area. As you’d imagine, this distinction can be somewhat fuzzy, but that’s the mindset with which CEA approaches invites (though other factors can play a role).
I really wish this had been included in the OP, in the section that discusses the weaknesses of the data. That section seems to frame the data as a more or less random subset of leaders of EA organizations (“These results shouldn’t be taken as an authoritative or consensus view of effective altruism as a whole. They don’t represent everyone in EA, or even every leader of an EA organization.”)
When I look at the list of organizations that were surveyed, it doesn’t look like the list of organizations most involved in movement building and coordination. It looks much more like a specific subset of that type of org: those focused on longtermism or x-risk (especially AI) and based in one of the main hubs (London accounts for ~50% of respondents, and the Bay accounts for ~30%).* Those that prioritize global poverty, and to a lesser extent animal welfare, seem notably missing. It’s possible the list of organizations that didn’t respond or weren’t named looks a lot different, but if that’s the case it seems worth calling attention to and possibly trying to rectify (e.g. did you email the survey to anyone or was it all done in person at the Leaders Forum?)
Some of the organizations I’d have expected to see included, even if the focus was movement building/coordination: GiveWell (strategy/growth staff, not pure research staff), LEAN, Charity Entrepreneurship, Vegan Outreach, Rethink Priorities, One for the World, Founders Pledge, etc. Most EAs would see these as EA organizations involved to some degree with movement building. But we’re not learning what they think, while we are apparently hearing from at least one org/person who “want to avoid being connected explicitly to the EA movement—for example, if almost all their work happens in non-EA circles, where EA might have a mixed reputation.”
I’m worried that people who read this report are likely to misinterpret the data being presented as more broadly representative than it actually is (e.g. the implications of respondents believing ~30% of EA resources should go to AI work over the next 5 years are radically different if the respondent pool disproportionately omits people who favor other causes). I have the same concerns about how this survey was presented as Jacy Reese expressed about how the leaders survey from two years ago (which also captured a narrow set of opinions) was presented:
My main general thought here is just that we shouldn’t depend on so much from the reader. Most people, even most thoughtful EAs, won’t read in full and come up with all the qualifications on their own, so it’s important for article writers to include those themselves, and to include those upfront and center in their articles.
Lastly, I’ll note that there’s a certain irony in surveying only a narrow set of people, given that even among those respondents: “The most common theme in these answers [about problems in the EA community] seems to be the desire for EA to be more inclusive and welcoming. Respondents saw a lot of room for improvement on intellectual diversity, humility, and outreach, whether to distinct groups with different views or to the general population.” I suspect if a more diverse set of leaders had been surveyed, this theme would have been expressed even more strongly.
* GFI and Effective Giving both have London offices, but I’ve assumed their respondents were from other locations.
I agree with you that the orgs you mentioned (e.g. One for the World) are more focused on movement building than some of the other orgs that were invited.
I talked with Amy Labenz (who organized the event) in the course of writing my original reply. We want to clarify that when we said “At present, we see Leaders Forum as an event focused on movement building and coordination. We focus on inviting people who play a role in trying to shape the overall direction of the EA movement (whatever cause area they focus on)”, we didn’t mean to emphasize “movement building” (in the sense of “bringing more people to EA”) over “shaping the overall direction of the EA movement” (in the sense of “figuring out what the movement should prioritize, growth or otherwise”).
My use of the term “movement building” reflected a slight misinterpretation of an internal document written by Amy. The event’s purpose was closer to discussing the goals, health, and trajectory of the movement (e.g. “how should we prioritize growth vs. other things?”) than to discussing how to grow the movement (e.g. “how should we introduce EA to new people?”).
Thanks Aaron, that’s a helpful clarification. Focusing on “people shaping the overall direction of the EA movement” rather than just movement building seems like a sensible decision. But one drawback is that coming up with a list of those people is a much more subjective (and network-reliant) exercise than, for example, making a list of movement building organizations and inviting representatives from each of them.
Thanks for your feedback on including the event’s focus as a limitation of the survey. That’s something we’ll consider if we run a similar survey and decide to publish the data next year.
Some of the organizations you listed had representatives invited who either did not attend or did not fill out the survey. (The survey was emailed to all invitees, and some of those who filled it out didn’t attend the event.) If everyone invited had filled it out, I think the list of represented organizations would look more diverse by your criteria.
Thanks Aaron. Glad to hear the invitee list included a broader list of organizations, and that you’ll consider a more explicit discussion of potential selection bias effects going forward.
(I was the interim director of CEA during Leaders Forum, and I’m now the executive director.)
I think that CEA has a history of pushing longtermism in somewhat underhand ways (e.g. I think that I made a mistake when I published an “EA handbook” without sufficiently consulting non-longtermist researchers, and in a way that probably over-represented AI safety and under-represented material outside of traditional EA cause areas, resulting in a product that appeared to represent EA without accurately doing so).
(I’ll be writing more about this in the future, and it feels a bit odd to get into this in a comment when it’s a major-ish update to CEA’s strategy, but I think it’s better to share more rather than less.) In the future, I’d like CEA to take a more agnostic approach to cause prioritisation, trying to construct non-gameable mechanisms for making decisions about how much we talk about different causes. As an example of how this might work, we might pay an independent contractor to figure out who has spent more than two years full-time thinking about cause prioritisation, and then survey those people. Obviously that project would be complicated—it’s hard to figure out exactly what “cause prio” means, and it would be important to reach out through diverse networks to make sure there aren’t network biases, etc.
Anyway, given this background of pushing longtermism, I think it’s reasonable to be skeptical of CEA’s approach on this sort of thing.
When I look at the list of organizations that were surveyed, it doesn’t look like the list of organizations most involved in movement building and coordination. It looks much more like a specific subset of that type of org: those focused on longtermism or x-risk (especially AI) and based in one of the main hubs (London accounts for ~50% of respondents, and the Bay accounts for ~30%).* Those that prioritize global poverty, and to a lesser extent animal welfare, seem notably missing. It’s possible the list of organizations that didn’t respond or weren’t named looks a lot different, but if that’s the case it seems worth calling attention to and possibly trying to rectify (e.g. did you email the survey to anyone or was it all done in person at the Leaders Forum?)
I think you’re probably right that there are some biases here. Here’s how the invite process worked this year: Amy Labenz, who runs the event, drew up a longlist of potential attendees (asking some external advisors for suggestions about who should be invited). Then Amy, Julia Wise, and I voted yes/no/maybe on each of the individuals on the longlist (often adding comments). Amy made a final call about who to invite, based on those votes. I expect that all of this means the final invite list is somewhat biased by our networks and by background assumptions we have about individuals and orgs.
Given this, I think that it would be fair to view the attendees of the event as “some people who CEA staff think it would be useful to get together for a few days” rather than “the definitive list of EA leaders”. I think that we were also somewhat loose about what the criteria for inviting people should be, and I’d like us to be a bit clearer on that in the future (see a couple of paragraphs below). Given this, I think that calling the event “EA Leaders Forum” is probably a mistake, but others on the team think that changing the name could be confusing and have transition costs—we’re still talking about this, and haven’t reached resolution about whether we’ll keep the name for next year.
I also think CEA made some mistakes in the way we framed this post (not just the author, since it went through other readers before publication). I think the post kind of frames this as “EA leaders think X”, which I expect would be the sort of thing that lots of EAs should update on. (Even though I think the post does try to explicitly disavow this interpretation (see the section on “What this data does and does not represent”), the title suggests something more like “EA leaders think these are the priorities—probably you should update towards these being the priorities”.) I think that the reality is more like “some people that CEA staff think it’s useful to get together for an event think X”, which is something that people should update on less.
We’re currently at a team retreat where we’re talking more about what the goals of the event should be in the future. I think that it’s possible that the event looks pretty different in future years, and we’re not yet sure how. But I think that whatever we decide, we should think more carefully about the criteria for attendees, and that will include thinking carefully about the approach to cause prioritization.
Thank you for taking the time to respond, Max. I appreciate your engagement, your explanation of how the invitation process worked this year, and your willingness to acknowledge that CEA may have historically been too aggressive in how it has pushed longtermism and how it has framed the results of past surveys.
In the future, I’d like CEA to take a more agnostic approach to cause prioritisation, trying to construct non-gameable mechanisms for making decisions about how much we talk about different causes.
Very glad to hear this. As you note, implementing this sort of thing in practice can be tricky. As CEA starts designing new mechanisms, I’d love to see you gather input (as early as possible) from people who have expressed concern about CEA’s representativeness in the past (I’d be happy to offer opinions if you’d like). They might also be good people to serve as “external advisors” who generate suggestions for the invite list.
Good luck with the retreat! I look forward to seeing your strategy update once that’s written up.
Some of the organizations you listed had representatives invited who either did not attend or did not fill out the survey. [...] If everyone invited had filled it out, I think the list of represented organizations would look more diverse by your criteria.
Depends a bit on how much you mean to stretch the word “some”… This is false as far as I can tell; at best, I would describe your comment as highly misleading.
I’m not sure what you mean, Peter, but I’ll try to be more clear. Of the seven organizations listed in the comment to which I replied, three of them had people invited, according to the list of people who were recorded as having been sent the invite email.
How did you interpret the word “some”? Is there another sense in which you saw the comment as misleading?
I’m sorry. I was reading uncharitably and wrote too quickly. Your latest response sounds clear and fair to me. Thanks for providing the numbers and I’m sorry for misjudging the situation.