Thanks for spelling this out.
I think to give some color to how this affects my work in particular (speaking strictly for myself as I haven’t discussed this with others on my team):
One of our organizational priorities is ensuring we are creating a welcoming and hopefully safe community for people to do good better, regardless of their identities. A large part of our work is familiarizing people with and connecting them to organizations and resources, including ones that aren't explicitly EA-branded. We are often one of their first touch points within EA and its niches, including forecasting. Many factors shape whether people decide to continue and deepen their involvement, including how positive they find these early touch points. When we route people toward organizations and individuals, we know that their perception of our recommendations in turn affects their perception of us and of EA as a whole.
Good, smart, ambitious people usually have several professional communities they could spend their time in. EA and its subcommunities are just one option, and an off-putting experience can mean losing people for good.
With this in mind, I will feel much more reluctant to direct community members to Manifold in particular, and to (EA-adjacent) forecasting spaces more broadly, especially if the community member is from a group underrepresented in EA. I think Manifold brings a lot of value, but I can't in good conscience recommend people plug into communities that I believe most of the people I advise would find notably morally off-putting.
This is of course a subjective judgment call; I understand there are strong counterarguments here, and what repels one person can attract another. But I hope this gives a greater sense of the considerations and trade-offs I (and probably many others) will have to weigh and make decisions about as a result of Manifest.
Thanks; I also appreciate you sharing your rationale here. I think this makes sense from your perspective, and while I think Manifest and Manifold in fact would be great experiences for people of all kinds, including underrepresented folks, I totally understand if we haven’t proven this to you at this point. Next time I’m in NYC, I’d enjoy speaking with you or other members of EA NYC, if you’d like that!
(I also want to note that my views shouldn’t reflect on “forecasting” any more than they reflect on “EA”; see Ozzie and Peter Wildeford’s comments for what forecasting more broadly is like. I’m in the awkward position of having run a platform for many forecasters, but not participating much as a forecaster myself.)
Just for your information, as a non-binary person who was assigned female at birth (so definitely underrepresented in EA), I would find it very upsetting if I knew you were trying to control which ideas and people I was exposed to.
I find speciesist attitudes morally off-putting, but if you kept events from me because some people there were speciesist, I'd consider that being a bad friend.
People are different. Some people consider what you're suggesting to be helpful. I do not. I just want you to be aware of the differences in preferences here, and not assume that all members of "underrepresented groups" would feel uncomfortable going to events with speakers they deeply disagree with.
I do also want to clarify that I have no desire to "control which ideas and people [anyone] is exposed to." It is more, "If I am recommending three organizations I think someone should connect with, what are the benefits or risks tied to those recommendations?"
Do you not see how that’s controlling what ideas and people they are exposed to?
They can’t make the choice on their own. You’re keeping information from them because you’ve decided what’s good for them.
I think the more robustly good thing to do is to ask your friends what their preferences are and then follow them.
Oh, it sounds like you might be misunderstanding the context in which I'm describing this occurring, and I'm not sure explaining it more fully is on-topic enough for this post. I'm going to leave this thread here for now so as not to detract from the main conversation, but I'll consider making a separate post about this and would welcome feedback there.
I really appreciate you sharing your perspective on this. I think these are extremely hard calls, as evidenced by how polarized the discussion on this post is, and to some extent it feels like a lose-lose situation. I don't think these decisions should be made in a vacuum, and I want other people's input, which is one reason I'm flagging how this affects my work and EA's larger involvement funnels.