I made two highly related prediction markets several months ago; kudos to you for doing far more work than me!
Overall, I think the idea makes sense. EA has a valuable message that would likely benefit from more airtime.
On the object level of this post, I am concerned that you have neither thought through how to get the message across effectively in the long term, nor considered what you are actually asking of the people you message.
Your tone in the example message seems off; who are you to tell them what doing so will cost them, financially or reputationally? You want to piggyback off their brand because they have done a better job than your nonprofit at building one. Respect that! Give information, not conclusions (or: show, don't tell).
In the body of this post, you insinuate that you are copy-pasting the same or a very similar message across a bunch of channels/groups. This seems unlikely to reach the best prospects or convert them effectively. The channels most likely to actually add a note are also likely mission-aligned with your nonprofit but don't know it yet; you should explain, in a curated fashion, why that is true.
Again in the body, you devote two bullet points to how this is low-cost for them and probably helps their brand. Again, this is not your decision to make: they know how to build their brand better than you do. Further, you seem to be underestimating the reputational cost to them of shilling what are, in their viewers'/readers' eyes, random charities (both the annoyance factor and the fact that their reputation is now linked to yours).
Vibes-wise, from this post I get the impression you are in good faith trying to get more people to spam popular figures with their EA charities, and that does not seem well suited to good long-term outcomes for EA.
I’ve made this market to help predict the likelihood:
Agreed; I do not mean to imply a nuclear holocaust would not be horrible.
I do not think the FLI should cherry-pick extraordinarily high numbers to make that case, though, and their doing so (and our sharing of it) eats away at our epistemic commons.
I'd really rather not show that video to a well-informed friend, as they'd go "Wait, but I know that's wrong" and then discount other things I say about X-Risk.
I find it disappointing that they reference nuclear winter without qualifying that it is quite unlikely given today's arsenals. I would recommend against sharing it in its current state.
What have you found the EA community to be like in Tulsa? I've grown partial to having EA/Rationality events in Seattle and am somewhat concerned about losing out on those in Tulsa.
I think you’re referring to “It Looks Like You’re Trying To Take Over The World” by Gwern: https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world
Each individual's qualia being equal, healthier and happier humans actively improve the future, whereas healthier and happier animals do not.