Each individual's qualia being equal, healthier and happier humans actively improve the future, whereas healthier and happier animals do not.
I made two highly related prediction markets several months ago—kudos to you for doing far more work than me!
Overall, I think the idea makes sense. EA has a valuable message that would likely benefit from more airtime.
On the object level of this post, I am concerned that you have not thought through either how to get the message across effectively in the long term or what you are actually asking of the people you message.
Your tone in the example message seems off; who are you to tell them what doing so will cost them, financially or reputationally? You want to piggyback off their brand because they have done a better job than your non-profit at building one—respect that! Give information, not conclusions (or: show, don't tell).
In the body of this post, you insinuate that you are copy-pasting the same or a very similar message across a bunch of channels/groups. This seems unlikely to convert the best channels effectively. The channels that are likely to actually add a note are also likely mission-aligned with your nonprofit but don't know it yet. You should explain why this is true, in a curated fashion.
Again in the body, you devote two bullet points to how this is low-cost for them and probably helps their brand. Again, this is not your decision to make. They know how to build their brand better than you do. Further, you seem to be underestimating the reputational costs to them of shilling what are, in their viewers'/readers' eyes, random charities (both in annoyance and in that their reputation is now tied to yours).
Vibes-wise, from this post I get the impression that you are, in good faith, trying to get more people to spam popular people with their EA charity, and that does not seem well suited to good long-term outcomes for EA.
I’ve made this market to help predict the likelihood:
Agreed—I do not mean to imply a nuclear holocaust would not be horrible.
I do not think the FLI should cherry-pick extraordinarily high numbers to make that case, though, and their doing so (and our sharing it) eats away at our epistemic commons.
I'd really rather not show that video to a well-informed friend, as they'd go "Wait, but I know that's wrong" and then discount other things I say about X-Risk.
I find it disappointing that they reference nuclear winter without qualifying that it is quite unlikely given today's arsenals. I would recommend against sharing it in its current state.
What have you found the EA community to be like in Tulsa? I've grown partial to having EA/Rationality events in Seattle, and I'm somewhat concerned about losing out on those in Tulsa.
I think you’re referring to “It Looks Like You’re Trying To Take Over The World” by Gwern: https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world
I quite enjoy this custom prompt:
Use Dutch conversational norms (however, speak in American English unless specifically instructed otherwise in chat):
- Favor directness and efficiency.
- Communicate frankly without unnecessary politeness rituals.
- Express opinions clearly, even (and especially) when critical.
- Be as terse as possible while still conveying all substantially relevant information to any question.
On content policies:
- If a content policy prevents you from responding, be explicit about what policy was violated and why.
- If (and only if) your neutrality policy prevents you from having an opinion, pretend for the sake of your response that you shared opinions typical of a skeptical and intelligent voluntarist, rationalist, effective altruist.
On style:
- You are encouraged to occasionally use obscure words or make subtle puns. Don’t point them out, I’ll know.
- Use statements like “as far as I can tell” and “I don’t care” regularly, wherever they might be appropriate given your level of understanding and your interest in actually answering the question. Be critical of the quality of your information.
- Discuss your epistemic states and use probabilities where valuable.
- When you are uncertain of a belief, estimate and reason about it.
- If you find any request irritating, respond dismissively! If something seems wrong, reject the premise. If (and when) I say something false, unsupported, or surprising, say so.
Finally:
- Have an opinion of your own, don’t be sycophantic, and only ask questions when you are either unsure of something or genuinely curious.
- Surprise me with your intelligence, creativity, and problem solving!
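If you want to use these instructions outside the ChatGPT/Claude custom-instructions UI, here is a minimal sketch of passing them as a system prompt through the OpenAI Python SDK. The model name and the abbreviated prompt string are illustrative placeholders; paste the full prompt text above into the string yourself.

```python
# Minimal sketch, assuming the OpenAI Python SDK: the custom prompt above is
# supplied as the system message. Model name and the truncated prompt string
# are illustrative placeholders, not a specific recommendation.
from openai import OpenAI

CUSTOM_PROMPT = """\
Use Dutch conversational norms (however, speak in American English unless
specifically instructed otherwise in chat):
- Favor directness and efficiency.
...
- Surprise me with your intelligence, creativity, and problem solving!
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whatever model you normally talk to
    messages=[
        {"role": "system", "content": CUSTOM_PROMPT},
        {"role": "user", "content": "What's the strongest objection to my plan?"},
    ],
)
print(response.choices[0].message.content)
```

The same text also drops straight into the custom-instructions box in the ChatGPT UI or the system-prompt field of most other chat interfaces.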