Some of the world’s most important problems are, surprisingly, still neglected. Lots of smart people are trying to cure cancer—it’s been around for a long time, and so has the medical research establishment attacking it.
But far fewer people are working on preventing an outbreak of a novel synthetic biological agent or safely governing advanced AI systems, because those issues are less widely known.
I prefer something like: “Imagine you’re one of the first people to discover that cancer is a problem, or one of the first people to work seriously on climate change and sketch out the important problems for others to work on. There are such problems today that don’t have [millions] of smart people already working on them.”
[This allows me to point at the value of being early on a neglected problem without presenting new “strange” problems up front. Moreover, after this part of the pitch, I think the other person is more open to hearing about a new, strange problem.]
I love this.
I really like this framing and we’ll update it to something like this soon. Thanks!
I think this is a good framing, but in isolation it may (rightly) sound epistemically fishy: you’re claiming from the start to be privy to rare and highly important information, which is unlikely by definition and is also a claim commonly made by cults and scams. That doesn’t mean it’s wrong, though; there are good reasons to think EAs are privy to special information, unlike those involved in cults and scams.
Perhaps you already do so, but I would encourage anyone making this argument to follow the framing with not just “strange” problems to work on, but also an explanation of why we/you are confident in those causes despite ignorance or doubt from the general public. Maybe this only matters to highly skeptical people, but I think it’s a necessary follow-up for the argument to be logically sound.