Here are a couple that came to mind just now.
1. How smart do you need to be to contribute meaningfully to AI safety? Near the top of your class in high school? Near the top of your class at an Ivy League university? A potential famous professor at an Ivy League university? A potential Fields Medalist?
2. How hard should we expect alignment to be? Are we throwing resources at a problem we expect to at least partially solve in most worlds (which is, e.g., the superficial impression I get from biorisk), or are we attempting a Hail Mary because it might just work and the problem is important enough to be worth a try (not saying that would be bad)?
3. The big labs in the West that explicitly target AGI are OpenAI and DeepMind. Others target AGI less explicitly, but include e.g. Google Brain. Are there equivalents elsewhere, such as in China? Do we know whether these exist? Am I missing labs that target AGI in the West?
4. Finally, this one's kind of obvious, but how large is the risk? What's the probability of catastrophe? I'm aware of many estimates, but this is still definitely something I'm confused about.
I think there's substantial disagreement among AI safety researchers on all of these questions except (3), though I don't have a good sense of the distributions of views either.