[Question] Can/should we define quick tests of personal skill for priority areas?

Early in a career, if you’re uncertain about what you’re good at, exploring your skills/abilities is necessary. Choosing quicker/cheaper tests at this stage can help you do this more efficiently.

However, for assessing skill/personal fit in our priority areas, a lot of the advice we give is “major in X in undergrad” or “be able to get into this type of job/grad school.” To my mind, these aren’t efficient tests: by the time you’ve reached the higher-level classes that truly test your ability to move the field forward, or gotten into the right job or grad school, it’s pretty late to pivot. Also, this advice only applies to EAs currently in college.

Instead, for priority paths, could/should 80,000 Hours and the EA community curate sets of tests, ordered from cheap to expensive, that people can use to rapidly gauge their skills? For instance, for technical AI safety research, I lay out the following hypothetical example (heads up: I’m no AI expert).[1]

  • Test 1 - Assess how fundamentally strong/flexible your technical thinking is

    • Learn a programming language / the fundamentals of programming (e.g., MIT’s Intro to Python MOOC)

    • Learn Data Structures / Algorithms (e.g., Princeton’s Algorithms MOOC)

    • Start doing programming puzzles on LeetCode (for a concrete sense of the kind of puzzle I mean, see the sketch after this list)

    • Check:

      • Are you enjoying doing these sorts of computational puzzles?

      • Are you able to solve them in a reasonable amount of time? (~30 min)

      • If no: you’re less likely to be a good fit for technical AI safety research. You may still be able to contribute to the field in other ways, but you might need to adjust your plans.

      • If yes: continue

  • Test 2 - (the tests would go on from here; for instance, now start doing competitive programming and learn the fundamentals of machine learning)
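To make “these sorts of computational puzzles” concrete, here is a minimal sketch of one classic LeetCode-style problem (“two sum”) with a Python solution. The specific problem and the code are illustrative assumptions on my part, not part of any curated test set; the Test 1 check is roughly “can you get from a problem statement like this to a working solution in ~30 minutes, and did you enjoy it?”

```python
# Illustrative example only: a classic "two sum" puzzle of the kind found
# on LeetCode. Given a list of numbers and a target, return the indices of
# two numbers that add up to the target.

def two_sum(nums: list[int], target: int) -> tuple[int, int] | None:
    """Return indices (i, j) with nums[i] + nums[j] == target, or None."""
    seen: dict[int, int] = {}  # maps each value to the index where it appeared
    for j, value in enumerate(nums):
        complement = target - value
        if complement in seen:  # have we already seen target - value?
            return seen[complement], j
        seen[value] = j
    return None

# Example usage / quick self-checks:
assert two_sum([2, 7, 11, 15], 9) == (0, 1)
assert two_sum([3, 2, 4], 6) == (1, 2)
assert two_sum([1, 2], 100) is None
```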

The advantage of Test 1 is that it tests the fundamental skill of flexible technical thinking without requiring you to invest a ton of time learning accessory information (how vectors/matrices work, how TensorFlow works, etc.). You could arguably figure this out in one summer instead of over many years. The potential downsides are:

  • We’d need to make sure the tests truly serve as a good indicator of the core skill; otherwise we’re giving advice that leads to an unacceptable number of false positives and/or false negatives.

  • It can be less motivating to work on proxy problems than on material directly related to the topic of interest, which can throw off the accuracy of the tests.

  • We’d have to balance how specific these testing guidelines should be against how flexible they need to stay.

[1] Again, I’m no expert on technical AI research. Feel free to dispute this example if it’s inaccurate, but I’d ask you to try to focus on the broad concept of “could a more accurate set of ranked tests exist and actually be useful for EAs?”