[Question] Can/should we define quick tests of personal skill for priority areas?

Early in a career, if you’re uncertain about what you’re good at, exploration of your skills/abilities is necessary. Choosing quicker/cheaper tests at this time can help you do this more efficiently.

However, for assessing skill/personal fit in our priority areas, a lot of the advice we give is “major in X in undergrad” or “be able to get into this type of job/grad school.” To my mind, these aren’t efficient tests: by the time you’ve gotten to the higher-level classes that truly test your ability to move the field forward, or to get into the right job or grad school, it’s pretty late to pivot. Also, this advice only applies to EAs currently in college.

Instead, for priority paths, could/should 80,000 Hours and the EA community curate sets of tests, ordered from cheap to expensive, that one can use to rapidly gauge their skills? For instance, for technical AI safety research, I lay out the following hypothetical example (heads up: I’m no AI expert)[1]

  • Test 1 - Assess how fundamentally strong/flexible your technical thinking is

    • Learn a programming language / fundamentals of programming (e.g. MIT’s Intro to Python MOOC)

    • Learn Data Structures / Algorithms (e.g. Princeton’s Algorithms MOOC)

    • Start doing programming puzzles on LeetCode

    • Check:

      • Are you enjoying doing these sorts of computational puzzles?

      • Are you able to solve them in a reasonable amount of time? (~30 min)

      • If no: you may not have a high likelihood of being a good fit for technical AI safety research. You may still be able to contribute to the field in other ways, but you might need to adjust your plans.

      • If yes: continue

  • Test 2 - (it would go on from here; for instance, now start doing competitive programming and learn the fundamentals of Machine Learning)
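For concreteness, here’s the kind of puzzle the Test 1 check refers to: a minimal sketch of the classic “two sum” problem from LeetCode. (This is just an illustrative example of difficulty level, not an official part of any curriculum.) Solving something like this comfortably within ~30 minutes is the sort of signal the check is after:

```python
def two_sum(nums, target):
    """Return indices of two entries in nums that sum to target, or None."""
    seen = {}  # maps a value to the index where it was first seen
    for i, x in enumerate(nums):
        if target - x in seen:
            # The complement of x was seen earlier: we have our pair.
            return [seen[target - x], i]
        seen[x] = i
    return None

print(two_sum([2, 7, 11, 15], 9))  # -> [0, 1], since 2 + 7 == 9
```

The point of the check isn’t the specific puzzle; it’s whether working through this style of problem feels engaging rather than draining, and whether solutions come together in a reasonable time.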

The advantage of Test 1 is that you’ve found a way to test the fundamental skill of flexible technical thinking without investing a ton of time just learning accessory information (how vectors/matrices work, how TensorFlow works, etc.). You could arguably figure this out in one summer instead of over many years. The potential downsides are:

  • We’d need to make sure the tests truly serve as a good indicator of the core skill; otherwise, we’re giving advice that leads to an unacceptable number of false positives and/or false negatives.

  • It can be less motivating to work on proxy problems than to learn material related to the actual topic of interest, which can throw off the accuracy of the tests.

  • We’d have to balance how specific versus how flexible these testing guidelines should be.

[1] Again, I’m no expert on technical AI research. Feel free to dispute this example if it’s inaccurate, but I’d ask you to try to focus on the broad concept of “could a more accurate set of ranked tests exist and actually be useful for EAs?”