How would you recommend deciding which AI Safety orgs are actually doing useful work? According to this comment, and my very casual reading of LessWrong, there is definitely no consensus on whether any given org is net-positive, net-neutral, or net-negative.
If you’re working in a supporting role (e.g., engineering or HR) and can’t really evaluate the theory yourself, how would you decide which orgs are net-positive to help?
First, make sure the work you’re supporting is actually safety rather than capabilities.
Distinguishing who is doing the best safety work is challenging if you can’t evaluate the research directly. Your best path is probably to find someone who seems trustworthy and competent to you and get their opinion on the organizations you’re considering. That could come from a direct contact, or from a published review such as Lark’s.