When we first encounter a question, our instinct is usually to work out: (i) who are the relevant experts? and (ii) what would they say about this question?
I think this is a valuable heuristic, but it becomes stronger if we also consider the degree of expertise and let that determine how much weight to give the experts' answers. The more closely the question matches the kind they routinely answer, and the better the feedback mechanisms that sharpen their judgement, the stronger we should expect their expertise to be.
For some questions we have very good experts. If I’ve been hurt by someone else’s action, I would trust a lawyer’s judgement about whether I have a good case for winning damages. If I want to buy a light for my bike to see by at night, I’ll listen to the opinions of people who cycle at night rather than attempt a first-principles calculation of how much light it needs to produce for me to see a certain distance.
Some new questions, though, don’t fall clearly within any existing expertise, and the best you can do is find someone who knows about something similar. I’d still prefer their opinion over that of someone chosen at random, but it should get much less weight, and may not be worth seeking out. In particular, it becomes much easier for you to become more of an expert on the question than the sort-of-expert you found.
I think this is especially true for AI safety. People sometimes cite prominent computer scientists’ lack of concern about AI safety as evidence that it is an unfounded concern. However, computer scientists typically answer questions about AI progress rather than AI safety, and these questions seem categorically different, so I’m hesitant to give their opinions on AI safety much weight. That is before accounting for the biases we can expect from AI researchers here, e.g. their incentives to be optimistic about their own field.
Thanks, I’ll adapt the page to point this out.