I prefer to just analyse and refute his concrete arguments on the object level.
I’m not a fan of engaging the person of the arguer instead of their arguments.
Granted, I don’t practice epistemic deference in regards to AI risk (so I’m not the target audience here), but I’m really not a fan of this kind of post. It rubs me the wrong way.
Challenging someone’s overall credibility instead of their concrete arguments feels like bad form and [logical rudeness](https://www.lesswrong.com/posts/srge9MCLHSiwzaX6r/logical-rudeness).
I wish EAs did not engage in such behaviour and especially not with respect to other members of the community.
I agree that work analyzing specific arguments is, overall, more useful than work analyzing individual people’s track records. Personally, partly for that reason, I’ve actually done a decent amount of public argument analysis (e.g. here, here, and most recently here) but never written a post like this before.
Still, I think people do in practice tend to engage in epistemic deference. (I think that even people who don’t consciously practice epistemic deference tend to be influenced by the views of people they respect.) I also think that people should practice some level of epistemic deference, particularly if they’re new to an area. So—in that sense—I think this kind of track record analysis is still worth doing, even if it’s overall less useful than argument analysis.
(I hadn’t seen this reply when I made my other reply).
What do you think of legitimising behaviour that calls out the credibility of other community members in the future?
I am worried about displacing concrete object-level arguments as the sole domain of engagement: a culture in which arguments cannot be allowed to stand by themselves, in which people have to be concerned about prior credibility, track record, and legitimacy when formulating their arguments...
It feels like a worse epistemic culture.
Expert opinion has always been a substitute for object-level arguments because of deference culture. Nobody has object-level arguments for why x-risk in the 21st century is around 1/6: we just think it might be because Toby Ord says so and he is very credible. Is this ideal? No. But we do it because expert priors are the second-best alternative when there is no data to base our judgments on.
Given this, I think criticizing an expert’s priors is functionally an object-level argument, since the expert’s prior is so often used as a substitute for object-level analysis.
I agree that a slippery slope endpoint would be bad but I do not think criticizing expert priors takes us there.
To expand on my complaints in the above comment:
I do not want an epistemic culture that finds it acceptable to challenge an individual’s overall credibility in lieu of directly engaging with their arguments.
I think that’s unhealthy and contrary to collaborative knowledge-building.
Yudkowsky has laid out his arguments for doom at length. I don’t fully agree with those arguments (I believe he’s mistaken in 2–3 serious and important ways), but he has laid them out, and I can disagree with him on the object level because of that.
Given that the explicit arguments are present, I would prefer posts that engaged with and directly refuted the arguments if you found them flawed in some way.
I don’t like this direction of attacking his overall credibility.
Attacking someone’s credibility in lieu of their arguments feels like a severe epistemic transgression.
I am not convinced that the community is better for a norm that accepts such epistemic call-out posts.
> I do not want an epistemic culture that finds it acceptable to challenge an individual’s overall credibility in lieu of directly engaging with their arguments.
I think I roughly agree with you on this point, although I would guess I have at least a somewhat weaker version of your view. If discourse about people’s track records or reliability starts taking up (e.g.) more than a fifth of the space that object-level argument does, within the most engaged core of people, then I do think that will tend to suggest an unhealthy or at least not-very-intellectually-productive community.
One caveat: For less engaged people, I do actually think it can make sense to spend most of your time thinking about questions around deference. If I’m only going to spend ten hours thinking about nanotechnology risk, for example, then I might actually want to spend most of this time trying to get a sense of what different people believe and how much weight I should give their views; I’m probably not going to be able to make a ton of headway getting a good gears-level understanding of the relevant issues, particularly as someone without a chemistry or engineering background.
> I do not want an epistemic culture that finds it acceptable to challenge an individual’s overall credibility in lieu of directly engaging with their arguments.
I think it’s fair to talk about a person’s lifetime performance when we are talking about forecasting. When we don’t have the expertise ourselves, all we have to go on is what little we understand and the track records of the experts we defer to. Many people defer to Eliezer, so I think it’s a service to lay out his track record so that we can know how meaningful his levels of confidence and special insights into this kind of problem are.
> I do not want an epistemic culture that finds it acceptable to challenge an individual’s overall credibility in lieu of directly engaging with their arguments.
I don’t think this is realistic. There is much more important knowledge than one can engage with in a lifetime. The only way of forming views about many things is to somehow decide who to listen to, or at least how to aggregate the more strongly grounded opinions of others (that is, who to count as an expert, who not to, and with what weight).