I agree that we sometimes ignore experts in favour of people who are more value-aligned. Seems like a mistake.
As a weak counter-point to this, I have found in the past that experts who are not value-aligned can find EA ways of thinking almost incomprehensible, such that it can be very difficult to extract useful information from them. I have experienced talking to a whole series of non-EA experts and really struggling to get them to even engage with the questions I was asking (a lot of “I just haven’t thought about that”), whereas I got a ton of value very quickly from talking to an EA grad student in the area.
I empathise with this from my own experience. Having been quite actively involved in EA for 10 years, and within my own area of expertise, which is finance and investment, risk management, and to a lesser extent governance (as a senior partner and risk committee member of one of the largest hedge funds in Europe), I have seen that sometimes we ignore ‘experts’ in favour of people who are more value-aligned.
It doesn’t mean I believe we should always defer to ‘experts’. Sometimes a fresh perspective is useful for exploring and maximising potential upside, but sometimes ‘experts’ are useful in minimising downside risks that people with less experience may not be aware of, and can also save the time and effort of reinventing existing best practices, upon which improvements could be made.
I guess it is a balance between the two that varies with the context, with deference to experts perhaps more valuable in areas such as operations, legal and compliance, financial risk management, and probably others.
I’m doing a project on how we should study xrisk, and I’d love to talk to you about your risk management work etc. Would you be up for a call?
Hi Gideon, do you mean me? I have very, very little detailed knowledge of xrisk and do not believe my risk management expertise would be relevant. But happy to chat. Maybe you can PM me?
Sure!
More broadly, I often think a good way to test whether we are right is whether we can convince others. If we can’t, that’s kind of a red flag in itself.
This is valuable, but at a certain point the market of ideas relies on people actually engaging in object-level reasoning. There’s an obvious failure mode in rejecting new ideas on the sole meta-level basis that if they were good they would already be popular. Kind of like the old joke of the economist who refuses to pick up hundred-dollar bills off the ground because of the Efficient Market Hypothesis.
EA & Aspiring Rationalism have grown fairly rapidly, all told! But they’re also fairly new. “Experts in related fields haven’t thought much about EA approaches” is more promising than “experts in related fields have thought a lot about EA approaches and have standard reasons to reject them.”
(Although “most experts have clear reasons to reject EA thinking on their subject matter” is closer to being the case in AI … but that’s probably also the field with the most support for longtermist & x-risk type thinking & where it’s seen the fastest growth, IDK.)
We sort of seem to be doing the opposite to me—see for example some of the logic behind this post and some of the comments on it (though I like the post and think it’s useful).
Agree that it is a red flag. However, I also think that sometimes we have to bite the bullet on this.
Only a small red flag, IMO, because it’s rather easy to convince people of alluring falsehoods, and not so easy to convince people of uncomfortable truths.
This seems quite hand-wavy and I’m skeptical of it. Could you give an example where “we” have ignored the experts? And when you say experts, you probably refer to expert reasoning or scientific consensus and not appeals to authority.
Your statement gained a lot of upvotes, but “EA ignores experts” just fits the prevailing narrative too well, and I haven’t seen any examples of it. Happy to update if I find one.
For some related context: In the past GiveWell used to solicit external reviews by experts of their work, but has since discontinued the practice. Some of their reasons are (I can imagine similar reasons applying to other orgs):
“There is a question around who counts as a “qualified” individual for conducting such an evaluation, since we believe that there are no other organizations whose work is highly similar to GiveWell’s.”
“Given the time investment these sorts of activities require on our part, we’re hesitant to go forward with one until we feel confident that we are working with the right person in the right way and that the research they’re evaluating will be representative of our work for some time to come.”
I’ve hypothesized that one potential failure mode is that experts are not used to communicating with EA audiences, and EA audiences tend to be more critical/skeptical of ideas (on a rational level). Thus, it may be the case that experts aren’t always as explicit about some of their concerns or caveats, perhaps because they expect their audiences to defer to them, or because they have a model of which things people will be skeptical of and thus need defending/explaining, but that audience model doesn’t apply well to EA. I think there may be a case/example to highlight with regard to nuclear weapons or international relations, but then again it is also possible that the EA skepticism in some of these cases is valid, due to a higher emphasis on existential risks rather than smaller risks.
I generally/directionally agree, and also wrote about a closely related concern previously: https://forum.effectivealtruism.org/posts/tdaoybbjvEAXukiaW/what-are-your-main-reservations-about-identifying-as-an?commentId=GB8yfzi8ztvr3c6DC