What’s the comparative?
I think the comparative is an inviting form of decoupling norms, rather than the current situation where norms are fractured across conversational chains. I don't think decoupling norms work when both parties haven't opted in, so people should switch to the dominant norm of the sphere they're in. An illustrative example:
EAs try to avoid being persuasive in favor of being explanatory, and thus do not engage within the frame of their opponents for fear of low fidelity. E.g., in the Will MacAskill sweatshop discourse, critics argue about impermissibility, and EAs instead contest the claim on the metaethical level.
EAs then respond that utilitarianism is true and that the critics aren't engaging with or justifying a comparative (which, for a deontologist, is a category mistake about morality), and act as if the critic was non-responsive (the Chappell thread comes to mind).
I will note this is a special case: many EA critics don't want to re-litigate the utilitarianism vs. deontology debate but instead want to articulate the side-constraint violation and inform others of that line of thinking within EA.
The critic feels misunderstood: the EAs are very nice and say they're really happy the critic is here, but the critic doesn't feel heard, because the criticism was met with words of praise rather than an actual response.
In turn, the critic continues along the same chain of logic, which EAs have not sharpened or pushed towards the true crux.
Some EAs would see this as a motte-and-bailey rather than getting to the crux, but cruxes can be asymmetric in that different critics combine claims together (e.g., "woke" critics combining with deontologists of more centrist sensibilities). But I think explanations that are done well are persuasive, because they reframe truth-seeking ideas in accessible language that dissolves cruxes and seeks agreement and cooperation.
Another illustration, at the macro level of the comparative:
Treat critics well at the resource level and outside the argument itself, so that the asymmetric resources of EA don't come to bear.
Make sure you are responsive to their concerns and use reasoning transparency. Instead of saying "thanks for responding and being a critic" and leaving it there, actually engage forcefully with the ideas and then thank them for their time.
The core problem right now is that EAs lead with being open to changing their minds, which sets the discourse up for failure: when EAs don't change their minds, the critics feel misled.
To be clear, there are harms in trying to be persuasive (e.g. sophistry, lying, motivated reasoning). But sometimes being persuasive is just a matter of speaking the argumentative language of the other side.
This is a great comment, and I think it made me get much more of what you're driving at than the (much terser) top-level comment.
Yeah, I should have written more, but I try to keep my shortform casual to lower the barrier to entry and to allow for expansions based on different readers' issues.
What do you mean by “resource” here?
Examples of resources that come to mind:
Platforms and the ability to amplify. I worry a lot about the amount of money going into global priorities research and graduate students (even though I do agree it's net good). For instance, most EA PhD students take teaching buyouts and so probably have more hours to devote to research. Sharing resources probably means a good distribution of prestige bodies and amplification gatekeepers.
To be explicit, my model of the modal EA is that they have bad epistemics and would take this to mean funding bad-faith critics (and there are so many), but I do worry that sometimes EA wins in the marketplace of ideas due to money rather than truth.
Access to the materials necessary to make criticisms (e.g. AI safety papers should be more open with dataset documentation, etc.).
Again, this is predicated on good-faith critics.