“EAs generally have better opinions when they’ve been around EA longer”
Except on the issues that EAs are systematically wrong about, where they will tend to have worse opinions, which we won’t notice because we share those opinions too. For example, if AMF is actually worse than standard aid programs at reducing global poverty, or if AI risk is actually not a big deal, then time spent in EA is correlated with worse opinions on those topics.
Epistemic status: grappling with something confusing. May not make sense.
One thing that confuses me is whether we should just be willing to “eat that loss” in expectation. I think most EAs agree that individuals should be somewhat risk-seeking in, e.g., career choice, since this allows the movement to hold a portfolio. But maybe there are correlated risks across the movement (for example, if we’re wrong about Bayesian decision theory, or about meta-philosophical commitments like preferring parsimony) that we basically can’t de-risk without cutting a lot into expected value.
An analogy is startups. A startup implicitly has to take on some epistemic (and other) risks: that the product is valuable, that the vision for how to organize the team is good, and so on. VCs are fine with funding long-shot ideas as long as their portfolio is good (lots of startups with relatively uncorrelated risks).
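To make the portfolio intuition a bit more concrete, here is a rough simulation sketch (the success probabilities and the correlation structure are made-up numbers, purely for illustration): with many independent long shots, the chance that at least one pays off is high, but a shared failure mode, like a wrong assumption every bet depends on, caps how much diversification can buy.

```python
import random

def p_at_least_one_success(n_bets=20, p_success=0.1, p_shared_failure=0.0, trials=100_000):
    """Estimate P(at least one bet pays off) for a portfolio of long-shot bets.

    p_shared_failure is the chance of a common shock (e.g. a shared wrong
    assumption) that sinks every bet at once; 0 means fully independent bets.
    """
    hits = 0
    for _ in range(trials):
        if random.random() < p_shared_failure:
            continue  # correlated failure: the whole portfolio goes down together
        if any(random.random() < p_success for _ in range(n_bets)):
            hits += 1
    return hits / trials

print("independent bets:", p_at_least_one_success())                      # ~0.88
print("correlated bets :", p_at_least_one_success(p_shared_failure=0.5))  # ~0.44
```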
So maybe in some ways we should think of the world as a whole as holding a portfolio of potential do-gooder social movements, and we should just try to build the best movement we can under our own movement’s assumptions.
Another analogy is the Hundred Schools of Thought era in China, where at least one school of thought (Mohism) had important similarities to ours. That school did not end up winning, for reasons that are not necessarily the best according to our lights. But maybe it was a good shot anyway, and if the Mohists had compromised too much on their values or epistemology, they might not have produced much value.
This is what confuses me when people like Will MacAskill talk about EA being a new ethical revolution. Should we think of an “EA ethical revolution” as something that is the default outcome as long as we work really hard at it, and that we can de-risk and still achieve, or is the implicit assumption that we should think of ourselves as a startup, one of the world’s bets (among many) for achieving an ethical revolution?
I think one clear disanalogy with startups is that startups are eventually judged by reality, whereas we aren’t, because doing good and getting more money are not that strongly correlated. If we just eat the risk of being wrong about something, the worst case is not failure, as it is for a startup, but sucking up all the resources into the wrong thing.
Also, small point, but I don’t think Bayesian decision theory is particularly important for EA.
Anyway, maybe this is worth considering eventually, but as it stands we’ve done several orders of magnitude too little analysis to start conceding.
I mean on average; obviously you’re right that our opinions are correlated. Do you think there’s anything important about this correlation?
My broader point is something like: in a discussion about deference and skepticism, it feels odd to only discuss deference to other EAs. By conflating “EA experts” and “people with good opinions”, you’re missing an important dimension of variation (specifically, the difference between a community-centred outside view and a broader outside view).
Apologies for phrasing the original comment as a “gotcha” rebuttal rather than trying to distill a more constructive criticism.
Correlation usually implies higher value in outside sources of variance, even if their mean is slightly lower. We should actively look for additional sources of high-value variance. And smart people outside of EA often have valuable criticisms, once we can get past the instinctive “we’re being attacked” response.
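A toy way to see that first claim quantitatively (all the numbers here are invented for illustration): when your existing estimates are positively correlated, averaging more of them stops reducing error past a point, so mixing in one uncorrelated outside estimate can lower overall error even if that estimate is individually noisier and slightly biased.

```python
import numpy as np

def mse_of_average(n_corr=10, rho=0.6, sigma=1.0,
                   add_outside=False, outside_sigma=1.5, outside_bias=0.2):
    """Mean squared error of an averaged estimate of a quantity whose true value is 0.

    n_corr correlated estimates (pairwise correlation rho, noise sigma) are
    averaged; optionally one uncorrelated "outside" estimate is mixed in, even
    though it is noisier and slightly biased (i.e. its mean is worse).
    """
    rng = np.random.default_rng(0)
    cov = sigma**2 * ((1 - rho) * np.eye(n_corr) + rho * np.ones((n_corr, n_corr)))
    insiders = rng.multivariate_normal(np.zeros(n_corr), cov, size=200_000)
    estimate = insiders.mean(axis=1)
    if add_outside:
        outside = rng.normal(outside_bias, outside_sigma, size=200_000)
        estimate = (n_corr * estimate + outside) / (n_corr + 1)
    return float(np.mean(estimate**2))

print("10 correlated sources         :", mse_of_average())                  # ~0.64
print("10 correlated + 1 uncorrelated:", mse_of_average(add_outside=True))  # ~0.55
```

The particular numbers don’t matter; the point is that once opinions are correlated, the marginal value of an uncorrelated check (such as outside critics) can outweigh a modest hit to average quality.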