Epistemic status: grappling with something confusing. May not make sense.
One thing that confuses me is whether we should just be willing to “eat that loss” in expectation. I think most EAs agree that individuals should be somewhat risk-seeking in, e.g., career choice, since this lets the movement hold a portfolio. But maybe there are risks correlated across the whole movement (for example, if we’re wrong about Bayesian decision theory, or about meta-philosophical commitments like preferring parsimony) that we basically can’t de-risk without cutting a lot into expected value.
An analogy is startups. Startups implicitly have to take on some epistemic (and other) risks: about the value of the product, about their vision for how the team should be organized, etc. VCs are fine with funding long-shot ideas as long as their portfolio is good (lots of startups with relatively uncorrelated risks).
So maybe in some ways we should think of the world as a whole as holding a portfolio of potential do-gooder social movements, and we should just try our best to build the best movement we can under our movement’s assumptions.
Another analogy is the Hundred Schools of Thought era in China, where at least one school of thought had important similarities to ours. That school (Mohism) did not end up winning, for reasons that don’t necessarily look good by our lights. But maybe it was a good shot anyway, and if the Mohists had compromised too much on their values or epistemology, they wouldn’t have produced much value.
This is what confuses me when people like Will MacAskill talk about EA being a new ethical revolution. Should we think of an “EA ethical revolution” as the default outcome as long as we work really hard at it, something we can de-risk and still achieve? Or is the implicit assumption that we should think of ourselves as a startup, one of the world’s many bets for achieving an ethical revolution?
I think one clear disanalogy with startups is that startups are eventually judged by reality, whereas we aren’t, because doing good and attracting more money are not that strongly correlated. If we just eat the risk of being wrong about something, the worst case is not failure, as it is for a startup, but sucking all the resources into the wrong thing.
Also, small point, but I don’t think Bayesian decision theory is particularly important for EA.
Anyway, maybe this will eventually be worth considering, but as it stands we’ve done several orders of magnitude too little analysis to start conceding.