I don't think my framework/illustration handles moral uncertainty well, since it effectively assumes a particular normalization. Still, the general idea can be useful in those cases: consider compensating moral worldviews that are harmed by an intervention in your portfolio, and/or allocating less to interventions that are harmful on other moral worldviews than to interventions that are neutral on them, all else equal. The aim is a portfolio that is robustly positive across these worldviews and not dominated by any alternative allocation, as the sketch below illustrates.
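To make the "robustly positive and not dominated" criterion concrete, here is a minimal sketch of one way to operationalize it, assuming a hypothetical payoff matrix in which rows are moral worldviews, columns are interventions, and entries are the (normalized) value per unit of funding on that worldview. The matrix, the allocations, and the helper names are all illustrative, not part of the original framework.

```python
# A minimal sketch of checking whether a portfolio is robustly positive
# across worldviews and not dominated by candidate alternatives.
# The payoff matrix is hypothetical: rows = worldviews, columns = interventions.
import numpy as np

payoffs = np.array([
    [1.0, -0.5, 0.0],   # worldview A: intervention 2 is harmful here
    [0.2,  1.5, 0.0],   # worldview B
    [0.5,  0.3, 0.1],   # worldview C: intervention 3 is mildly positive
])

def worldview_values(weights: np.ndarray) -> np.ndarray:
    """Value of the portfolio on each worldview."""
    return payoffs @ weights

def robustly_positive(weights: np.ndarray) -> bool:
    """True if every worldview strictly gains from the portfolio."""
    return bool(np.all(worldview_values(weights) > 0))

def dominated(weights: np.ndarray, candidates: list[np.ndarray]) -> bool:
    """True if some candidate is at least as good on every worldview
    and strictly better on at least one."""
    v = worldview_values(weights)
    for other in candidates:
        w = worldview_values(other)
        if np.all(w >= v) and np.any(w > v):
            return True
    return False

# Hypothetical allocations; the alternatives shift budget away from
# the intervention that harms worldview A toward the neutral one.
portfolio = np.array([0.5, 0.3, 0.2])
alternatives = [np.array([0.5, 0.2, 0.3]), np.array([0.6, 0.2, 0.2])]

print(worldview_values(portfolio))        # value on each worldview
print(robustly_positive(portfolio))       # True: all worldviews gain
print(dominated(portfolio, alternatives)) # False: no candidate dominates
```

In this toy example, the portfolio leaves worldview A better off overall even though it funds an intervention that harms A, because the other interventions compensate; a more complete treatment would search over allocations rather than checking a fixed candidate list.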