There’s a reason society converged on loss-limited companies being the right thing to do, even though they create unlimited upside with capped downside: individuals tend to be far too risk averse.
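To make that asymmetry concrete, here is a minimal numeric sketch (the numbers are purely illustrative and not from anything above): capping the downside at the equity invested can flip the expected value of a risky venture from negative to positive, which is the behavioural lever at issue.

```python
# Illustrative sketch: how limited liability changes the expected value
# of a risky venture. All numbers are hypothetical.
p_success = 0.9
gain = 100          # payoff if the venture succeeds
loss = -2000        # payoff if it fails badly
equity = 50         # capital actually at stake under limited liability

# Unlimited liability: the owner bears the full loss.
ev_unlimited = p_success * gain + (1 - p_success) * loss

# Limited liability: the downside is capped at the equity invested.
ev_limited = p_success * gain + (1 - p_success) * max(loss, -equity)

print(f"EV with unlimited liability: {ev_unlimited:.1f}")  # -110.0
print(f"EV with limited liability:   {ev_limited:.1f}")    #   85.0
```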
I think the reason that states tend to allow loss-limited companies is that it causes them to have larger GDP (and thus all the good/adaptive things that are caused by having larger GDP). But loss-limited companies may be a bad thing from an EA perspective, considering that such companies may be financially incentivized to act in net-negative ways (e.g. exacerbating x-risks), especially in situations where lawmakers/regulators are lagging behind.
Yes, and greater GDP maps fairly well to greater effectiveness of altruism. I think you’re too strongly focused on downside risks. They exist, and they are worth mitigating, but inaction due to fear of them will cause far more harm. Inaction due to a heckler’s veto is not a free outcome.
Removing loss limits would not cause companies to stop producing x-risks when the literal death of everyone involved is already an insufficient deterrent. It would reduce a bunch of other categories of harm, but we’ve converged on accepting that risk to avoid crippling risk aversion in the economy.