Is EA unscalable central planning?

I was discussing EA with a friend the other night and they made some criticisms I couldn't answer. I think criticisms I can't answer are usually the most illustrative ones. I'll attempt to structure the discussion:
Effective Altruism must exist due to market failure
The premise: the market is the least bad allocator of resources we have created. I agree that a failure to price in x-risks and s-risks (existential and suffering risks) is a market failure. Companies will make no money in a future where we are all dead, and consumers would pay a lot to avoid significant suffering, yet companies aren't investing in preventing either. While we let this continue, we are subsidising catastrophe.
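A toy way to state that externality (my illustrative framing, not a formal model): let $p$ be the probability of catastrophe, $V$ the value everyone collectively places on avoiding it, and $\pi$ a single firm's profit, which is only realised if we survive. If spending $c$ reduces $p$ by $\delta$, then

$$\text{social return} = \delta V - c \qquad \text{vs.} \qquad \text{private return} = \delta \pi - c.$$

Because $\pi \ll V$, there is a wide band of costs where $\delta \pi < c < \delta V$: safety spending that is clearly worth it collectively but that no individual firm will ever fund. That gap is the unpriced externality.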
Effective Altruism does not primarily focus on correcting that market failure
While lobbying government to consider x-risks and s-risks is one strand of EA, it is not (as far as I can tell) the primary focus, nor is it what most people seem to spend their time doing. In other ideologies it might be reasonable to say we are doing what we can whilst carrying on, but since we are about finding the most effective way to do things, if this were the most important thing to do, we should all do it. We should found a political party or win over an existing one, and either campaign ourselves or pay others to. Why don't we?
80,000 Hours is basically central planning and is unsustainable as the movement grows
If people choose their work based on 80,000 Hours' advice rather than their own desires and market incentives, that is a break from market allocation and a step back towards central planning, which has always fared worse in the past. Why would it be better here?
Likewise, the 80,000 Hours advice isn't scalable. If 10% of the workforce were reading 80,000 Hours, it would make much more sense to change government than to tell each individual which job they ought to be doing. At that point, rather than saying the priorities should mainly be AI and biorisk, you'd be thinking about how to shift the whole economy, which is done more effectively by the market than by central planning. Rather than advising individuals, you'd work at the level of legislation (to correct externalities).
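To make the scaling worry explicit (a crude back-of-envelope; $k$ and $K$ are made-up symbols, not real estimates): if advising one person takes a roughly fixed amount $k$ of research and coaching effort, reaching $n$ workers costs about $nk$ and grows linearly forever, while a legislative fix that corrects the externality has a large one-off cost $K$ and then covers everyone:

$$\underbrace{nk}_{\text{advising individuals}} \quad \text{vs.} \quad \underbrace{K}_{\text{one-off policy change}},$$

so the policy route dominates once $n > K/k$. The question is why we should believe we are still below that crossover.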
So why is there a goldilocks zone where it makes sense to tell individuals to change their lives, when at any larger scale they should all lobby government instead? Why are we working on something we don't intend to do indefinitely? I can't help thinking that much of EA right now is a stopgap: a way to demonstrate our legitimacy so that we can convince others to join and eventually move to our real purpose, wholesale legislative change. If that's the case, we should be honest about it. If it isn't, where does this argument go wrong?