I remain unconvinced that these offsets are particularly helpful, and certainly not at 1:1.
My understanding is that alignment as a field is much more constrained by ideas, talent, and infrastructure than by funding. Providing capabilities labs like OpenAI with more resources (and making it easier for similar organisations to raise capital) seems to do much more to shorten timelines than providing some extra cash to the alignment community today does to get us closer to good alignment solutions.
I am not saying it can never be ethical to pay for something like ChatGPT Plus, but if you are not directly using that to help with working on alignment then I think it’s likely to be very harmful in expectation.
I am pretty surprised that more of the community don’t take issue with merely using ChatGPT and similar services—it provides a lot of real-world data that capabilities researchers will use for future training, and it encourages investment into more capabilities research, even if you don’t pay them directly.
“Very harmful” seems unreasonably strong. These products are insanely widely used and Jeffrey’s impact will be negligible. I generally think that tracking minor harms like this causes a lot more stress than it’s worth.
Thanks for the response!

The products being widely used doesn’t prevent the marginal impact of another user from being very high in absolute terms, since the absolute cost of an AI catastrophe would be enormous.
In addition, establishing norms about behaviour can make a difference across a much larger number of users.
You could make similar arguments to suggest that, if you are concerned about climate change and/or animal welfare, it is not worth avoiding flying or eating vegan, but I think those arguments are at least given more serious consideration, both in EA communities and in other communities that care about these causes.
> You could make similar arguments to suggest that, if you are concerned about climate change and/or animal welfare, it is not worth avoiding flying or eating vegan
If it helps, I also hold this opinion and think that many EAs are also wrong about this. I particularly think that on the margin fewer EAs should be vegan, by their own lights (my impression is that most EAs do still fly when flying makes sense).
> The products being widely used doesn’t prevent the marginal impact of another user from being very high in absolute terms, since the absolute cost of an AI catastrophe would be enormous.
I agree with this argument in principle, but think that it just doesn’t check out—if you compare it to the other options for reducing AI x-risk (like Jeffrey’s day job!), I think his impact from that seems vastly higher than the impact of ChatGPT divided by several million users (and the share of that going to OpenAI, Microsoft, etc.). Both of these scale with the expected harm of AI x-risk, so the ratio argument still holds regardless of the absolute scale. And I generally think that significantly stressing over tiny fractions of your expected impact is a mistake, and leads to poor allocations of time and mental energy.
Even if you’re not working on AI x-risk directly, I would guess that, e.g., donations to the Long-Term Future Fund still matter way more.
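To make the ratio argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder chosen only to show the structure of the comparison, not an estimate I am defending:

```python
# Back-of-the-envelope sketch of the ratio argument above.
# All numbers are hypothetical placeholders, not real estimates.

HARM_OF_AI_CATASTROPHE = 1.0  # normalise the total harm of an AI catastrophe to 1

# Hypothetical: all ChatGPT subscriptions together shift catastrophe risk by a
# small amount, spread across millions of subscribers.
total_subscription_effect = 1e-4 * HARM_OF_AI_CATASTROPHE
num_subscribers = 5_000_000
marginal_harm_per_subscriber = total_subscription_effect / num_subscribers

# Hypothetical: direct work or donations shift risk by a small amount that is
# attributable to one person rather than divided across millions.
direct_work_effect = 1e-6 * HARM_OF_AI_CATASTROPHE

print(f"marginal harm from one subscription: {marginal_harm_per_subscriber:.1e}")
print(f"impact of direct work / donations:   {direct_work_effect:.1e}")
print(f"ratio: {direct_work_effect / marginal_harm_per_subscriber:,.0f}x")
```

Because both quantities scale linearly with the same (enormous) absolute stake, the ratio is unchanged however large you make HARM_OF_AI_CATASTROPHE, which is why the comparison survives the point about the absolute cost being enormous.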
This isn’t an actual argument, but I have a meta-level suspicion that in many people this kind of reasoning is generated more by the virtue ethics of “I want to be a good person ⇒ avoid causing harm” than by the utilitarian “I want to maximise the total amount of good done”, and that in many people the utilitarian case is more post hoc justification.
I think the influencing-other-people argument does check out, but I’m just pretty skeptical that the average number of counterfactual converts will be more than, say, 5, and this doesn’t change my argument. You also need to be careful to avoid double counting—if being vegan convinces someone else to become vegan, and THEY convince someone else, then you need to split the credit between the two of you. If you think it leads to significant exponential growth then MAYBE the argument goes through even after credit sharing and accounting for counterfactuals?
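As a toy illustration of the credit-sharing point, here is a minimal sketch in Python, with an entirely made-up conversion rate, showing how your credited impact shrinks once generation-n converts are split among the n people upstream of them:

```python
# Toy illustration of credit sharing in a chain of converts.
# The conversion rate is a made-up placeholder.

def naive_converts(generations: int, converts_per_person: float) -> float:
    """Count every downstream convert at full credit (double counting)."""
    total, current = 0.0, 1.0  # current = size of the latest generation, starting with you
    for _ in range(generations):
        current *= converts_per_person
        total += current
    return total

def credit_shared_converts(generations: int, converts_per_person: float) -> float:
    """Split credit for generation-n converts equally among the n people upstream of them."""
    total, current = 0.0, 1.0
    for gen in range(1, generations + 1):
        current *= converts_per_person
        total += current / gen  # your share of that generation
    return total

# If everyone convinces 1 extra person on average, naive counting grows linearly
# with the number of generations, while the credit-shared total only grows like
# the harmonic series:
print(naive_converts(10, 1.0))          # 10.0
print(credit_shared_converts(10, 1.0))  # ~2.9
```

With a conversion rate meaningfully above 1 the credit-shared total does still grow roughly exponentially, which is why the argument might go through in the genuinely exponential-growth case but not otherwise.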
In the concrete case of ChatGPT, I expect the models to just continue getting better and the context to shift far too rapidly for slow movement growth like that to be very important (concretely, I think that a mass-movement boycott of these products is unlikely to be decisive for whether AI products like this are profitable).