I’m in favour of everyone donating to effective charities. Even according to deontological theories, I think donating and avoiding harm are two different responsibilities, and people who do harm still have responsibilities/opportunities to donate. Donating is an amazing thing to do regardless of what other actions a person might be undertaking.
Nonetheless, I’m also very much in favour of having true beliefs about things and taking moral uncertainty seriously. If something doesn’t seem right to me under a somewhat plausible theory, I’m going to say so even if I don’t believe in that theory myself. My language in the original comment is also appropriately hedged (“I suspect”, “it might be the case”).
I wouldn’t want to discourage anyone from donating anywhere. But I have uncertainties about offsetting, so I’m going to state them. I agree that one of the more important wrongdoings committed by consuming animal products is creating more demand.
But I’m not certain that, according to deontological theories, eating meat doesn’t wrong the eaten animal at all.
1. I’m not sure that the right to bodily integrity ends after death. It might be the case that desecrating the bodies of dead individuals wrongs them. I’m aware that claiming that dead people can be wronged brings in a lot of problems in moral theorising, but I can’t dismiss this claim entirely.
2. It seems very odd to me that if you hire an individual to kill X and X gets killed, you certainly wrong X; but if someone kills X in advance, expecting to get paid for it, and retroactively asks to be paid for killing X, paying them doesn’t wrong X.
And if eating meat wrongs the animal being eaten, then offsetting is not a Pareto improvement, so the case for “offsetting” becomes weaker.
To be honest, you can view these implications as weaknesses of deontological theories; I personally do.
None of this weakens the case for donating to effective charities either. Donating money to effective charities is robustly good according to many different moral theories.
I disagree with the following:
“very strong evidence against ‘the world in 100 years will look kind of similar to what it looks like today’.”
Growth is an important kind of change. Arguing against the possibility of some kind of extreme growth makes it more difficult to argue that the future will be very different. Let me frame it this way:
Scenario → Technological “progress” under the scenario

1. AI singularity → Extreme progress within this century
2. AI doom → Extreme “progress” within this century
3. Constant growth → Moderate progress within this century, very extreme progress in 8200 years (see the back-of-the-envelope sketch after this list)
4. Collapse through climate change, political instability, or war → Technological decline
5. Stagnation/slowdown → Weak progress within this century
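To make “very extreme” concrete, here is a minimal back-of-the-envelope sketch in Python. The 2% annual growth rate is my own illustrative assumption (a typical long-run figure in these discussions), not a number taken from the blog post:

```python
# Back-of-the-envelope: what constant compounding growth amounts to.
# The 2% annual rate is an illustrative assumption, not a figure from the post.
import math

rate = 0.02                          # assumed annual growth rate
century = (1 + rate) ** 100          # growth factor over one century
long_run = (1 + rate) ** 8200        # growth factor over 8200 years

print(f"100 years at 2%: ~{century:.1f}x")                    # ~7.2x
print(f"8200 years at 2%: ~10^{math.log10(long_run):.1f}x")   # ~10^70.5x
```

At that rate, a century multiplies the economy by only about 7, while 8200 years multiplies it by roughly 10^70, which is the sense in which constant growth is “moderate” on a century timescale but “very extreme” on longer ones.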
Most of the mainstream audience gives credence mainly to scenarios 3, 4, and 5. Scenario 3 is the one with the highest technological progress. The blog post is mostly spent refuting scenario 3 by explaining how difficult and rare growth and technological change are. This argument makes people give more credence to scenarios 4 and especially 5 rather than 1 and 2, since scenarios 1 and 2 also involve a lot of technological progress.
For these reasons, I’m more inclined to believe that an introductory blog post should focus more on assessing the possibility of scenarios 4 and 5 rather than scenario 3.
Arguing against scenario 3 is still important, as it is decision-relevant to the question of whether philanthropic resources should be spent now or later. But this topic doesn’t look like a good introductory blog post for AI risk.