I find it emotionally draining when heated topics become battlegrounds for social proofing through the mass use of agreement votes/karma. It makes me feel like people are trying to manipulate me by illegitimate means and that I’m a target of aggression. I don’t have any good solutions here, but I wanted to offer feedback on my experience.
Can you give an example of that? I’m not saying you’re wrong, but I’m not sure I can easily picture what falls into these categories. Please ignore this if it would drain you more.
You should be familiar with this from activism: people use “like”s and mass comments on social media to make bystanders more likely to believe an idea or adopt certain attitudes purely through the social proof effect. I get a similar vibe from discussions under heated posts.
Does requiring ex-ante Pareto superiority incentivise information suppression?
Assume I emit x kg of carbon dioxide. Later on, I donate to offset 2x kg of carbon dioxide emissions. The combination of these two actions seems to make everyone better off in expectation, i.e. it is ex-ante Pareto superior. We do know that my emitting and offsetting will alter weather patterns, and so will cause the deaths of different individuals in different extreme weather events than if I had done nothing at all. But climate scientists can only report that higher carbon emissions make climate change more severe overall. Since our forecasts are not granular enough to identify anyone who is made foreseeably worse off by a net reduction in emissions, reducing the total amount of emissions is morally permissible.
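To spell out the condition I’m appealing to (my own formalisation, writing $u_i$ for individual $i$’s welfare): an act $A$ is ex-ante Pareto superior to an act $B$ when
$$
\mathbb{E}[u_i(A)] \;\ge\; \mathbb{E}[u_i(B)] \quad \text{for every individual } i,
$$
with strict inequality for at least one $i$. Here $A$ is “emit $x$ kg and offset $2x$ kg” (a net reduction of $x$ kg) and $B$ is doing neither; under coarse forecasts, everyone’s expected climate risk is slightly lower under $A$, so the condition holds.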
This position seems to incentivise information suppression.
Assume a climate scientist creates a reliable and sophisticated climate model that can forecast the specific weather events caused by different levels of carbon emissions. Such a model would allow us to infer that reducing emissions by a specific amount would make a specific village in Argentina worse off. The villagers there could then complain to a politician: “Your offsetting/reduction policy foreseeably causes severe drought in my region, therefore it makes us foreseeably worse off.”
If ex-ante Pareto superiority were a sound condition for permissibility, policy makers who want to act permissibly would have an incentive to prevent such a detailed climate model from ever being built.
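A toy illustration of how extra information alone can flip the verdict (the quantities are made up for the sake of the example): split the world into the Argentine village $V$ and everyone else $W$, and compare the net-reduction policy $A$ with the status quo $B$.
$$
\text{Coarse model:}\quad \mathbb{E}[u_i(A)] - \mathbb{E}[u_i(B)] = +\epsilon > 0 \ \text{ for all } i \quad\Rightarrow\quad A \text{ is ex-ante Pareto superior.}
$$
$$
\text{Fine model:}\quad \mathbb{E}[u_V(A)] - \mathbb{E}[u_V(B)] = -\delta < 0 \quad\Rightarrow\quad A \text{ is no longer ex-ante Pareto superior,}
$$
even though the act and its actual consequences are unchanged; only our forecasts got sharper. That is where the incentive to keep the forecasts coarse comes from.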
Interesting!
Fleurbaey and Voorhoeve wrote a related paper: https://doi.org/10.1093/acprof:oso/9780199931392.003.0009
FWIW, GPT said the greenhouse effect is not stronger near the location of the emissions. So I would guess that if you offset and emit the same kind of greenhouse gas molecules roughly simultaneously, it would be very unlikely that we could predict which regions are made worse off by this compared to neither emitting nor offsetting.