I guess I worry it will be convincing to people with a different ethical framework to mine, and that I won’t be able to articulate an equally convincing objection?
A world in which consequentialists are able to convince a lot of people to accept the destruction of nature for welfare reasons is a pretty surprising world, given how much people like (and depend on) nature. In that hypothetical future, they must have come up with something really quite convincing.
That, or someone’s been and gone and unilaterally done something drastic, but we all agree that’s probably a bad idea.
To address the first point, it’s definitely not something I see as happening any time soon, and I’m much less concerned about the future of the field now that I’ve read the replies to my post.
But since you ask, I can only conceive of being convinced that any of my deeply held beliefs are wrong through appeal to an even more deeply held belief, and a lot of my beliefs (and interest in EA) rest on the idea that “Life is Worth Living”. At some point, surely there has to be something that isn’t up for debate?
As for why I’d be opposed to human extinction on principle and even against my better judgement, G. K. Chesterton’s Orthodoxy, Chapter 5 puts it best: “A man belongs to this world before he begins to ask if it is nice to belong to it. He has fought for the flag, and often won heroic victories for the flag long before he has ever enlisted. To put shortly what seems the essential matter, he has a loyalty long before he has any admiration.”
(This is the basis for his argument against optimism, jingoism, pessimism and suicide.)
Ah, I deleted the second half of my comment but you must have already been writing your response to it. It’s a bad habit of mine – my apologies for muddling the dialogue here.
I can only conceive of being convinced that any of my deeply held beliefs are wrong through appeal to an even more deeply held belief [...].
I think this is maybe the locus of our disagreement about how to think about statements like “regardless of how convincing the research is”.
To me it seems important to be able to conceive of being convinced of something even if you can’t currently think of any plausible way that would happen. Otherwise you’re not really imagining the scenario as stated, which leads to various issues. This is mostly just Cromwell’s rule as applied to philosophical/moral beliefs:
leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved.
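To make the green-cheese point concrete, here’s a minimal Python sketch (illustrative only, with made-up numbers) of Bayes’ rule in odds form: a prior of exactly zero is inert under any evidence at all, while a one-in-a-million prior can still be driven to near-certainty by strong enough evidence.

    # Purely illustrative: Bayes' rule with made-up numbers, showing why a prior of
    # exactly zero can never be updated, no matter how strong the evidence is.

    def update(prior: float, likelihood_ratio: float) -> float:
        """Posterior probability after seeing evidence with the given
        likelihood ratio P(evidence | hypothesis) / P(evidence | not hypothesis)."""
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    # Hypothetical likelihood ratio for "astronauts return with cheese samples".
    cheese_evidence = 1e9

    print(update(0.0, cheese_evidence))   # 0.0    -- a prior of exactly zero never moves
    print(update(1e-6, cheese_evidence))  # ~0.999 -- a one-in-a-million prior updates to near-certainty

(The specific numbers are made up; the point is only the structural difference between “very small” and “exactly zero”.)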
If I’m honest I don’t understand the Chesterton quote, so I’m not sure we’ll make much progress there right now.
In general I think it’s a mistake to put value on a species or an ecosystem instead of the beings within that species or ecosystem; but humanity is a more plausible exception to this than most.
It probably makes more sense in context, but the context is an entire book of Christian apologetics (sequel to a book on early 20th century philosophy called “Heretics”) so I doubt you have time for that right now.
I guess what I really meant was “regardless of how convincing it is to people other than me”. By definition, if I found something convincing it would change my mind; but in the hypothetical example it’s more a difference in values than a disagreement about facts.
I too think it makes the most sense to care about groups only as collections of individuals, but reasonable people could think the reverse is true.
It probably makes more sense in context, but the context is an entire book of Christian apologetics (sequel to a book on early 20th century philosophy called “Heretics”) so I doubt you have time for that right now.
Honestly, I wish people said something like this more often.
I guess what I really meant was “regardless of how convincing it is to people other than me”. By definition, if I found something convincing it would change my mind; but in the hypothetical example it’s more a difference in values than a disagreement about facts.
This feels like it’s teetering at the brink of a big moral-uncertainty rabbit hole, and I haven’t read that book yet, so I propose leaving this here for now. ☺