Hi, Violet Hour,

I read your post a couple of times, along with the appendices. You're interested in exploring informal logic as applied in EA thought, so I'll offer one of the biggest lessons I've learned about how informal logic gets applied here.
When evaluating longtermism's apparent commitment to a large future population in a few forum posts, I attempted to gather information about whether EAs think that:
the act of procreation is a moral act.
the continuation of the species is an ongoing moral event.
hypothetical future humans (or other hypothetical beings) deserve moral status.
I got the response that they believe in making happy people, as well as in making people happy.
Exploring this with some helpful folks on the forum, I learned about an argument built on the money-pump template for thought experiments.
1. There's a money-pump argument targeting the combination of:
1.1. a moral preference for making people happy;
1.2. moral indifference toward making happy people.
2. Keeping preferences 1.1 and 1.2 means I can be money-pumped.
3. Any set of preferences that can be money-pumped is irrational.
4. So my moral preferences 1.1 and 1.2 are irrational in combination.
5. Therefore, I should make my moral preferences rationally consistent.
Basically, anyone who thinks it's good to make people happy should also think it's good to make happy people.
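As I now understand it (and this is my own toy reconstruction, not something quoted from the thread, so I may be misstating the canonical version), the pump against the combination of 1.1 and 1.2 runs roughly like the sketch below: an agent who strictly prefers a created person to be happier, but is indifferent about whether that person is created at all, will accept a cycle of individually acceptable trades that leaves it strictly worse off.

```python
# Toy money pump against the combination of:
#   1.1 a strict preference for making a (created) person happier, and
#   1.2 indifference toward making happy people (creating them at all).
# A world is (money, extra_person_welfare); welfare None means "not created".

def prefers(w1, w2):
    """Return True if the agent strictly prefers world w1 to world w2."""
    money1, welfare1 = w1
    money2, welfare2 = w2
    # 1.2: creating vs. not creating the person is a matter of indifference,
    # so when only one world contains the extra person, money alone decides.
    if (welfare1 is None) != (welfare2 is None):
        return money1 > money2
    # 1.1: if the person exists in both worlds, higher welfare is better;
    # money breaks ties.
    if welfare1 is not None and welfare1 != welfare2:
        return welfare1 > welfare2
    return money1 > money2

def accepts(current, offered):
    """The agent accepts any trade it does not strictly disprefer."""
    return not prefers(current, offered)

# Start in world C: $100, plus an extra person created at high welfare (10).
world = (100, 10)
print("start:", world)

for step, offer in enumerate([
    (101, None),  # C -> A: un-create the person, gain $1 (indifferent + sweetener)
    (102, 1),     # A -> B: create the person at low welfare 1, gain $1
    (97, 10),     # B -> C: pay $5 to raise the person's welfare back to 10 (1.1)
], start=1):
    if accepts(world, offer):
        world = offer
        print(f"step {step}: accepted ->", world)

print("end:", world)  # same population, same welfare, $3 less money
print("sure loss:", prefers((100, 10), world))  # True: start beats where it ended up
```

Each trade looks fine to the agent at the moment it is offered, yet the cycle ends with the same population at the same welfare and less money, which is the sense in which the preference combination is said to be exploitable.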
I had never come across such an argument before; money-pumping was new to me. As I have explored it, I have increasingly come to see it as an argument from analogy. It works as follows:
EA has this list of money-pump thought experiments.
Suppose I hold point of view X.
There’s an analogous money-pump thought experiment.
By analogy, my point of view X is irrational.
I should change my point of view.
Well, there are three points of potential disagreement here:
Is the money-pump thought experiment analogous?
If it is analogous, does it prove my point of view is irrational?
If my point of view (an ethical one) is irrational, should I change it?
I doubt that the EA thought experiments are analogous to my ethical beliefs and preferences, which I think are in the minority in some respects. However, by exploring the answers to each of those questions, an insider could map the cluster of beliefs that EAs hold about rationality, mathematics, and ethics.
You claim to be a long-time EA, and I believe you. I don't know who you are, of course, but if you're a long-time member of this community of interesting people, then I expect you've got the connections to get answers to some casual questions.
I can't do the same. I don't have the time to build enough engagement or trust in this community for such an important project, nor to read through past forum posts closely enough to track the relevant lines of thought and use them as a proxy.
My interest in doing so would be different, but that’s another story.
You wrote:
Now, we know that if your uncertainty can’t be represented with a probability function, then you can be money pumped. There are proofs of this. Guaranteed losses are bad, thus, so the argument goes, you should behave so that your uncertainty can be represented with a probability function.
I followed the link and am browsing the book. Are you sure that link presents any arguments or proofs for why a probability function is necessary in the case of uncertainty?
I see content on cyclic preferences; expected value; decision trees; money-pump setups with optional souring and sweetening; backward and forward induction; indifference vs. preferential gaps; and various principles. It all goes to show that cyclic preferences have different implications than acyclic ones, and that cyclic preferences can be considered irrational (or at least mismatched to circumstances). The result is very interesting and useful for modeling some forms of irrationality, but it's not what I am looking for as I read.
I can't find any proof of a requirement for a probability distribution under uncertainty. Is there a specific page you can point me to in the Gustafsson reference, or should I use the alternate reference on arguments for probabilism?
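For what it's worth, the kind of argument I expected to find is the textbook Dutch-book construction for probabilism, roughly the toy version below (my own sketch, not anything taken from the book): if an agent's credences over mutually exclusive, exhaustive outcomes don't sum to 1, a bookie trading at the agent's own fair prices can lock in a sure gain against it.

```python
# Toy Dutch book against credences over a partition that don't sum to 1.
# The agent treats a credence p in outcome X as a fair price for a ticket
# that pays $1 if X occurs, so it will buy or sell that ticket for p.

def dutch_book(credences):
    """credences: dict mapping mutually exclusive, exhaustive outcomes to credences."""
    total = sum(credences.values())
    if total == 1:
        return None  # coherent in this respect: no sure-loss book of this form
    # If credences sum to more than 1, the bookie sells the agent one ticket
    # per outcome; if less than 1, the bookie buys one ticket per outcome.
    direction = "agent buys all tickets" if total > 1 else "agent sells all tickets"
    # Exactly one outcome occurs, so exactly one ticket pays out $1 either way.
    agent_net = (1 - total) if total > 1 else (total - 1)
    return direction, agent_net

# Example: credence 0.6 in rain and 0.6 in no rain (sums to 1.2).
print(dutch_book({"rain": 0.6, "no rain": 0.6}))
# -> a sure loss of about $0.20 for the agent, whichever outcome occurs.
```

This toy version only covers credences that fail to sum to 1 over a single partition, though, so I'd still appreciate a page reference for the fuller claim that uncertainty must be represented by a probability function.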
Once I tackle this topic and detail my conclusions in the criticism document I am still writing (about unweighted beliefs; I can't get anybody to actually read it, lol), I will pursue some of the other topics you raised.
Thanks so much!