You said “The rationalist community also wasn’t involved from the start”. I think this is false almost no matter how you slice it.
I’ve given a timeline to the contrary which you don’t seem to contradict, so I have little more to say here. If you think that ‘some rationalists were at some EA events’ implies that ‘Eliezer Yudkowsky’s post ~2 years later was somehow foundational to the EA movement’, then I don’t think we’re going to agree.
I don’t think “known causal decision theorist Sam Bankman-Fried committed multibillion-dollar fraud, therefore we should be less confident that causal decision theory is false” is a good argument.
I haven’t said anything to the effect that SBF’s behaviour should update us on decision theory, so please don’t put words in my mouth. I said that I would like to see you, as a prominent EA, show more epistemic humility.
Do you mean that FDT… E.g., LDT comes into play
I didn’t mention any decision theory except CDT, which I have not seen sufficient reason to reject based on the thought experiments you’ve cited. For example, I expect a real jeep driver in a real desert, with no knowledge of my history, to have no better than base-rate odds of guessing my intentions from whichever decision theory I’ve at some stage picked. I expect a real omnipotent entity with seemingly perfect knowledge of my actions to raise serious questions about personal identity, to which a reasonable answer is ‘I will one-box because it will cause future simulations like me to get more utility’. I don’t have the bandwidth to trawl through every argument and counterargue that its parameters are ill-defined, but that seems to be the unifying factor among them. If you think your views are provable, then don’t link me to multiple flowery thousand-word essays: just copy and paste the formal proof!
I initially misunderstood you as making a claim that early EAs were philosophically committed to “naive consequentialism” in the sense of “willingness to lie, steal, cheat, murder, etc. whenever the first-order effects of this seem to outweigh the costs”.
Your original comment was about how ‘consequentialism at the level of actions has worse consequences than consequentialism at the level of policies/dispositions’, which said nothing about lying, stealing, etc. It was presented as a counterpoint to Harris who, to my knowledge, does neither of those things with any regularity.
Toby Ord’s PhD thesis, which he completed while working on GWWC, was on ‘global consequentialism’, which explicitly endorses act-level reasoning if, on balance, it will lead to the best effect. His solicitation for people to do something actively beneficent rather than just be a satisficing citizen ran very much against the disinterested academic stylings of rule-consequentialist reasoning in practice. You can claim it was advocating a ‘policy or disposition of giving’, but if you’re going to use such language so broadly, you no longer seem to be disagreeing with the original claim that ‘if you’re critiquing consequentialism ethics on the basis that it led to bad consequences you’re seriously confused’.