Thanks for this interesting post. As I argued in the post that you cite and as George Bridgwater notes below, I don’t think you have identified a problem in the idea of counterfactual impact here, but have instead shown that you sometimes cannot aggregate counterfactual impact across agents. As you say, CounterfactualImpact(Agent) = Value(World with agent) - Value(World without agent).
Suppose Karen and Andrew have a one night stand which leads to Karen having a baby George (and Karen and Andrew otherwise have no effect on anything). In this case, Andrew’s counterfactual impact is:
Value (world with one night stand) - Value (world without one night stand)
The same is true for Karen. Thus, the counterfactual impact of each of them taken individually is an additional baby George. This doesn’t mean that the counterfactual impact of Andrew and Karen combined is two additional baby Georges. In fact, the counterfactual impact of Karen and Andrew combined is also given by:
Value (world with one night stand) - Value (world without one night stand)
Thus, the counterfactual impact of Karen and Andrew combined is an additional baby George. There is nothing in the definition of counterfactual impact which implies it can always be aggregated across agents.
This is the difference between “if me and Karen hadn’t existed, neither would George” and “If I hadn’t existed, neither would George, and if Karen hadn’t existed neither would George, therefore if me and Karen hadn’t existed, neither would two Georges.” This last statement is confused, because the baby referred to in each premise is the same George.
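The non-aggregation point can be made concrete with a minimal numerical sketch. The value function below is my own illustrative assumption (world value 1 exactly when both parents are present), applied to the definition of counterfactual impact above:

```python
def value(world):
    """Value of a world, given the set of agents it contains.
    Illustrative assumption: baby George (value 1) exists only
    if BOTH Karen and Andrew are in the world."""
    return 1 if {"Karen", "Andrew"} <= world else 0

def counterfactual_impact(agents, world):
    """Value(world with agents) - Value(world without agents)."""
    return value(world) - value(world - agents)

world = {"Karen", "Andrew"}
print(counterfactual_impact({"Andrew"}, world))           # 1
print(counterfactual_impact({"Karen"}, world))            # 1
print(counterfactual_impact({"Karen", "Andrew"}, world))  # 1, not 2
```

Each agent’s individual impact is one additional George, and so is the pair’s combined impact: the individual figures cannot simply be summed.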
I discuss other examples in the comments to Joey’s post.
The counterfactual understanding of impact is how almost all voting theorists analyse the expected value of voting. EAs tend to think that voting is sometimes altruistically rational because of the small chance of being the one pivotal voter and making a large counterfactual difference. On the Shapley value approach, the large counterfactual difference would be divided by the number of winning voters. Firstly, to my knowledge almost no-one in voting theory assesses the impact of voting in this way. Secondly, this would, I think, imply that voting is never rational, since in any large election the prospective pay-off of voting would be divided across the potential set of winning voters and so would be >100,000x smaller than on the counterfactual approach.
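The contrast can be sketched with a toy election (the model and numbers are my own assumptions, not from voting theory): a measure worth 1 passes only if all k winning voters turn out, so every voter is individually pivotal, yet the Shapley value splits the payoff among them:

```python
import math
from itertools import permutations

# Assumed toy model: the measure (value 1) passes only with all K votes.
K = 5
voters = list(range(K))
v = lambda coalition: 1.0 if len(coalition) >= K else 0.0

def shapley(players, v):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings of the players."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(coalition)
            coalition.add(p)
            totals[p] += v(coalition) - before
    return {p: t / math.factorial(len(players)) for p, t in totals.items()}

# Counterfactual impact of each voter: v(everyone) - v(everyone but them).
counterfactual = {p: v(set(voters)) - v(set(voters) - {p}) for p in voters}
print(counterfactual[0])      # 1.0
print(shapley(voters, v)[0])  # 0.2, i.e. 1/K: the payoff split among winners
```

On the counterfactual approach each voter’s impact is the full 1; on the Shapley approach it is 1/k, which is what drives the huge divergence between the two approaches in large elections.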
I disagree that 80k should transition towards a £3k retreat + no online content model, but it doesn’t seem worth getting into why here.
On premises, here is the top definition I have found from googling: “a previous statement or proposition from which another is inferred or follows as a conclusion”. This fits with my (and CFAR’s) characterisation of double cruxing. I think we’re agreed that the question is which premises you disagree on cause your disagreement. Given that, it is logically impossible for double cruxing to extend beyond this characterisation.
Yes, I don’t fully understand why they’re not legible. A 4-day workshop seems pretty well-placed for a carefully done impact evaluation.
On the biting the bullet answer, that doesn’t seem plausible to me. The preferences we have are a product of the beliefs we have about what will make our lives better over the long run. My preference not to smoke is entirely a product of the fact that I believe that smoking will increase my risk of premature death. Following the proponents of cluelessness, I could argue “maybe it will make me look cool to smoke, and that will increase my chances of getting a desirable partner” or something like that. In that sense, the sign of the effect of smoking on my own interests is not certain. Nevertheless, I think it is irrational to smoke. I don’t think a Parfitian understanding of identity would help here, because then my refusal to smoke would be altruistic—I would be helping out my future self.
The dodge the bullet answer is more plausible, and I may follow up with more later.
On the latter, yes that is a good point—there are general features at play here, so I retract my previous comment. However, it still seems true that your rational credal state will always depend to a very significant extent on the particular facts.
I find the use of the long-termist point of view a bit weird as applied to the AMF example. AMF is not usually justified from a long-termist point of view, so it is not really surprising that its benefits seem less obvious when you consider it from that point of view.
Here is a good paper on this - https://www.princeton.edu/~adame/papers/sharp/elga-subjective-probabilities-should-be-sharp.pdf
Thanks for this.
If the retreats are valuable, one would expect them to communicate genuinely useful concepts and ideas. Which ideas that CFAR teaches do you think are most useful?
On the payment model, imagine that instead of putting their material on choosing a high impact career online, 80k charged people £3000 to have 4 day coaching and networking retreats in a large mansion, afterwards giving them access to the relevant written material. I think this would shave off ~100% of the value of 80k. The differences between the two organisations don’t seem to me to be large enough to make a relevant difference to this analysis when applied to CFAR. Do you think there is a case for 80k to move towards the CFAR £3k retreat model?
On double cruxing, here is how CFAR defines double cruxing:
“Let’s say you have a belief, which we can label A (for instance, “middle school students should wear uniforms”), and that you’re in disagreement with someone who believes some form of ¬A. Double cruxing with that person means that you’re both in search of a second statement B, with the following properties:
1. You and your partner both disagree about B as well (you think B, your partner thinks ¬B)
2. The belief B is crucial for your belief in A; it is one of the cruxes of the argument. If it turned out that B was not true, that would be sufficient to make you think A was false, too.
3. The belief ¬B is crucial for your partner’s belief in ¬A, in a similar fashion.”
So, if I were to double crux with you, we would both establish which were the premises we disagree on that cause our disagreement. B is a premise in the argument for A. This is double cruxing, right?
“if you ask me “what are my premises for the belief that Nature is the most prestigious science journal?” then I definitely won’t have a nice list of premises I can respond with, but if you ask me “what would change my mind about Nature being the most prestigious science journal?” I might be able to give a reasonably good answer and start having a productive conversation”
Your answer could be expressed in the form of premises, right? Premises are just propositions that bear on the likelihood of the conclusion.
This is a great post, which I think will be useful for the community!
I’m interested in the recommendation of CFAR (though I appreciate it is not funded by the LTFF). What do you think are the top ideas regarding epistemics that CFAR has come up with that have helped EA/the world?
You mention double cruxing in the other post discussing CFAR. Rather than an innovation, isn’t this merely agreeing on which premise you disagree on? Similarly, isn’t murphyjitsu just the pre-mortem, which was defined by Kahneman more than a decade ago?
I also wonder why CFAR has to charge people for their advice. Why don’t they write down all of their insights and put it online for free?
I’m pretty sceptical of arguments for cluelessness. Some thoughts:
Knightian uncertainty never seems rational to me. There are strong arguments that credence functions should be sharp. Even if you can only bound your credences with very broad intervals, it seems like you would never be under Knightian uncertainty given your information—your credal state is always somewhere between 0 and 1, and surely your mean estimate will differ between different problems.
Similar arguments for complex cluelessness also seem to apply to my own decisions about what would be in my rational self-interest to do. Nevertheless, I will not be wandering blindly into the road outside my hotel room in 10 minutes.
I don’t see how you could make a general argument for cluelessness with respect to all decisions made by the community. You could make an argument that the sign of the expected benefits of EA actions is much more uncertain than has been acknowledged. I don’t see how this could ever generalise to an argument that all of our decisions are clueless, since the level of uncertainty will always be almost entirely dependent on the facts about the particular case. Why would uncertainty about the effects of AMF have any bearing on uncertainty about the effects of MIRI or the Clean Air Task Force?
Cluelessness seems to imply that altruists should be indifferent between all possible actions that they can take. Is this implication of the view embraced?
Related to the above, in the AMF vs Make-A-Wish Foundation example, I don’t actually agree that we are as uncertain as suggested. e.g. you list studies citing different effects of life saving on fertility, saying “Unfortunately, the studies just noted are of different kinds (cross-country comparisons, panel studies, quasi-experiments, large-sample micro-studies), with different strengths and weaknesses, making it difficult to draw firm conclusions”. This seems to be asking for the reaction “what are we to do in the face of all this methodological complexity?” But an economist would actually have an answer to this—cross-country comparisons with cross-sectional data are out of fashion, for example.
Overall, arguments about cluelessness seem to merely reassert that the world is complex and we should think carefully before acting. I don’t see how they point to some deep, permanent feature of our epistemic situation.
OK, cheers. I disagree with that, but feel we have reached the end of productive argument.
What do you make of my ‘offensive beliefs’ poll idea and questions?
There are two issues here. The less important one is - (1) are people’s beliefs that many of these opinions are taboo rational? I think not, and have discussed the reasons why above.
The more important one is (2) - this poll is a blunt instrument that encourages people to enter offensive opinions that threaten the reputation of the movement. If there were a way to do this with those opinions laundered out, then I wouldn’t have a problem.
This has been done in a very careless way, without due thought to the very obvious risks.
They have a section on ‘why do this?’ and don’t discuss any of the obvious risks, which suggests they haven’t thought properly about the issue. I think a good norm to propagate would be: people put a lot of thought into whether they should publish posts that could potentially damage the movement. Do you agree?
Suppose I am going to run a poll on ‘what’s the most offensive thing you believe—anonymous public poll for effective altruists’. (1) do you think I should have to publicly explain why I am doing this? (2) do you think I should run this poll and publish the results?
Thanks for this, this is useful (upvoted)
1. I think we disagree on the empirical facts here. EA seems to me unusually open to considering rational arguments for unfashionable positions. People in my experience lose points for bad arguments, not for weird conclusions. I’d be very perplexed if someone were not willing to discuss whether or not utilitarianism is false (or whether remote working is bad etc) in front of EAs, and would think someone was overcome by irrational fear if they declined to do so. Michael Plant believes one of the allegedly taboo opinions here (mental health should be a priority) and is currently on a speaking tour of EA events across the Far East.
2. This is a good point and updates me towards the usefulness of the survey, but I wonder whether there is a better way to achieve this that doesn’t carry such clear reputational risks for EA.
3. The issue is not whether my colleagues have sufficient publicly accessible reason to believe that EA is full of good people acting in good faith (which they do), but whether this survey weighs heavily or not in the evidence that they will actually consider. That is, this might lead them not to consider the rest of the evidence that EA is mostly full of good people working in good faith. I think there is a serious risk of that.
4. As mentioned elsewhere in the thread, I’m not saying that EA should embrace political level self-restraint. What I am saying is that there are sometimes reasons to self-censor holding forth on all of your opinions in public when you represent a community of people trying to achieve something important. The respondents to this poll implicitly agree with that given that they want to remain anonymous. For some of these statements, the reputational risk of airing them anonymously does not transfer from them to the EA movement as a whole. For other statements, the reputational risk does transfer from them to the community as a whole.
Do you think anyone in the community should ever self-censor for the sake of the reputation of the movement? Do you think scientists should ever self-censor their views?
This post actively encourages people to post their least acceptable views online, so seems bad by this argument.
Hi, you start with a straw man here—I’m not requesting that they write a whole essay, I’m just requesting that they put some thought into the potential downsides, rather than zero thought (as occurred here). As I understand your view, you think the person has no obligation to put any thought into whether publishing this post is a good idea or not. I have to say I find this an implausible and strange position.
I respect your view Oli, but I don’t think the person organising it put sufficient thought into the downsides of doing a poll such as this. They didn’t discuss any of the obvious risks in the ‘why this is a valuable exercise’ section.
The political analogy was an example; it was not meant to say that standard political constraints should apply to EA. The thought applies to any social movement, e.g. for people involved in environmentalism, radical exchange or libertarianism. If I were a libertarian and someone came to me saying “why don’t we run a poll of libertarians on opinions they are scared to air publicly and then publish those opinions online for the world to see”, I think it would be pretty obvious that this would be an extremely bad idea.