# Flodorner

Karma: 121
• “The alternative approach (which I argue is wrong) is to say that each of the n A voters is counterfactually responsible for 1/n of the \$10bn benefit. Suppose there are 10m A voters. Then each A voter’s counterfactual social impact is 1/10m * \$10bn = \$1000. But on this approach the common EA view that it is rational for individuals to vote as long as the probability of being decisive is not too small, is wrong. Suppose the ex ante chance of being decisive is 1/1m. Then the expected value of Emma voting is a mere 1/1m * \$1000 = \$0.001. On the correct approach, the expected value of Emma voting is 1/10m * \$10bn = \$1000. If voting takes 5 minutes, this is obviously a worthwhile investment for the benevolent voter, as per common EA wisdom.”

I am not sure whether anyone is arguing for discounting twice. The alternative approach using the Shapley value would divide the potential impact amongst the contributors, but would not additionally account for the probability. In this example, both approaches therefore seem to assign the same counterfactual impact.
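For concreteness, here is a small sketch of how the Shapley value splits a jointly produced benefit among symmetric contributors, with no extra probability discount on top of the division. The voting game and all numbers are made up for illustration:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average marginal contribution of each
    player over all orderings of the players."""
    totals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            totals[p] += value(coalition) - before
    return {p: t / len(perms) for p, t in totals.items()}

# Toy example: 5 identical voters, a benefit of 1000 is produced
# iff at least 3 of them vote (a simple majority game).
players = range(5)
benefit = 1000
v = lambda coalition: benefit if len(coalition) >= 3 else 0

print(shapley_values(players, v))  # each voter gets 1000/5 = 200
```

By symmetry each voter receives exactly benefit/n; the probability of the outcome does not enter a second time.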

More generally, it seems like most disagreements in this thread could be resolved by a more charitable interpretation of the other side (from both sides, as the validity of your argument against rohinmshah’s counterexample seems to show).

Right now, a comment from someone more proficient with the Shapley value arguing against

“Also consider the \$1bn benefits case outlined above. Suppose that the situation is as described above but my action costs \$2 and I take one billionth of the credit for the success of the project. In that case, the Shapely-adjusted benefits of my action would be \$1 and the costs \$2, so my action would not be worthwhile. I would therefore leave \$1bn of value on the table.”

might be helpful for a better understanding.

• I disagree. If we are fairly certain that the average intervention in Cause X is 10 times more effective than the average intervention in Cause Y (for comparison, 80,000 Hours currently believes that AI safety work is 1000 times as effective as global health), it seems like we should strongly prioritize Cause X. Even if there are some interventions in Cause Y which are more effective than the average intervention in Cause X, finding them is probably as costly as finding the most effective interventions in Cause X (unless there is a specific reason why evaluating cost-effectiveness in Cause X is especially costly, or the distributions of intervention effectiveness are radically different between the two causes). Depending on how much we can improve on our current comparative estimates of cause effectiveness, the potential impact of doing so could be quite high, since it essentially multiplies the effects of our lower-level prioritization. Therefore, high- to medium-level prioritization combined with low-level prioritization restricted to the best causes seems the way to go. On the other hand, it seems at least plausible that we cannot improve our high-level prioritization significantly at the moment and should therefore focus on the lower level within the most effective causes.

• I think it might be best to just report confidence intervals for your final estimates (Guesstimate should give you those). Then everyone can combine your estimates with their own priors on interventions’ effectiveness in general and thereby potentially correct for the high levels of uncertainty (at least in a crude way, by estimating the variance from the confidence intervals).

The variance of X can be defined as E[X^2]-E[X]^2, which should not be hard to implement in Guesstimate. However, I am not sure whether having the variance leads to more accurate updating than having a confidence interval. Ideally you’d have the full distribution, but I am not sure whether anyone will actually do the maths to update from there. (Though they could get it roughly from your Guesstimate model.)
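As a sketch of the crude approach mentioned above: if one assumes the estimate is roughly normal, the standard deviation (and hence the variance) can be backed out of a reported 90% confidence interval. The interval endpoints here are made up:

```python
# A 90% CI of a normal distribution spans about 2 * 1.645 standard
# deviations, so sigma can be recovered from the interval width.
Z90 = 1.645  # ~95th percentile of the standard normal

def variance_from_90ci(lower, upper):
    sigma = (upper - lower) / (2 * Z90)
    return sigma ** 2

# Hypothetical cost-effectiveness estimate with 90% CI [10, 50]:
var = variance_from_90ci(10, 50)
print(var)  # sigma ~ 12.2, variance ~ 148
```

This is only valid under the normality assumption; for the skewed distributions Guesstimate often produces, it is at best a rough approximation.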

I might comment more on some details and the moral assumptions, if I find the time for it soon.

• At this point, I think that to analyze the \$1bn case correctly, you’d have to subtract everyone’s opportunity cost in the calculation of the Shapley value (if you want to use it here). This way, the example should yield what we expect.

I might do a more general writeup about Shapley values, their advantages and disadvantages, and when it makes sense to use them, if I find the time to read a bit more about the topic first.

• The claim does not seem to be exactly that there is a 10% chance of an animal advocacy video affecting a given individual’s consumption decisions after 12 years.

I’d interpret it as: there is a 5% chance that the mean duration of reduction, conditional on a participant reporting that the video changed their behaviour, is higher than 12 years.

This could, for example, also be achieved by having a very long-term impact on very few participants. This interpretation seems a lot more plausible, although I am not at all certain whether the claim is correct. Long-term follow-up data would certainly be very helpful.

• Are you talking about the individual level, or the mean? My estimate would be that for the median individual, the effect will have faded out after at most 6 months. However, the mean might be influenced quite strongly by the tails.

Thinking about it for a bit longer, though, a mean effect of 12 years does seem quite implausible. In the limiting case where only the tails matter, this would be equivalent to convincing around 25% of the initially influenced students to stop eating pork for the rest of their lives.
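The limiting-case arithmetic can be made explicit; the remaining lifespan of 48 years is an assumption for illustration:

```python
# If only the tails matter: a fraction p of influenced students
# abstain for their remaining lifetime L, everyone else reverts
# immediately. The mean duration of reduction is then p * L.
remaining_years = 48   # assumed remaining years of consumption for a student
mean_duration = 12     # the claimed mean effect in years

p = mean_duration / remaining_years
print(p)  # 0.25, i.e. 25% would have to abstain for life
```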

The upper bound of my 90% confidence interval for the mean seems to be around 3 years, while the lower bound is at 3 months. The probability mass within the interval is mostly concentrated toward the left.

• I am not sure whether your usage of economies of scale already covers this, but it seems worth highlighting that what matters is the marginal difference the money makes for you and for your adversary. If doing evil is a lot more efficient at low scales (think of distributing highly addictive drugs among vulnerable populations vs. distributing malaria nets), your adversary could already be hitting diminishing returns while your marginal returns increase, and the lottery might still not be worth it.

• “to prove this argument I would have to present general information which may be regarded as having informational hazard”

Is there any way to assess the credibility of statements like this (or whether this is actually an argument worth considering in a given specific context)? It seems like you could use this as a general-purpose argument for almost anything.

• Are any ways of making content easier to filter (for example, tags) planned?

I am rather new to the community, and on multiple occasions I have randomly stumbled upon old articles I hadn’t read, concerned with topics I was interested in and had previously made an effort to find articles about. This seems rather inefficient.

• What exactly do you mean by utility here? The Quasi-negative utilitarian framework seems to correspond to a shift of everyone’s personal utility, such that the shifted utility for each person is 0 whenever this person’s life is neither worth living nor not worth living.

It seems to me that a reasonable notion of utility would have this property anyway (but I might just use the word differently than other people; please tell me if there is some widely used definition contradicting this!). This reframes the discussion into one about where the zero point of utility functions should lie, which seems easier to grasp. In particular, from this point of view Quasi-negative utilitarianism still gives rise to some form of the sadistic repugnant conclusion.

On a broader point, I suspect that the repugnance of repugnant conclusions usually stems from confusion/disagreement about what “a life worth living” means. However, as in your article, entertaining this conclusion still seems useful in order to sharpen our intuition about which lives are actually worth living.

• Interesting post!

I am quite interested in your other arguments for why EV calculations won’t work for Pascal’s mugging and why they might extend to x-risks. I would probably have preferred a post already including all the arguments for your case.

About the argument from hypothetical updates: my intuition is that if you assign a probability of a lot more than 0.1^10^10^10 to the mugger actually being able to follow through, this might create other problems (like probabilities of distinct events adding up to something higher than 1, or priors inconsistent with Occam’s razor). If that intuition (and your argument) were true (my intuition might very well be wrong and seems at least slightly influenced by motivated reasoning), one would basically have to conclude that Bayesian EV reasoning fails as soon as it involves combinations of extreme utilities and minuscule probabilities.

However, I don’t think the credences for being able to influence x-risks are so low that updating becomes impossible, and therefore your first argument does not convince me not to use EV to evaluate them. I’m quite eager to see the other arguments, though.

• Relatedly, the impromptu nature of some debating formats could also help with getting comfortable formulating answers to nontrivial questions under (time) pressure. Apart from being generally helpful, this might be especially valuable in some types of job interviews.

I’ve been considering investing some time into competitive debating, mostly in order to improve that skill, so if anyone has data (even anecdotal) on that, please share :)

• For clarification: (PITi+ui) is the “real” tractability and importance?

The text seems to make more sense that way, but reading “ui is the unknown (to you) importance and tractability of the cause.”, I at first interpreted ui as being the “real” tractability and importance instead of just a noise term.

• Comment moved.

• I think the argument is that additional information showing that a cause has high marginal impact might divert resources towards it and away from causes with less marginal impact. And getting this kind of information does seem more likely for causes without a track record that allows for a somewhat robust estimation of their (marginal) impact.

• I think that the assumption of the existence of a funnel-shaped distribution with undefined expected value over things we care about is quite a bit stronger than assuming that there are infinitely many possible outcomes.

But even if we restrict ourselves to distributions with finite expected value, our estimates can still fluctuate wildly until we have gathered huge amounts of evidence.

So while I am sceptical of the assumption that there exists a sequence of world states with utilities tending to infinity, and even more sceptical of extremely high/low-utility world states being reachable with sufficient probability for there to be undefined expected value (the absolute value of the utility of our action would have to have infinite expected value, and I am sceptical of believing this without something at least close to “infinite evidence”), I still think your post is quite valuable for starting a debate on how to deal with low-probability events, crucial considerations, and our decision making when expected values fluctuate a lot.

Also, even if my intuition about the impossibility of infinite utilities were true (I’m not exactly sure what that would actually mean, though), the problems you mentioned would still apply to anyone who does not share it.

• Very interesting!

In your literature review you summarize the Smith and Winkler (2006) paper as “Prove that nonrandom, non-Bayesian decision strategies systematically overestimate the value of the selected option.”

At first sight, this claim seems like it might be stronger than the claim I took away from the paper (which is similar to what you write later in the text): if your decision strategy is to just choose the option you (naively) expect to be best, you will systematically overestimate the value of the selected option.

If you think the first claim is implied by the second (or by something in the paper I missed) in some sense, I’d love to learn about your arguments!

“In fact, I believe that choosing the winning option does maximize expected value if all measurements are unbiased and their reliability doesn’t vary too much.”

I think you are basically right, but the number of available options also plays a role here. If you consider a lot of non-optimal options for which your measurements are only slightly noisier than for the best option, you can still systematically underselect the best option. (For example, simulations suggest that with 99 N(0, 1.1) variables and 1 N(0.1, 1) variable, the last one will be maximal among the 100 only around 0.7% of the time, despite having the highest expected value.)
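A quick Monte Carlo check of this kind of simulation can be sketched as follows (here N(m, s) is read as mean m and standard deviation s; the trial count and seed are arbitrary):

```python
import random

def prob_best_option_wins(trials=50_000, seed=0):
    """Estimate how often the option with the highest mean (0.1, sd 1)
    also has the highest measured value, against 99 competitors with
    mean 0 but slightly noisier measurements (sd 1.1)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        competitors = max(rng.gauss(0, 1.1) for _ in range(99))
        best = rng.gauss(0.1, 1.0)
        if best > competitors:
            wins += 1
    return wins / trials

print(prob_best_option_wins())  # well below the 1% a uniform random pick would give
```

The intuition: the maximum of 99 draws with sd 1.1 sits around 2.5-3, which the best option (mean 0.1, sd 1) rarely exceeds.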

In this case, randomly taking one option would in fact have a higher expected value. (But it still seems very unclear how one would identify similar situations in reality, even if they existed.)

Some combination of moderately varying noise and lots of options seems like the most plausible condition under which not taking the winning option might be better for some real-world decisions.

• Yes, exactly. When first reading your summary, I interpreted it as the “for all” claim.

• Thanks for writing this!

I think you are pointing out some important imprecisions, but some of your arguments don’t seem as conclusive as you present them to be:

“Bostrom therefore faces a dilemma. If intelligence is a mix of a wide range of distinct abilities as in Intelligence(1), there is no reason to think it can be ‘increased’ in the rapidly self-reinforcing way Bostrom speaks about (in mathematical terms, there is no single variable which we can differentiate and plug into the differential equation, as Bostrom does in his example on pages 75-76). ”

Those variables could be reinforcing each other, as one could argue they did in the evolution of human intelligence (in mathematical terms, there is a runaway dynamic similar to the one-dimensional case for a linear vector-valued differential equation, as long as all eigenvalues are positive).
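A minimal numerical sketch of that point (the coupling matrix below is an arbitrary example with positive eigenvalues, not a model of anything real):

```python
# Euler integration of x' = A x for a 2x2 coupling matrix whose
# eigenvalues (0.15 and 0.05) are both positive: every component
# grows without bound, mirroring the one-dimensional runaway case.
A = [[0.10, 0.05],
     [0.05, 0.10]]
x = [1.0, 1.0]  # initial "ability levels" for two mutually reinforcing abilities
h, steps = 0.01, 10_000  # integrate up to t = 100

for _ in range(steps):
    dx = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    x = [x[i] + h * dx[i] for i in range(2)]

print(x)  # both components have blown up, roughly like e^(0.15 * 100)
```

With this start vector the dynamics follow the dominant eigendirection exactly, so both abilities grow at the same exponential rate.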

“This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tilling the universe with paperclips.”

Why does it seem unlikely? Also, do you mean unlikely as in “agents emerging in a world similar to ours probably won’t have this property”, or as in “given that someone figured out how to construct a great variety of superintelligent agents, she would still have trouble constructing an agent with this property”?