Hi — I’m Alex! I run the 80k headhunting service, which provides organisations with lists of promising candidates for their open roles.
You can give me (anonymous) feedback here: admonymous.co/alex_ht
Do you think any of these things have positive compounding effects or avoid lock-in:
- Investing to donate later,
- Narrow x-risk reduction,
- Building the EA community?
Thanks for this. (I should say I don’t completely understand it.) My intuitions are much more sympathetic to additivity over prioritarianism, but I see where you’re coming from, and it does help to answer my question (and updates me a bit).
I wonder if you’ve seen this. I didn’t take the time to understand it fully, but it looks like the kind of thing you might be interested in. (Also curious to hear whether you agree with the conclusions.)
Nice, thanks for the explanation of your reasoning.
The example I gave there was the same as for simple cluelessness, but it needn’t be. (I like this example because it shows that, even for simple cluelessness, there isn’t wash-out.) For example, if we imagine some version of complex cluelessness, we can see that the ‘ripples on a pond’ objection doesn’t seem to work. Eg. increased economic growth —> increased carbon emissions —> increased climate change —> migration problems and resource struggles —> great power conflict, etc. As time goes on, the world where the extra economic growth happened will look more and more different from the world where it didn’t happen. Does that seem true?
I agree that we don’t know how to predict a bunch of these long-term effects, and this only gets worse as the timescales get longer. But why does that mean we can ignore them? Aren’t we interested in doing the things with the best effects (all the effects)? Does it matter whether we can predict the effects at the moment? Like, does GiveWell doing an analysis of AMF mean that there are now better effects from donating to AMF? That doesn’t seem right to me. It does seem more reasonable to donate after the analysis (more subjectively choice-worthy, or something like that). But the effects aren’t better, right? Similarly, if there are unpredictable long-term effects, why does it matter (morally*) that the effects are unpredictable?
With regard to that EV calculation, I think that might be assuming you have precise credences. If we’re uncertain in our EV estimates, don’t we need to use imprecise credences? Then we’d have a bunch of different terms like
(EV under model n) × (credence in model n).
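(To sketch what I mean, using labels I’m making up here: call the candidate models M_1, …, M_k and write V for value. With precise credences everything collapses into one weighted sum,

\[
\mathbb{E}[V] \;=\; \sum_{n=1}^{k} P(M_n)\,\mathbb{E}[V \mid M_n],
\]

whereas with imprecise credences each P(M_n) is only pinned down to a range, so we’re left with a set of such sums rather than a single number.)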
*or under whatever value system is motivating you/me, eg subjective preferences
Thanks for the answer, Saulius, and I agree the hotel analogy is pretty different from the reality! So do you think the long-term effects don’t dominate? Or that we can’t say what they are, because they depend on other people’s unpredictable behaviour in a way that near-term effects don’t?
And I think you’re also saying that, at any given time, we have a special opportunity to influence that time. Is that because we have more evidence about present effects, or because there’s something special about direct rather than indirect effects? I’m confused because it seems like, while we do have a special opportunity to influence the present because we’re here now, we also have a special opportunity to influence the future because we’re here now. Eg. by doing anything that has positive compounding effects, or avoids lock-in of a bad state.
I’m thinking of doing (1). Is there a particular way you think this should look?
How technical do you think the summary should be? The thing that would be easiest for me to write would require some maths understanding (eg. basic calculus and limits) but no economics understanding. Eg. about as technical as your summary, but with more maths and less philosophy.
Also, do you have thoughts on length? Eg. do you think a five-page summary is substantially more accessible than the paper, or would the summary have to be much shorter than that?
(I’m also interested in what others would find useful)
Is the idea that most of the opportunities to do good will come soon (say in the next 100-200 years)? Eg. because we expect less poverty, fewer factory farms, etc.? Or because the AI is gonna come and make us all happy, so we should just make the bit before that good?
Distinct from that seems ‘making us get to that point faster’ (I’m imagining this could mean things like increasing growth/creating friendly AI/spreading good values) - that seems very much like looking to long-term effects.