Yeah I just couldn’t understand his comment until I realised that he’d misunderstood the OP as saying it should be a big movement, rather than that it should be a movement with diverse views that doesn’t deter great people who hold different views. So I was looking for an explanation and that’s what my brain came up with.
Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit at the top of your comment mentioning what your misunderstanding was; I think it’d help with interpreting it.
So you agree 3 is clearly false. I thought that you thought it was near enough true not to worry about the possibility of being very wrong on a number of things. Good to have cleared that up.

I imagine then that our central disagreement lies more in what it looks like once you collapse all that uncertainty onto your unidimensional EV scale. Maybe you think it looks less diverse (on many dimensions) overall than I do. That’s my best guess at our disagreement: that we just have different priors on how much diversity is the right amount for maximising impact overall. Or maybe we have no core disagreement. As an aside, I tend to find it mostly not useful as an exercise to do that collapsing at such an aggregate level, but maybe I just don’t do enough macro analysis, or I’m just not that maximising.
BTW, on the areas where you think we agree: I strongly disagree with commitment to EA as a sign of how likely someone is to make an impact. It probably does better than the base rate in the global population, sure, but here we are discussing the marginal set of people who would or wouldn’t be deterred from using EA as one of their inputs in making an impact, depending on whether you take a big tent approach. I’m personally quite careful not to confuse ‘EA’ with ‘having impact’ (not saying you did this, I’m just pretty wary about it and thus sensitive), and I do worry about people selecting for ‘EA alignment’; it really turns me off EA because it’s a strong sign of groupthink and bad epistemic culture.
Yeah maybe. Sorry if you found it unhelpful, I could have been clearer. I find your decomposition interesting. I was most strongly gesturing at the third.
Correct me if I’m wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer, known with certainty, to the question ‘is X or Y more impactful’?
I got this impression from what I understood your main point to be, something like:
- There is a tail of talented people who will make the most impact.
- Any diversion of resources towards less talented people will be lower expected value.

I think there are several assumptions in both of these points that I want to unpack (and disagree with).
On the question of whether there is a unidimensional scale of talented people who will make the most impact: I believe that the EA movement could be wrong about the problems it thinks are most important, and/or about the approaches to solving them. In the world where we are wrong, if we deter many groups with important skillsets or approaches whose importance we didn’t realise because we were overconfident in some problems/solutions, then that’s quite bad. Conversely, in the world where we are right, yes, maybe we have invested in more places than turned out to be necessary, but the downside risks seem smaller overall (depending on constraints, which I’ll get to in the next para). You could argue that talent correlates across all skillsets and approaches, and maybe there’s some truth to that, but I think there are lots of places where the tails come apart, and I worry that not taking worldview diversification seriously can lead to many failure modes for a movement like EA. If you are quite certain that the EA top cause areas as listed on 80k are right about which problems are ‘most’ important and the ‘best’ approaches to solving them (this second one I am extremely uncertain about), you may reasonably disagree with me here; is that the case? In my view, these superlatives and this collapsing of dimensions require a lot of certainty about some baseline assumptions.
On the question of whether resource diversion from talented people to less ‘talented’ people is lower expected value: I think this depends on lots of things (sidestepping the question of how talent is defined, which the paragraph above addresses). Firstly, are the resources substitutable? In the example you gave with university groups, I’d say no: if you fund a non-top university group, you are not detracting from top university group funding (assuming no shortage of monetary funding, which I believe we can assume). However, if you meant that the resource is the time of a grantmaker specialised in community building, and it is harder for them to evaluate a non-top uni than a top one because maybe they know fewer people there etc., then I’d say that resource is substitutable. The question of substitutability matters for identifying whether there is a real cost, but it also opens a question of resource constraints and causality. Imagine a world where that time-constrained grantmaker decides not to take the easy decision but to bear the short-term cost and invest in getting to know the new non-top uni: it is possible that the ROI is higher, because of returns to early-stage scaling being higher, plus new value of information. We could also imagine a different causality: if grantmaking itself were less centralised (which a bigger tent might lead to), some grantmakers might cater to non-top unis, and others to top unis, and we’d be able to see outcomes from both. So overall I think this point of yours is far from clearly true, and a bigger tent would give more value of information.

There were some points you made that I do agree with you on. In particular: celebration disproportionate to the impact feeling fake, adding false uncertainty to avoid coming across as dogmatic (although I think there is a middle way here), and real trade-offs between axes of desirable communication qualities. Another thing I noticed that I like is a care for epistemic quality and rigour, and wanting to protect those cultural aspects. It’s not obvious to me why that would need to be sacrificed to have a bigger tent, but maybe we have different ideas of what a bigger tent looks like.
(Also, I did a quick reversal test in my head of the actions in the OP, as mentioned in the applause lights post you linked to, and the vast majority do not stand up as applause lights in my opinion, in that I’d bet you’d find the opposite point of view being genuinely argued for around this forum or LW somewhere.)
I guess some scientific fields have pretty good evidence behind them and are hard to believe are extremely wrong (e.g. physics), given how much that is built on them works so well today, and then there are other scientific/medical areas that look scientific/medical without having the same robust evidence base. I’d like to read a short overview meta-analysis, with some history of each field that claims (and is widely believed) to be scientific/medical, discussion of some of its core ideas, and an evaluation of how sure we are that it is good and real in the way that a lot of physics is. I don’t want to name particular other scientific/medical areas as a contrast, but I do have at least one prominently in my mind.
BC is of the past, CB is of the future! We are definitely progressing, right, right alphabet?
Really? I thought it stood for Easy Answers
Mmm, I sense a short life thus far. I posit that the shorter the life thus far, the more likely you are to feel this way. How high impact! Think of all the impact we can make on the impactable ones!
I enjoyed this comment
Some things I like about this post:
- I like the topic; I am interested in failure, and places where failure and mistake-making are discussed openly feel more growthy.
- I liked that you gave lots of examples.
Some things I didn’t like about this post:
- Sometimes I couldn’t see the full connections you were making, or I could but had to leap to them based on my own preconceptions; maybe they could be explained more? For example, you gave a stronger community as a benefit, but you didn’t explain the mechanism by which discussing mistakes openly leads to a stronger community. I don’t think the Howie podcast supports the point: a lot of people liked the podcast, but how is that indicative of a stronger community exactly?
Things I disagree with in this post:
- I don’t think the Opportunity Cost point was well argued. In particular, you discussed transparency in general, with examples of publishing annual reports and so on, which take a lot of time. However, this post is about being transparent about mistakes and failure, not transparency in general. I think the opportunity cost is much lower for just publishing big mistakes, even though it takes some time to word things properly, and then there is the stress of it. But you can simply choose not to look at reactions on social media, just as people can choose not to engage in lengthy threads about it.
- I think your Reputational Cost point would sit better on the other side, as some of the reasons you give would put it there. Also, I think this is somewhat of a normative cultural question rather than one about facts in the world. If my reputation in an area would be destroyed by publishing a mistake, either that is a good thing, or the person judging is undervaluing the growth/learning part and overvaluing a fixed view of people. I basically don’t think someone who would incorrectly judge me negatively for publishing a mistake is someone whose opinion is worth me caring about. Again, this is normative, not a fact about reality; it’s about what kind of culture we want to create.
- Similar arguments to the Reputational Cost ones apply to the Harming Discourse point: this is a normative culture question, and we get to choose how we respond and whether we reward or disincentivise it! I would put it not as a risk/downside but in another category called cultural equilibrium or something, along with the reputation point.
- I don’t think the Career Risk point is different to the Reputational Cost point in any meaningful way. You can also take more ownership as an organisation rather than an individual, where appropriate.
I recognise that the things I disagree with are all in the downsides/risks section, and that is because I am biased and uninterested in critiquing the other side. I feel somewhat entitled to do this because I’m under the impression that you added that section after feedback, to make the post more balanced. So it’s partly that I’m being mischievous and unfair (you made this easier), and partly that I don’t want to feel pressure to give a balanced comment myself, and want to protest against feeling constrained in that way.
Thanks for sharing your motivations! Personally, I would have liked to read your original post, even if it was more one-sided, and to get the other side elsewhere. Being helped with heuristics for making decisions is not really what I was looking for in this post; it feels paternalistic and contrived to me, and I’d enjoy you advocating earnestly for more of something you think is good.
I found this valuable, thank you.
I’m reading this book now and finding it very good! I’m surprised, because most books in this genre I’ve tried lately have been really bad and I couldn’t bear to continue reading them. This one is fun because, in addition to being practical, it takes you on a fast tour of just the essentials from the tech tree we’ve developed over human history, without the long, winding, arbitrary delays we had historically, which is interesting as well as potentially useful. It makes me think cyberpunk is more likely in some ways (for some types of disaster) than I’d realised. It should be said that the book assumes technology is somewhat intact but that most (though not all) of the population is gone.
For situations where it may happen but it’s unclear, and you want to temporarily go somewhere in a reversible way, places in your timezone might be worth considering if you want to continue working (and can work remotely).
Thank you
Do you know if there is something similar in Romania by any chance?
From experience, heavily prioritising my partner over me is bad for my self-esteem and my mood, makes my partner feel guilty, and leads to resentment and conflict.
It feels painful and inefficient to spend time that could be converted into high impact work on chores that anyone could do.
To me one of these feels more essential than the other. If the former is compromised, you are less likely to be in a good position to make impact or support someone who is making impact. Whereas if the latter is, it may affect the former somewhat but probably in a much more limited way as it’s only one component of what supports it (hopefully).
I think liberalism vs realism is an interesting lens but the conclusion doesn’t seem right to me. You say you’re working backwards from a theory of victory, but at least that argument was working backwards from a theory of catastrophe. I think this is an is-ought problem, and if we want things to go well then we might want to actively encourage more cooperative IR, whilst also not ignoring the powerful forces.
If there aren’t significant competition issues caused by any of those acquisitions...
I’ve wondered in the past whether it’s like dropout in a neural network. (I’ve never looked into this and know nothing about it.)
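For readers unfamiliar with the reference: dropout randomly silences a fraction of a network’s units on each training pass, so the network can’t over-rely on any single unit. A minimal sketch in Python/NumPy of the standard ‘inverted dropout’ variant (my own illustration; the function name and parameters are purely for exposition, not from the comment’s context):

```python
import numpy as np

def dropout(activations: np.ndarray, p: float = 0.5, training: bool = True) -> np.ndarray:
    """Inverted dropout: zero each unit with probability p during training.

    Survivors are scaled by 1/(1-p) so the expected activation is unchanged,
    which means no adjustment is needed at inference time.
    """
    if not training or p == 0.0:
        return activations
    # Boolean keep-mask, already scaled: kept units become 1/(1-p), dropped units 0.
    mask = (np.random.rand(*activations.shape) >= p) / (1.0 - p)
    return activations * mask

# Each call silences a different random subset of units.
layer_output = np.ones(8)
print(dropout(layer_output, p=0.5))  # e.g. [2. 0. 2. 0. 0. 2. 2. 0.]
```

The loose analogy, if I’m reading the comment right, is that temporarily losing random components can force the remaining ones to become more robust, rather than everything depending on a few critical pieces.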