Kurzgesagt communicates some complex ideas using visualisations and reframing, which are also quite effective and which we could possibly learn from. Their video on time is a good example of this.
How do you say…?
Some words of caution, which I want to keep brief, to (ideally) set someone up to take them down with a steel-man.
The tl;dr version is that Twitter excels at meming misinformed, outraged takes on nuanced things.
First off, EA, and long-termism in particular, has some vocal detractors who do not seem to follow the same norms as most people on the EAF.
Second, Twitter is a forum that people who dislike an event / idea can easily weaponise to discredit both the thing and the poster, sometimes through deliberate misinterpretation. So it’s plausible that long-termist posts on Twitter, if not steel-manned rigorously beforehand, would be vulnerable to this. For example, any post not triple-checked could be retweeted with a misinterpreting comment arguing that long-termism is a bad ideology, provoking a negative meme-and-outrage cascade / pile-on.
Third, even with excellent codes of conduct in place (and I agree with disseminating the EAF CoC more widely where possible), an actor who wants to misinterpret something can and will. There is a fairly substantial risk that, should this happen, it would skew the discourse on long-termism outside EA for quite some time, and it may prove very challenging to reset this.
The above are some hot takes, which I genuinely thought about *not* posting because I haven’t had time to mull over them much, but decided it was better to post than not.
Also, I genuinely hope I’m wrong (especially because I hate being the Helen Lovejoy “won’t someone please think of the (future) children?!” voice!). I think it would be helpful for someone to give some arguments against these points or propose some potential mitigations, maybe ones seen in other Twitter communities?
I feel like an easy way to get lots of upvotes is to make lots of vague critical comments about how EA isn’t intellectually rigorous enough, or inclusive enough, or whatever. This makes me feel less enthusiastic about engaging with the EA Forum, because it makes me feel like everything I’m saying is being read by a jeering crowd who just want excuses to call me a moron.
Could you unpack this a bit? Is it the original poster who makes you feel that there’s a jeering crowd, or the people upvoting the OP?
As counterbalance...
Writing, and sharing your writing, is often how you come to know your own thoughts. I often recognise the kernel of truth someone is getting at before they’ve articulated it well, both in written posts and verbally. I’d rather encourage someone for getting at something, even if the articulation was lacking, and then guide them to do better. I’d especially prefer to do this given I know personally that it’s difficult to make time to perfect a post whilst juggling a job and other commitments.
This is even more the case when it’s on a topic that hasn’t been explored much, such as biases in thinking common to EAs, or diversity issues. I accept that in liberal circles being critical on the basis of diversity and inclusion or cognitive biases is a good signalling win, and you might think it would follow suit in EA. But I’m reminded of what Will MacAskill said about 8 months ago on an 80k podcast: that he lay awake thinking his reputation would be in tatters after posting on the EA Forum, that his post would be torn to shreds (it didn’t happen). For quite some time I was surprised at the diversity elephant in the room in EA, and welcomed it when these critiques came forward. But I was in the room and not pointing out the elephant for a long time because I, like Will, had fears about being torn to shreds for putting myself out there, and I don’t think this is unusual.
I also think that criticisms of underlying trends in groups are really difficult to get at in a substantive way, and though they often come across as put-downs from someone who wants to feel bigger, it is not always clear whether that’s due to authorial intent or the reader’s perception. I still think there’s something that can be taken from them, though. I remember a scathing article about yuppies who listen to NPR to feel educated and part of the world for signalling purposes. It was very mean-spirited, but it definitely gave me food for thought on my media consumption and what I am (not) achieving from it. I think a healthy attitude for a community is a willingness to find usefulness in seemingly threatening criticism. As all groups are vulnerable to the effects of polarisation and fractiousness, this attitude could be a good protective element.
So in summary, even if someone could have done a better job of articulating their ‘vague critical comments’, I think it’s good to encourage the start of a conversation on a topic which is not easy to bring up or articulate, but is important. So I would say go ahead and upvote that criticism whilst giving feedback on ways to improve it. If that person hasn’t nailed it, they’ve at least started the conversation, and maybe someone else will deliver the argument better. And I think there is a role for us as a community to be curious and open to ‘vague critical comments’ and find the important message; that will prove more useful than the alternative of shunning them.
Really excited based on what I’ve read above. Some very hot takes before I go and read the detail...
It would be great to see, in due course, some case studies of people who applied this kind of thinking: what choices they made and what they learned; particularly to highlight other high-impact careers which don’t align with priority paths. And it’s easier to make sense of how to use a technique, and how much relative effort to give each step, when you have other cases to refer to.
Finally, I hope this career planning process will help the community reframe what it means to have an ‘effective altruist career’. Effective altruism is focused on outcomes, and for good reasons; but focusing a lot on outcomes can have some bad side effects.
This is very welcome. Drilling into another bad side effect of focusing on outcomes, I would be curious to see whether this approach can help readers make career decisions which are more compatible with a happy career / life. I suspect we EAs can be prone to a relentless focus on impact in the abstract, absolute sense, to the detriment of thinking about what makes me personally impactful; and the latter is more likely to be where an individual will get results from placing their energy. And burnout is well worth avoiding because it’s a bloody pox and not fun!
I petition all further posts written by Richard Ngo on this forum to be titled “Ngo ngows best”.
Or revoke his membership.
I welcome the counter-arguments on this, but I think the writer makes a fair point about protecting current institutions and systems which are weakening due to political changes / pressure / defunding. It isn’t ideal when countries withdraw funding from the WHO; and arguably, if institution X were less reliant on funding from nation states, it would likely be less beholden to them politically. More beholden to philanthropists, though, so here comes the private actors vs. states as funders debate again, which I’m not going to put forward a solution to now, as much as say “it’s a debate alright”.
These institutions aren’t perfect by any means (the masks debacle at the WHO being a case in point), but one question is: if it didn’t exist as a mechanism for near- and long-term health protection, would we suggest it should be founded? The answer is likely yes; so if these institutions are under-resourced, why not consider funding them?

A more controversial perspective: the message going round now is “we have lots of money, we just want to keep the bar high for what we do with it; ergo be ambitious”. So I think it’s fair enough to say: maybe the health protection / poverty alleviation systems that keep the world going in the right direction are fit for increased funding in the absence of more ambitious and fitting ideas being put forward…
I guess I’m saying what’s the appropriate default? Very high bar for innovative long-term ideas seems reasonable because this is an emerging field with high uncertainty. But lower bar for ways in which the world is on fire now, and where important institutions could get worse / lead to worse outcomes if defunding / underfunding continues?
I’m replying quickly to this, as my questions closely align with the above and this saves the authors two responses; admittedly, I haven’t read this in full yet.
Next, we conducted research and developed 3-5-page profiles on 41 institutions. Each profile covered the institution’s organizational structure, expected or hypothetical impact on people’s lives in both typical and extreme scenarios, future trajectory, and capacity for change.
Can you explain more about ‘capacity for change’ and what exactly that entailed in the write-ups? I ask because, looking at the final top institutions and reading their descriptions, it feels like the main leverage is driven by ‘expected or hypothetical impact on people’s lives in both typical and extreme scenarios’, and less by ‘capacity for change’.
It seems to be taken as a given that EAs working in one of these institutions (e.g. DeepMind), or concrete channels of influence (e.g. think tanks advising the CCP Politburo), constitute ‘capacity for change’ within the organisation, but I would argue that capacity for change is in fact driven by a plethora of factors internal and external to the organisation. External might be market forces driving an organisation’s dominance or threatening its decline (e.g. Amazon); internal might be forces like culture and systems (e.g. Facebook / Meta’s resistance to employee action). In fact, the latter example really challenges why that organisation would be in the top institutions if ‘capacity for change’ has been well developed.
For such a powerful institution, the Executive Office of the President is capable of shifting both its structure and priorities with unusual ease. Every US President comes into the office with wide discretion over how to set their agenda and select their closest advisors. Since these appointments are typically network-driven, positioning oneself to be either selected as or asked to recommend a senior advisor in the administration can be a very high-impact career track.
Equally, when it comes to capacity for change, this is a point both in favour and against, as such structure and priorities are by definition not robust / are easily changed by the next administration.
Basically, it’s really hard to get a sense from the write-up above of whether the analysis captured these broader concerns. If it didn’t, I would hope this would be a next step in the analysis, as it would be hugely useful and would add a great deal more novel insight, both from a research perspective and in terms of taking action.
Also curious about how heavily this is weighted towards AI institutions; and I work in the field of AI governance, so I’m not a sceptic. Does this potentially tell us anything about the methodology chosen, or the experts enlisted?
EDIT: additional point around the Executive Office of the President of the US
I will be completely honest and share that I downvoted this response, as I personally felt it was defensive rather than engaging with the critiques, and didn’t address the specific points that were asked (for example, capacity for change). That said, I recognise I’m potentially coming late to the party in sharing my critiques of the approach / method, and in that sense I feel bad about sharing them now. But usually authors are ultimately open to this input, and I suspect this group is no different :)
A few further points:
I understand the premise of “our unit of analysis was the institutions themselves, so we could focus in on the most likely to be ‘high leverage’ to then gain the contextual understanding required to make a difference”. I would not be surprised if the next step proves less fruitful than expected for a number of reasons, such as:
it may be difficult to gain access to the ‘inner rings’ to ascertain this knowledge of how to make an impact
the ‘capacity for change’ / ‘neglectedness, tractability’ turns out to be a significantly lower leverage point within those institutions, which potentially reinforces the point we might have made a reasonable guess at: that impact / scale can be inversely correlated with flexibility / capacity for change / tractability / etc
I get a sense from having had a brief look at the methodology that insider knowledge of making change in these organisations could have been woven in earlier, either by talking to EAs / EA-aligned types working within government, big tech companies or whatever else. This would have been useful for deciding what the unit of analysis should be, or just for sense-checking ‘will what we produce be useful?’
If this was part of the methodology, my apologies: it’s on me for skim-reading.
I’m a bit concerned by the choice to build a model for this, given that, as you say, this work is highly contextual and we don’t have most of this context. My main concerns are something like...:
quant models are useful where there are known and quantifiable distinguishers between different entities (see the sketch after this list), and where you have good reason to think you can:
weight the importance of those distinguishers accordingly
change the weights of those distinguishers as new information comes in
but, as Ian says, ‘capacity for change’ is highly contextual, and is a critical factor in deciding which organisations should be prioritised
however, the piece above reads like ‘capacity for change’ was factored into the model. If so, how? And why now, when there’s so little info on it?
just from a time / resource perspective, models cost a lot, and are sometimes significantly less efficient than a qualitative estimate, especially where things are highly contextual; so I’m keen to learn more about what drove this
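To make that concern concrete, here’s a minimal sketch, in Python, of the kind of weighted scoring such a model implies. Everything here is hypothetical: the distinguishers, weights and scores are my invention for illustration, not the authors’ actual model.

```python
# Hypothetical weights on the "distinguishers" -- justifying these up front
# is exactly the hard part when the context is poorly understood.
weights = {"impact": 0.5, "capacity_for_change": 0.3, "trajectory": 0.2}

# Illustrative, made-up scores per institution on a 0-10 scale.
institutions = {
    "Institution A": {"impact": 9, "capacity_for_change": 2, "trajectory": 6},
    "Institution B": {"impact": 6, "capacity_for_change": 7, "trajectory": 5},
}

def leverage(scores: dict, weights: dict) -> float:
    """Combine distinguisher scores into one number via a weighted sum."""
    return sum(w * scores[k] for k, w in weights.items())

# Rank institutions by the resulting leverage score.
for name, scores in sorted(institutions.items(),
                           key=lambda kv: leverage(kv[1], weights),
                           reverse=True):
    print(name, round(leverage(scores, weights), 2))
```

The point of the sketch: both the weights and the ‘capacity for change’ scores have to be pinned down numerically even where the underlying information is thin, and small shifts in either can reorder the ranking, so the output inherits all of that guesswork.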
This is all intended to be constructive, even if challenging. I work in these kinds of contexts, so this work going well is meaningful to me, and I want to see the results as close to ground truth and as actionable as possible. Admittedly, I don’t consider the list of top institutions necessarily actionable as things stand, nor do I think it provides particularly new information, so I think the next step could add a lot of value.
How does one tag someone with lots of money in this post?
I phrase this in jest, but mean it in all seriousness: the rhetoric at the moment is ‘be more ambitious’ because we are less cash-constrained than before, but maybe we should add to this ‘be more ambitious, but twice as self-critical as before’.
Yeah, I’d know how to go about making this happen, including figuring out what’s a decent research question for it, but I wouldn’t undertake it myself.
Interesting; I think it’s the other way round: there are tonnes of companies and academic groups who do action-oriented evaluation work, which can include (and, I reckon, in some cases exclusively be) ethnography. But in my experience the hard part is always “what can feasibly be researched?” and “who will listen and learn from the findings?” In the case of the EA community this would translate to something like the following, ranked from hardest to simplest...:
what exactly is the EA community? or what is a representative cross-section / group for exploration?
who actually wants to be surveilled and critiqued; to have their assumptions and blindspots surfaced in a way that may cast aspersions on their actions and what they advocate for? especially if these are ‘central nodes’ or public(ish) figures
how can the person(s) doing ethnography be given sufficient power and access to do their work effectively?
what kind of psychological contracts need to be engendered so that the results of this research don’t fall on deaf ears? and how do we go about that?
what things do we want to learn from this? should it be theory-driven, or related to specific EA subject-matter (e.g. long-termism)? or should the ethnographer be given a wider remit to do this work?
I’d be happy to have a conversation about what this could look like; maybe slightly more useful than a paper, because I suspect there are an unhelpful number of potential misunderstanding potholes in this area, so it’s easier to clarify by chatting things through.
I’m personally concerned that horoscopes weren’t taken into account in devising this scheme, when there are literally thousands of years’ worth of work on this, all going back to classical civilisation and Aristotle or something. Classic EAs overcomplicating things / reinventing the wheel.
One way to approach this would simply be to make a hypothesis (i.e. the bar for grants is being lowered, we’re throwing money at nonsense grants), and then see what evidence you can gather for and against it.
Another way would be to identify a hypothesis for which it’s hard to gather evidence either way. For example, let’s say you’re worried that an EA org is run by a bunch of friends who use their billionaire grant money to pay each other excessive salaries and sponsor Bahama-based “working” vacations. What sort of information would you need in order to support this to the point of being able to motivate action, or falsify it to the point of being able to dissolve your anxiety? If that information isn’t available, then why not? Could it be made available? Identifying a concrete way in which EA could be more transparent about its use of money seems like an excellent, constructive research project.
Overall I like your post and think there’s something to be said for reminding people that they have power; and in this case, the power is to probe at the sources of their anxiety and reveal ground-truth. But there is something unrealistic, I think, about placing the burden on the individual with such anxiety; particularly because answering questions about whether Funder X is lowering / raising the bar too much requires in-depth insider knowledge which—understandably—people working for Funder X might not want to reveal for a number of reasons, such as:
they’re too busy, and just want to get on with grant-making
with distributed responsibility for making grants in an organisation, there will be a distribution of happiness across staff with the process, and airing such tensions in public can be awkward and uncomfortable
they’ve done a lot of the internal auditing / assessment they thought was proportional
they’re seeing this work as inherently experimental / learning-by-doing, and therefore plan more post-hoc reviews than prior process-crafting
I’m also just a bit averse, from experience, to replying to people’s anxieties with “solve it yourself”. I was on a graduate scheme where pretty much every response to an issue raised (often really systemic, challenging issues which people hadn’t been able to solve for years, or which could be close to whistle-blowing issues) was pretty much “well, how can you tackle this?”* The takeaway message then feels something like “I’m a failure if I can’t see the way out of this, even if this is really hard, because this smart, more experienced person has told me it’s on me”. But lots of these systemic issues do not have an easy solution, and taking steps towards action is either emotionally / intellectually hard or frankly could be personally costly.
From experience, this kind of response can be empowering, but it can also inculcate a feeling of desperation when clever, can-do-attitude people (like most EAs) are advised to solve something without support or guidance, especially when it is near intractable. I’m not saying this is what the response of ‘research it yourself’ is (in fact, you very much gave guidance), but I think the response was not sufficiently mindful of the barriers to doing this. Specifically, I think it would be really difficult for a small group of capable people to research this a priori, unless there were other inputs and support, e.g. significant cooperation from the Funder X they’re looking to scrutinise, or advice from other people / orgs who’ve done this work. Sometimes that is available, but it isn’t always, and I’d argue it’s kind of a condition for success / for not getting burned out trying to get answers on the issue that’s been worrying you.
Side-note: I’ve deliberately tried to make this commentary funder-neutral, because I’m not sure how helpful the focus on FTX is. In fairness to them, they may be planning to publish their processes / invite critique (or have done so in private?), or planning to take forward rigorous evaluation of their grants as GiveWell did. So I would rather frame this as an invitation to comment if they haven’t already, because it feels like the assumption throughout this thread is “they ain’t doing zilch about this”, which might not be the case.
*EDIT: In fact, sometimes a more appropriate response would have been “yes, this is a really big challenge you’ve encountered and I’m sorry you feel so hopeless over it—but the feeling reflects the magnitude of the challenge”. I wonder if that’s something relevant to the EA community as well; that aspects of moral uncertainty / uncertainty about whether what we’re doing is impactful or not is just tough, and it’s ok to sit with that feeling.
Yes to links on where these conversations about gaming the system are happening!
Surely this is something that should be shared directly with all funders as well? Are there any (in)formal systems in place for this?
Here’s a podcast I listened to years ago which has influenced how I think about groups and what to be sceptical about; most specifically what we choose not to talk about.
This is why I’m somewhat sceptical about how EA groups would respond to an offer of an ethnography; what do people find uncomfortable to talk about with a stranger observing them, let alone with each other?
I’d add a fifth; one about individuals personally exploring ways in which an EA mindset and / or taking advice / guidance on lifestyle or career from their EA community has led to less positive results in their own lives.
Some that come to mind are:
Denise’s post “My mistakes on the path to impact”: https://forum.effectivealtruism.org/posts/QFa92ZKtGp7sckRTR/my-mistakes-on-the-path-to-impact
And, though I can’t find it, the post about how hard it is to get a job in an EA organisation, and how demoralising that is (among other points)
I do suspect there is a lot of interaction happening between social status, deference, elitism and what I’m starting to feel is more of a mental health epidemic than a mental health deficit within the EA community. I suspect it’s good to talk about these together, as things that go hand in hand.
What do I mean by this interaction?
Things I often hear, which exemplify it:
younger EAs, fresh out of uni, following particular career advice from a person / org and investing a lot of faith in it, probably more so than the person of higher status expects them to. Their path doesn’t go quite right, and they get very burned out and disillusioned
people not coming to EA events anymore because, while they want to talk about the ideas and feel inspired to donate, the imposter syndrome becomes too big when they get asked “what do you do for work?”
talented people not going for jobs / knocking themselves down because “I’m not as smart as X” or “I don’t have ‘elite university’ credentials”, which is a big downer for them and reinforces the whole deference to those with said status, particularly because those people are more likely to be in EA positions of power
this is a particularly pernicious one, because ostensibly smarter / more experienced people do exist, and it’s hard to tell who is smarter / more experienced without looking to signals of it, and we value truth within the community... but these are also not always the most accurate signals, and moreover the response to the signal (i.e. “I feel less smart than that person”) is in fact an input into someone’s ability to perform
Call me a charlatan without my objective data, but speaking to group organisers, this seems way more pervasive than I previously realised… I would welcome more group organisers / large orgs like CEA surveying this again, building on the 2018/19 work… hence why I am using strong language that might seem almost alarmist
EDIT: formatting was a mess
Full disclosure: I’m thinking about writing up the ways in which EA’s focus on impact, and the amount of deference to high-status people, create cultural dynamics which are very negative for some of its members.
I think that we should just bite the bullet here and recognise that the vast majority of smart dedicated people trying very hard to use reason and evidence to do the most good are working on improving the long run future.
It’s a divisive claim, and not backed up with anything. By saying ‘bite the bullet’, it’s as if you’re taunting the reader: “if you don’t recognise this, you’re willfully avoiding the truth / cowardly in the face of it”. Whereas for such a claim I think the onus is on you to back it up.
It’s also quite a harsh value judgement of others, and bad for that reason—see below.
To be clear, there are plenty of people working on LT issues who have some/all of the above problems and I am also not very excited about them or their work.
This implies “some people matter, others do not”. It’s unpleasant and a value judgement, and worth downvoting on that alone. It also assumes such judgements can easily be made of others, i.e. whether they “don’t think about things well”. I think I have pretty good judgement of people and how they think (it’s part of my job to have it), but I wouldn’t make these claims about someone as if they were definitive and then decide whether to engage / disengage with them on that basis.
But it’s even more worth downvoting given how many EAs (in my experience, I’ll caveat) end up disconnecting from the community or beating themselves up because they feel the community makes value judgements about them, their worth, and whether they’re worth talking to. I think it’s bad for all the ‘mental health → productivity → impact’ reasons, but most importantly because I think it matters that we don’t hurt others or create conditions in which they would be hurt. This statement you made seems to me to be very value-judgementy, and would make many people feel threatened and less like expressing their thoughts in case they would be accused of ‘not thinking well’, so I certainly don’t want it going unchallenged, hence downvoting it.
I would be super interested in seeing your list though, I’m sure there are some exceptions.
I think making a list of people doing things, and ranking them against your four criteria above, and sharing that with other people would bring further negative tones to the EA community.
Agreed, and I was going to single out that quote for the same reason.
I think that sentence is really the crux of imposter syndrome. I think it’s also, unfortunately, somewhat uniquely triggered by how EA philosophy is a maximising philosophy, which necessitates comparisons between people or ‘talent’ as well as cause areas.
As well as individual actions, I think it’s good for us to think more about community actions around this, as any intervention targeting the individual without changing the environment rarely makes the dent needed.
Thank you for posting this. I massively laud giving slightly ‘left field’ approaches a go, and I think you’ve raised an important issue about communicating about EA movement and thinking generally.
My reply rests on a few assumptions, which I hope are not too unfair; happy for critique / challenge on them.
The OP’s point about art is worth considering in the context of another question: how can we communicate our thinking (in all its diversity and complexity) accurately and effectively to people outside the community?
Whilst I laud the OP’s ambition, it’s worth thinking about the intermediate steps between logical reasoning (which I observe is our default) and art; using metaphor and analogy to illustrate points. (To note: I believe some animal charities do this already, using the Schindler’s car example to influence actions regarding factory farming.)
Before giving arguments in favour, here’s an example: a video explaining a new type of cancer treatment, CAR-T cell therapy.
Some brief arguments in favour:
1) Metaphors / analogies can create an ‘aha’ moment where the outline of a complex idea is grasped easily and retained by the listener, which they can then layer nuance on top of. People might otherwise not grasp certain complex EA ideas so easily.
2) Whilst explaining a position in logical sequence with great attention to detail is often effective for influencing (and is the main communication approach observed on this forum), I assume that lots of people are not ‘hooked’ by that approach, or find the line of reasoning too abstract to wish to change their mindset or behaviour in response to it.
3) Metaphors / analogies can be more memorable, and therefore transfer from person to person or ‘spread’ better than prosaic reasoning.
4) If you assume that people often have weak attention spans and inaccurate recollection memory, then 1-3 are even stronger arguments in favour of using metaphors more.
The examples the OP chooses (e.g. Dr. Strangelove) show that communicating an idea through art requires the artist’s ambition to be matched with huge skill, so this strikes me as ‘high risk, high gain’ territory. But we can probably make some decent gains by developing some metaphorical or allegorical ways of communicating EA thinking, testing them out and iterating… and THEN seeing whether the people we want to communicate our messages to apprehend them better.