Yes, but a lot of EAs were among those retail investors losing their shirts too, or will likely lose their jobs now because they were funded via FTX. People in our community will only be a subset of those affected, but a reasonable number nonetheless, and they will indeed need lots of support.
The announcement raises the possibility that “a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem.” But it seems to me that if someone successfully argues for this position, they won’t be able to win any of the offered prizes.
Thanks for clarifying this is in fact the case, Nick. I get how setting a benchmark—in this case an essay’s persuasiveness at shifting the probabilities you assign to different AGI / extinction scenarios—makes it easier to judge across the board. But as someone who works in this field, I can’t say I’m excited by the competition or feel it will help advance things.
Basically, I don’t know if this prize is incentivising things which matter most. Here’s why:
The focus is squarely on the likelihood of things going wrong against different timelines; it has nothing to do with the solution space.
But solutions are still needed, even if the likelihood shifts up or down by a large amount, because the impact would be so high.
Take Proposition 1: humanity going extinct or drastically curtailing its future due to loss of control of AGI. I can see how a paper which changes your probabilities from 15% to either 7% or 35% would lead to FTX changing the amount invested in this risk relative to other x-risks—this is good. However, I doubt it would lead to full-on disinvestment, and either way you would still want to fund the best solutions, and be worried if the solutions to hand looked weak.
Moreover, capabilities advancements have rapidly changed priors about when AGI / transformative AI will be developed, and will likely continue to do so iteratively. Once this competition is done, new research could have shifted the dial again. The solution space will likely remain the same.
So long as the gap between capabilities and alignment advancements persists, solutions are more likely to come from the AI governance space than from AI alignment research, at least for now.
The solution space is still pretty sparse in terms of governance of AI. But given the argument in 2), I think this is a big risk and one where further work should be stimulated. There is likely loads of value being left on the table—people sitting on ideas, especially people outside the EA community who have worked in governance, non-proliferation negotiations, etc.
I’d be more assured if this competition encouraged submissions on how “a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem”. The way the prize criteria are written, if I had an argument about taking a new approach to AI alignment (including ‘it’s likely intractable’) I wouldn’t submit to this competition, as I’d think it isn’t for me. But arguments on the achievability of alignment—even its theoretical possibility—are central to what gets funded in this field, and have flow-through effects for AI governance interventions. This feels like a missed opportunity, and a much bigger loss than the governance interventions bit.
Basically, we probably need more solutions on the table regardless of changes in probabilities of AGI being developed sooner / later, and this won’t draw them out.
It would be good to know why this was the focus if you have time, or at least something to consider if you do decide to run another competition off the back of this.
(Sorry if any of this seems a bit rough as feedback; I think it’s better not to be a nodding dog, especially for things of such high consequence.)
Three things:
1. I’m mostly asking for any theories of victory pertaining to causes which support a long-termist vision / end-goal, such as eliminating AI risk.
2. But I’m also interested in a theory of victory / impact for long-termism itself, in which multiple causes interact. For example, if
long-termism goal = reduce all x-risk and develop technology to end suffering, enable flourishing + colonise the stars
then the components of a theory of victory / impact could be...:
reduce x-risk pertaining to AI, bio, and others
research / understanding around enabling flourishing / reducing suffering
stimulate innovation
think through governance systems to ensure the technologies / research above are used for good, not evil
3. Definitely not ‘advocating for longtermism’ as an end in itself, but I can imagine that advocacy could be part of a wider theory of victory. For example, one could postulate that reducing x-risk would require mobilising considerable private / public sector resources, requiring winning hearts and minds around both how scarily probable x-risk is and the bigger goal of giving our descendants beautiful futures / leaving a legacy.
Agree there’s something to your 1-3 counterarguments, but I find the fourth less convincing, maybe more because of semantics than actual substantive disagreement. Why? A difference in net effect on x-risk reduction of 0.01% vs. 0.001% is pretty massive. These differences especially matter when the expected value is massive, because sometimes the same expected value holds across multiple areas. For example, preventing asteroid-related x-risk vs. AI vs. bio vs. runaway climate change: (by definition) all the same EV (if you take the arguments at face value). But the plausibility of each approach, and of individual interventions within each, would be pretty high variance.
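A minimal sketch of the arithmetic I have in mind, with made-up symbols rather than anything from your post: writing $V$ for the value at stake and $\Delta p$ for the absolute reduction in catastrophe probability an intervention buys, its expected benefit is
$$\mathrm{EV} = \Delta p \cdot V, \qquad 0.01\% \cdot V = 10^{-4}\,V \quad \text{vs.} \quad 0.001\% \cdot V = 10^{-5}\,V,$$
i.e. a tenfold difference, which is enormous precisely because $V$ is taken to be astronomically large.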
The more I reread your post, the more I feel our differences might be more a matter of nuance, but I think your contrarian / playing-to-an-audience-of-cynics tone (which did amuse me) makes them seem starker?
Before I grace you with more sappy reasons why you’re wrong, and sign you up to my life-coaching platform[1]:
I am not sure whether you’re saying “treating people better / worse depending on their success is good”, particularly in the paragraphs about success and worth, or whether you think that’s just an immutable fact of life (which I disagree with). What’s your take?
How do you see “having given my honest best shot” as distinct from my point about the value of trying your hardest? I’m suspicious we’d find they’re mostly the same thing if we looked into it...
Do you think that mastery over skills (as a tool to achieve goals) is incompatible with having an intrinsic sense of self-worth? I would argue that they’re pretty compatible. Moreover, for people feeling terrible and sh*t-talking themselves non-stop, which impairs their thinking, I’m confident that feeling like their worth doesn’t depend on successful mastery of skills is itself a pretty good foundation for mastery of skills.
Honestly I’m quite surprised by you saying you haven’t found ‘essentialist’ self-worth, or what I’d call intrinsic self-worth, very valuable. I’d be down to understand this much better. For my part...:
I abandoned success-oriented self-worth because of a) the hedonic treadmill, b) the practical benefits: believing you are good enough is a much better foundation for doing well in life[2], I’ve found, and c) reading David Foster Wallace[3].
I don’t mind if people think I’m better / worse at something and ‘measure me’ in that way; I don’t mind if it presents fewer opportunities. But I take issue when anyone...:
uses that measurement to update on someone’s value as a person, and treats them differently because of it, or
over-updates on someone’s ability, the worst of which looks like deference or writing someone off.
I agree with this in principle… But there’s a delicious irony in the idea of EA leadership (apols for singling you out in this way, Ben) now realising “yes this is a risk; we should try and convince people to do the opposite of it”, and not realising the risks inherent in that.
The fundamental issue is the way the community—mostly full of young people—often looks to / over-relies on EA leadership not just for ideas of causes to dedicate themselves to, but also for ideas about how to live their lives. This isn’t necessarily the EA leadership’s fault, but it’s not as if EA has never made claims about how people should live their lives before, from donating 10% of their income to productivity ‘hacks’ which can become an industry in themselves.
I think there are many ways to put the wisdom of Helen’s post into action, and one of them might be for more of the EA leadership to be more open about what they don’t know, both in terms of epistemics and the whole ‘how to live your life’ stuff. I’m not claiming EA leaders act like some kind of gurus—far from it in fact—but I think some community members often regard them as such. One thing I think would be great is to hear more EA leaders talking about EA ideas in a tone like “honestly, I don’t know—I’m just on this journey trying to figure things out myself; here’s the direction I’m trying to move in”.
I say this for two reasons: 1) because, knowing lots of people in leadership positions, I know this is how a lot of them feel, both epistemically and in terms of how to live your life as an EA, but it’s not said in public; and 2) I think knowing this has given me a lot more healthy psychological distance from EA, because it lowers the likelihood of putting leaders on a pedestal / losing my desire to think independently.
[“We’re just kids feeling our way in the dark of a cold, uncaring universe trying to inch carefully towards ending all suffering and maximising pleasure of all beings everywhere”. New tag-line?]
I didn’t down/up-vote this comment but I feel the down-votes without explanation and critical engagement are a bit harsh and unfair, to be honest. So I’m going to try and give some feedback (though a bit rapidly, and maybe too rapidly to be helpful...)
It feels like just a statement of fact to say that IQ tests have a sordid history, and that concepts of intelligence have historically been weaponised against marginalised groups (including women, might I add to your list ;) ). That is fair to say.
But reading this post, it feels less interested in engaging with the OP’s post, let alone with Linch’s response, and more like there is something you wanted to say about intelligence and racism and you looked for a place to say it.
I don’t feel like relating the racist history of IQ tests helps the OP think about their role in EA; it doesn’t really engage with what they were saying—that they feel they are average, don’t mind that, and just want to be empowered to do good.
I don’t feel it meaningfully engages with Linch’s central point: that the community has lots of people with attributes X in it, and is set up for people with attributes X, but maybe there are some ways the community is not optimised for other people.
I think your post is not very balanced on intelligence.
general intelligence is, as far as I understand, a well-established construct in the psychology of individual differences
Though this does show how many people with outlying abilities in e.g. maths and the sciences will—as they put it themselves—not be as strong on other intelligences, such as social intelligence. And in fairness to many EAs who are like this, they put their hands up about their shortcomings in these domains!
Of course there’s a bio(psycho)social interaction between biological inheritance and environment when it comes to intelligence. The OP’s and Linch’s points still stand with that in mind.
The correlation between top university attendance and opportunity. Notably, the strongest predictor of whether you go to Harvard is whether your parents went to Harvard; but disentangling that from a) ability and b) getting coached / moulded to show your ability in the ways you need to for Harvard admissions interviews is pretty hard. Maybe a good way of thinking of it is something like: for every person who gets into elite university X...:
there are 100s of more talented people not given the opportunity or moulding to succeed at this, who otherwise would trounce them, but
there are 10000s more who, no matter how much opportunity or moulding they were given, would not succeed
Anyway, in EA we have a problem when it comes to identifying ourselves as a group, one that could be easily resolved by investing effort in how our dynamics work, the ways in which we exclude other people (I’m not just referring to Olivia), and how that plays out within the community, at the level of biases and at the level of the effects that all this has on the work we do.
If I’m understanding you correctly, you’re saying “we have some group dynamics problems; we involve some types of people less, and listen to some voices less”. Is that correct?
I agree—I think almost everyone would identify different weird dynamics within EA they don’t love, and ways they think the community could be more inclusive; or some might find the lack of inclusiveness unpalatable but be willing to bite that bullet on trade-offs. Some good work has been done recently on starting up EA in non-Anglophone, non-Western countries, including putting forward the benefits of more local interventions; but a lot more could be done.
A new post on voices we should be listening to more, and the EA assumptions which prevent this from happening, would be welcome!
Thanks for your open and thoughtful response.
Just to emphasise, I would bet that ~all participants would get a lot less value from one / a few doom circle sessions than they would from:
cultivating skills to ask for / receive effective feedback (with all the elements I’ve written about above) which they can use across time—including after leaving a workshop, and / or
just a pervasive thread throughout the workshop helping people both develop these skills and initiate some relationships there where they can keep practising this feedback seeking / giving in future.
I did loads of this kind of stuff on (granted, somewhat poorly executed) graduate schemes and it proved persistently valuable, and helped me find ‘buddies’ I could be this open, reflective and insight-seeking with.
I agree there are other types of feedback that are probably better for most people in most cases, and that Doom Circles are just one format that is not right for lots of people. I meant to emphasize that in the post but I see that might not have come through.
I feel like I would maybe re-edit this post to emphasise “this is an option, but not necessarily the lead option”, because its original positioning makes it feel more like a canonical approach?
I’m glad to hear you feel more comfortable setting boundaries now. I think it is a good flag that some people might not be in a place to do that, so we should be mindful of social / status dynamics and try our best to make this truly opt-in.
Sadly I think I would have been a fairly good example of the many younger EAs still forging their sense of self and looking for belonging in a community; in particular the kinds of people who might feel they need this kind of feedback. So if these are going to be run again, I’d think reflecting on this when setting the terms / design would be useful.
The original CFAR alumni workshop included a warning:
“be warned that the nature of this workshop means we may be pushing on folks harder than we do at most other CFAR events, so please only sign up if that sounds like something that a) you want, and b) will be good for you.”
I’m struggling to understand the motivations behind this.
Reading between the lines, did the organisers tacitly know this was somewhat experimental, and that it could perhaps lead to great breakthroughs and positive emotions as well as the opposite, but that they could only figure it out by trying?
The reason this feels so weird to me—especially the ‘pushing on folks harder’—is because I know there are many ways to enable difficult things to be said and heard without people feeling ‘pushed on’; in fact, in ways that feel light! Or at least you can go into it knowing it can go either way, but with the intention of it not feeling heavy / difficult. But it sounds like heaviness / ‘pushing on people’ is explicitly part of the recipe? That feels unnecessary to me...
Grateful for illumination, whoever it comes from!
I’m struggling to understand why anyone would choose one big ritual like ‘Doom Circles’ instead of just purposefully inculcating a culture of openness to giving / receiving critique that is supportive and can help others. And I have a lot of concerns about unintended negative consequences of this approach.
Overall, this runs counter to my experience of what good professional feedback relationships look like:
I suspect the formality will make it feel weirder for people who aren’t used to offering feedback / insights to start doing it in a more natural, everyday way, because they’ve only experienced it in a very bounded way which is likely highly emotionally charged. They might get the impression that feedback will always feel emotional, whereas if you approach it well it doesn’t have to feel negatively emotional even when some of the content is less positive.
there should be high enough trust and mutual regard for my colleague to say to me “you know what? You do have a bit of a tendency to rush planning / be a bit rude to key stakeholders and that hasn’t worked so well in the past, so maybe factor that into the upcoming projects”
low-context feedback is often not helpful; this is because someone’s strengths are often what could ‘doom’ them if over-relied upon, and different circumstances require different approaches. This sounds like feedback given with very little context—especially if limited to 90 seconds and the receiver cannot give more context to help the giver.
feedback is ultimately just an opinion; you should be able to take it and also discard it. It’s often based on just one person’s narrow vantage point of you, so if you get lots of it, it will necessarily be contradictory. So if you acted on it all, you’d be screwed. This sounds like a fetishisation / glorification of the feedback given, which would then make it harder for the receiver of doom to assess each bit on its merits, synthesise it, and integrate it.
A younger version of myself with less self-esteem would have participated and would have deferred excessively to others’ views, even if I felt they had blind spots. I think I would have integrated all the things I heard, even things I thought were likely not true on balance, and these would have rebounded in a chorus of negative self-talk. But I think part of the attraction of Doom Circles for me would have been:
all these smart people do it; there must be something to it
feeling like I must not be ‘truly committed to self-improvement’ if I don’t want to participate
and, in a small part, the rush / pain of hearing ‘the truth’, a form of psychic self-harm like reading a diary you know you shouldn’t
Now, I think I would just refuse to do this and instead put forward my counter-proposal, which would look more like sharing reflections on each other’s traits / skills, what could enable us and hold us back, and two-way dialogue about this to try and figure out what is / isn’t accurate. And doing so regularly—a build-up of negativity is always damaging when it eventually comes out, but also, why hold back on the positivity when it’s a great fuel for most people?
“70,000 hours back”: a monthly podcast interviewing someone who ‘left EA’ about what they think are some of EA’s most pressing problems, and what somebody else should do about them.
Is it all a bit too convenient?
There’s been lots of discussion about EA having so much money, particularly long-termist EA, and worries that this means we are losing the ‘altruist’ side of EA as people get more comfortable and work on more speculative cause areas. This post isn’t about what’s right / wrong or what “we should do”; it’s about reconciling the inner tension this creates.
Many of us now have very well-paid jobs, in nice offices with perks like table tennis. And many people are working on things which often yield no benefit to humans and animals in the near term but might in future; or indeed the first-order effect of the job is growing the EA community, and the second- and third-order effects are speculative benefits to humans and animals, or sentient beings, in the future. These jobs are often high status.
Though not in an EA org, I feel my job fits this bill as well. I get a bit pissed with myself sometimes, feeling I’ve sold out, because it just seems a bit too convenient that the most important thing I could do gets me high-profile speaking events, a nice salary, an impressive title, access to important people, etc. And the potential impact from my job, which is in AI regulation, is still largely speculative.
I feel long-termish, in that I aim to make the largest and most sustainable change so that all sentient minds can be blissful, not suffer, and enjoy endless pain au raisin. But that doesn’t mean ignoring humans and animals today. To blatantly misquote Peter Singer: the opportunity cost of not saving a drowning child today is still real, even if that means showing up 5 minutes late to work every day and compromising your productivity, which you believe is so important because you have a 1/10^7* chance of saving 10^700** children.
For me to believe I’m living my values, I think I need to still try to make an impact today. I try to donate a good chunk to global health and wellbeing initiatives, lean harder into animal rights, and (am now starting to) support people in my very deprived local community in London.
So two questions:
Do other long-termish leaning people feel this same tension?
And if so, how do you reconcile it within yourself?
*completely glib choice of numbers
**exponentially glibber
Your question seems to be both about content and interpersonal relationships / dynamics. I think it’s very helpful to split out the differences between the groups along those lines.
In terms of substantive content and focus, I think the three other responders outline the differences very well, particularly on attitudes towards AGI timelines and the types of models each group is concerned about.
In terms of the interpersonal dynamics, my personal take is that we’re seeing a clash between the left / social justice and EA / long-termism play out more strongly in this content area than in most others, though to date I haven’t seen any animus from the EA / long-termist side. In terms of explaining the clash, I guess it depends how detailed you want to get.
You could be minimalist and sum it up as: one or both sides hold stereotypical threat models of the other, and rather than investigating those models, they attack based on them.
Or you could expand and explain why EA / long-termism evokes such a strong threat response in people from the left, especially marginalised communities and individuals who have been punished for putting forward ethical views—like Gebru herself.
I think the latter is important but requires lots of careful reflection and openness to their worldviews, which would need a much longer piece. (And if anyone is interested in collaborating on this, I’d be delighted!)
To add to the other papers coming from the “AI safety / AGI” cluster calling for a synthesis in these views...
I think taking this forward would be awesome, and I’m potentially interested to contribute. So consider this comment an earmarking for me to come speak with you and / or Rory about this at a later date :)
Thanks for writing this, completely agree.
I’d love it if the EA community was able to have increasingly sophisticated, evidence-backed conversations about e.g. mega-projects vs. prospecting for and / or investing more in low-hanging fruit.
It feels like it would help ground a lot more debates and decision-making within the community, especially around prioritising projects which might plausibly benefit the long-term future compared with projects we have stronger reasons to think will benefit people / animals today (albeit not an almost infinitely large number of people / animals).
But also, you know, an increasingly better understanding of what seems to work is valuable in and of itself!
Cross-post to Leftism Virtue Café's commentary on this: https://forum.effectivealtruism.org/posts/bsTXHJFu3Srurbg7K/leftism-virtue-cafe-s-shortform?commentId=Q8armqnvxhAmFcrAh
Equally, there’s an argument for thanking and replying to critical pieces made against the EA community which honestly engage with the subject matter. This post (now old) making criticisms of long-termism is a good example: https://medium.com/curious/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982
I’m sure / really hope Will’s new book does engage with the points made here. And if so, it provides the rebuttal to those who come across hit-pieces and take them at face value, or those who promulgate hit-pieces because of their own ideological drives.
Thanks for this thoughtful challenge, and in particular for flagging what future provocations could look like so we can prepare ourselves and let our more reflective selves come to the fore, rather than our reactive child selves.
In fact, I think I’ll reflect on this list for a long time to ensure I continue not to respond on Twitter!
This seems like an obviously good thing to do, but I would challenge us to think about how to take it further.
One further thought—is there something the EA community can do on mental health / helping those affected that goes wider than within the EA community?
A lot of people will have lost a great deal of savings, potentially worse than that. Supporting them matters no less than supporting EAs, beyond the fact that it’s easier to support other EAs because of established networks. Ironically, the argument for leaning into supporting just other EAs is the type of localism that EA rallies against, with its global wellbeing / all-suffering-matters approach.
If we are going to set up support within the community, I would advise starting small but thinking big—think of how it can be scaled up more widely to others in response to this widescale financial ruin.