Head of Video at 80,000 Hours
(Opinions here are my own by default, though I will sometimes speak in a professional capacity.)
Personal website: www.chanamessinger.com
To add to folks disagreeing with the “size of numbers”, from my perspective:
Most respondents to Rethink’s survey hadn’t encountered EA. Of those who had (233), only 18 (1.1% of total respondents) referred to FTX/SBF explicitly or obliquely when asked what they think effective altruism means or where and when they first heard of it.
I think that number is importantly 7.7% of all the people who had heard of EA, which seems not that small to me (though way smaller than my immersed-all-the-time-in-meta/FTX-stuff brain might have generated on its own when that was where my head was at).
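A minimal arithmetic sketch of how those figures fit together (the survey’s total respondent count isn’t stated above, so the last line back-calculates an implied figure rather than quoting the survey):

```python
# Sketch of the arithmetic behind the percentages above.
heard_of_ea = 233       # respondents who had encountered EA
mentioned_ftx = 18      # of those, referred to FTX/SBF explicitly or obliquely

share_of_aware = mentioned_ftx / heard_of_ea    # share among those who had heard of EA
share_of_total = 0.011                          # the "1.1% of total respondents" figure
implied_total = mentioned_ftx / share_of_total  # implied total sample size (back-calculated, not quoted)

print(f"{share_of_aware:.1%} of those who had heard of EA mentioned FTX/SBF")  # ~7.7%
print(f"Implied total respondents: about {implied_total:.0f}")                 # ~1,636
```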
For collecting thoughts on the concept of “epistemic hazards”—contexts in which you should expect your epistemics to be worse. Not fleshed out yet. Interested in whether this has already been written about; I assume so, maybe in a different framing.
From Habryka: “Confidentiality and obscurity feel like they worsen the relevant dynamics a lot, since they prevent other people from sanity-checking your takes (though this is also much more broadly applicable). For example, being involved in crimes makes it much harder to get outside feedback on your decisions, since telling people what decisions you are facing now exposes you to the risk of them outing you. Or working on dangerous technologies that you can’t tell anyone about makes it harder to get feedback on whether you are making the right tradeoffs (since doing so would usually involve leaking some of the details behind the dangerous technology). ”
I like the point of waves within cause areas! Though I suspect there would be a lot of disagreement—e.g. people who kept up with the x-risk approach even as WWOTF was getting a lot of attention.
I like the distinction between overreacting and underreacting as being “in the world” vs. “memes”—another way of saying this is something like “object level reality” vs. “social reality”.
If the longtermism wave is real, then that was pretty much about social reality, at least within EA, and it changed how money was spent and the things people said (as I understand it; I wasn’t really socially involved at the time).
So to the extent that this is about “what’s happening to EA”, I think there’s clearly a third wave here, where people are running and getting funded to run AI-specific groups, and people are doing policy and advocacy in a way I’ve never seen before.
If this ends up being a flash in the pan, then maybe the way to see this is something like a “trend” or “fad”, like maybe 2022-spending was.
Which maybe brings me to something like: we might want these waves to consistently be about “what’s happening in EA” vs. “what’s happening in the world”, and they’re currently not.
I’d be interested in more thoughts, if you have them, on evidence or predictions one could have made ahead of time that would distinguish this model from others (like maybe a lot of what’s going on is youth and will evaporate over time; youth still has to be mediated by things like what you describe, but as an example).
Also, my understanding is that SBF wasn’t very insecure? Does that affect your model or is the point that the leader / norm setter doesn’t have to be?
Yeah, I’m confused about this. Seems like some amount of “collapsing social uncertainty” is very good for healthy community dynamics, and too much (like having a live ranking of where you stand) would be wildly bad. I don’t think I currently have a precise way of cutting these things. My current best guess is that the more you push to make the work descriptive, the better, and the more it becomes normative and “shape up!”-oriented, the worse, but it’s hard to know exactly what ratio of descriptive:normative you’re accomplishing via any given attempt at transparency or common knowledge creation.
I strongly resonate with this; I think this dynamic also selects for people who are open-minded in a particular way (which I broadly think is great!), so you’re going to get more of it than usual.
Thanks for writing this! I’m not sure how I’d feel if orgs I worked for went more in this direction, but I did find myself nodding along to a bunch of parts (though not all) of what you wrote.
One thing I’m curious about is whether you have thoughts on avoiding a “nitpick” culture, where every perk or line item becomes a big discussion among the leadership of an org, or the org broadly—that seems to me like a big downside of moving in this direction.
Just because, things I especially liked:
1. “We should try to be especially virtuous whenever we find ourselves setting a moral example for others”
(Though sometimes/often I think the excellent thing to model for others is “yes, I am really going to do this weird / not-altruistic-looking thing because it is the right thing to do.”)
2. Bringing in services to make them convenient but then asking people to pay sounds like a bit of a boondoggle but also really clever—I don’t think I’d encountered this kind of compromise before. I’d be interested in more of this form!
I don’t know if this is right, but I take Lincoln to be (a bit implicitly, but I see it throughout the post) taking the default cultural norm as a pretty strong starting point, and aiming to vary from it when you have a good reason (I imagine because variations from what’s normal are what send the most salient messages), rather than thinking about what a perk is from first principles, which explains the dishwashing and toilet cleaning.
Reminds me of C.S. Lewis’s view on modesty:
The Christian rule of chastity must not be confused with the social rule of ‘modesty’ (in one sense of that word); i.e. propriety, or decency. The social rule of propriety lays down how much of the human body should be displayed and what subjects can be referred to, and in what words, according to the customs of a given social circle. Thus, while the rule of chastity is the same for all Christians at all times, the rule of propriety changes. A girl in the Pacific islands wearing hardly any clothes and a Victorian lady completely covered in clothes might both be equally ‘modest’, proper, or decent, according to the standards of their own societies: and both, for all we could tell by their dress, might be equally chaste (or equally unchaste). Some of the language which chaste women used in Shakespeare’s time would have been used in the nineteenth century only by a woman completely abandoned.
I wonder if another way of saying what Lincoln is trying to get at is something like “it’s about the work, not you”: a message of “happy to invest in your work” can have all the same outward features as “happy to invest in you” but with different effects. Not sure he’d endorse this, though.
I take your disagreement with Lincoln to be something of the form: Lincoln wants and gestures at a certain vibe change that is underspecified, and Halstead is like “is that even a consistent thing to want / does it make sense in reality?”, which feels like a really common conversational dynamic and can be frustrating for everyone involved.
Thanks for this! I feel like I have a bunch of thoughts swirling in my head as a result of reading this :)
Again a quick take: I’d be interested in more discussion of what a good ratio of funders to non-funders on a board is in different situations (conditional on there being any board members).
I haven’t thought hard about this yet, so this is just a quick take: I’m broadly enthused but don’t feel convinced that experts have actual reason to get engaged. Can you flesh that out more?
A dynamic I keep seeing is that it feels hard to whistleblow or report concerns or make a bid for more EA attention on things that “everyone knows”, because it feels like there’s no one to tell who doesn’t already know. It’s easy to think that surely this is priced in to everyone’s decision making. Some reasons to do it anyway:
You might be wrong about what “everyone” knows—maybe everyone in your social circle does, but not outside. I see this a lot in Bay gossip vs. London gossip—what “everyone knows” is very different in those two places.
You might be wrong about what “everyone knows”—sometimes people use a vague shorthand, like “the FTX stuff”, and it could mean a million different things; either a double illusion of transparency (you both think you know what the other person is talking about but don’t) or the pressure to nod along in social situations means that it seems like you’re all talking about the same thing but you’re actually not.
Just because people know doesn’t mean it’s the right level of salient—people forget, are busy with other things, and so on.
Bystander effect: People might all be looking around assuming someone else has the concern covered because surely everyone knows and is taking the right amount of action on it.
In short, if you’re acting based on the belief that there’s a thing “everyone knows”, check that that’s true.
Relatedly: Everybody Knows, by Zvi Mowshowitz
[Caveat: There’s an important balance to strike here between the value of public conversation about concerns and the energy that gets put into those public community conversations. There are reasons to take action on the above non-publicly, and not every concern will make it above people’s bar for spending the time and effort to get more engagement with it. Just wanted to point to some lenses that might get missed.]
Fwiw, I think we have different perspectives here—outside of epistemics, everything on that list is there precisely because we think it’s a potential source of some of the biggest risks. It’s not always clear where risks are going to come from, so we look at a wide range of things, but we are in fact trying to be on the lookout for those big risks. Thanks for flagging that it doesn’t seem like we are; I’m not sure if this comes from miscommunication or a disagreement about where big risks come from.
Maybe another place of discrepancy is that we primarily think of ourselves as looking for where high-impact gaps are, places where someone should be doing something but no one is, and risks are a subset of that but not the entirety.
(To be clear I also agree with Julia that it’s very plausible EA should have more capacity on this)
Wow, excited about this!!
I think as an overall gloss, it’s absolutely true that we have fewer levers in the AI Safety space. There are two sets of reasons why I think it’s worth considering anyway:
Impact—in a basic kind of “high importance can balance out lower tractability” way, we don’t want to only look where the streetlight is, and it’s possible that the AI Safety space will seem to us sufficiently high impact to aim some of our energy there.
Don’t want to underestimate the levers—we have fewer explicit moves to make in the broader AI Safety space (e.g. disallowing people from events), but there is a high overlap with EA, and my guess is that some set of people in a new space will appreciate people who have thought about community management a lot giving thoughts / advice / sharing models and so on.
But both of these could be insufficient for a decision to put more of our effort there, and it remains to be seen.
I’m a little worried that use of the word pivot was a mistake on my part in that it maybe implies more of a change than I expect; if so, apologies.
I think this is best understood as a combination of:
Maybe this is really important, especially right now [which I guess is indeed a subset of cause prioritization]
Maybe there are unusually high leverage things to do in that space right now
Maybe the counterfactual is worse—it’s a space with a lot of new energy, new organizations, etc, and so a lot more opportunity for re-making old mistakes, not a lot of institutional knowledge, and so on.
I think this basically agrees with your point (1), but as a hypothesis, not a conclusion
In addition, there is an unusual amount of money and power flowing around this space right now, and so it might warrant extra attention
This is a small effect, but we’ve received some requests from within this space to pay more attention to it, which seems like some (small) evidence
In general I use and like this concept quite a lot, but someone else advocating it does give me the chance to float my feelings in the other direction:
I think sometimes when I want to go to missing moods as a concept to explain to my interlocutor what I think is going wrong in our conversation, I end up feeling like I’m saying “I am demanding that you be sad because I would feel better if you were”, which I want to be careful of imposing on people. It sometimes also feels like I’m assuming we have the same values, in a way I’d want to do more upfront work to establish before calling on it.
More generally, I think it’s good to notice the costs of the place you’re taking on some tradeoff spectrum, but also good to feel good about doing the right thing, making the right call, balancing things correctly etc.
Ratio of descriptive (“this is how things are”) to normative (“shape up”).