Mostly agree. I’ve been involved in local orgs a bit more than most people in EA; I grew up in a house where my parents were often serving terms on different synagogue and school boards, and my wife has continued her family’s similar tradition. So I strongly agree that passionate alignment changes things, but even that rarely leads to boards setting the strategic direction.
I think a large part of this is that strategy is hard, as you note, and it’s very high context for orgs. I still wonder who is best placed to track priority drift, and how much we want boards to own the strategic direction; it would be easy, but I think very unhelpful, for the board to basically do what Holden suggests and only be in charge of the CEO, because a lot of the value from a board is, or can be, its broader strategic views and different knowledge. And in local orgs that happens much more: the leaders need to convince board members to do things or make changes, rather than acting on their own and getting vague approval from the board. But, as a last point, it seems hard to do much of this for small orgs. Overhead from the board is costly, and I don’t know how much effort we want to expect.
My board isn’t the reason for the lack of clarity, and it certainly is my job to set the direction. I don’t think any of them are particularly dissatisfied with the way I’ve set the org’s agenda. But my conclusion is that I disagree somewhat with Holden’s post, which partly guided me over the past couple of years: the right division of roles is more situational, and there are additional useful roles for the board.
Who sets my org’s agenda?
I’d find a breakdown informative, since the distribution, both across different frontier firms and between safety and non-safety roles, seems really critical, at least for my view of the net impacts of a program. (Of course, none of this tells us the counterfactual impact; the program might be moving people on net in either direction.)
The biggest unanswered, but I think critical, question:
What proportion are working for frontier labs (not “for-profit” generally, but the labs creating the risks), in which roles (how many are in capabilities work now?), and at which labs?
ALTER Israel Semiannual Update—End of 2025
I don’t think it’s that much of a sacrifice.
I don’t understand how this is an argument applicable to anyone other than yourself; other people clearly feel differently.
I also think that for many, the only difference in practice would be slightly lower savings for retirement.
If that is something they care or worry about, it’s a difference—adding the word “only” doesn’t change that!
I’ve run very successful group brainstorming sessions with experts just to require them to actually think about a topic enough to realize what seems obvious to me. Getting people to talk through what the next decade of AI progress will look like didn’t make them experts, or even get them to the basic level I could have presented in a 15-minute talk, but it gave me a chance to push them beyond their cached thoughts, without them rejecting views they see as extreme, since they are the ones thinking them!
But EA should scale, because its ideas are good, and this leaves it in a much more tricky situation.
I’ll just note that when the original conversation started, I addressed this in a few parts.
To summarize, I think that yes, EA should be enormous, but it should not be a global community, and it needs to grapple with how the current community works, and figure out how to avoid ideological conformity.
There’s also an important question about which EA causes are differentially more or less likely to be funded. If you think Pause AI is good, Anthropic’s IPO probably won’t help. If you think mechanistic interpretability is valuable, it might help to fund more training in relevant areas, but you should expect an influx of funding soon. And if you think animal welfare is important, funding new high-risk startups that can take advantage of the wave of funding in a year may be an especially promising bet.
I still don’t think that works out, given the energy cost of transmission and the distances involved.
This could either be a new resource or an extension of an existing one. I expect that improving an existing resource would be faster and require lower maintenance.
My suggestion would be to improve the AI Governance section of aisafety.info.
cc: @melissasamworth / @Søren Elverlin / @plex
...but interstellar communication is incredibly unlikely to succeed: the targets are far away, we don’t know in which direction they are, and the required energy is enormous.
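A rough back-of-the-envelope sketch of the energy point, using the inverse-square law with illustrative numbers I’m assuming (a 1 GW omnidirectional beacon, 100 light-years, a 100 m dish), not figures taken from the comment:

```python
import math

LIGHT_YEAR_M = 9.461e15  # metres in one light-year

def received_power(tx_power_w, distance_ly, collector_area_m2):
    """Power collected from an isotropic (undirected) transmitter,
    via the inverse-square law: flux = P / (4 * pi * d^2)."""
    d_m = distance_ly * LIGHT_YEAR_M
    flux = tx_power_w / (4 * math.pi * d_m ** 2)  # W per square metre
    return flux * collector_area_m2

# Illustrative case: 1 GW omnidirectional beacon, 100 light-years away,
# received by a 100 m diameter dish (area = pi * r^2 with r = 50 m).
p_received = received_power(1e9, 100, math.pi * 50 ** 2)
print(f"received power ~ {p_received:.1e} W")  # roughly 7e-25 W
```

For scale, that is under one photon per second at the 1.4 GHz hydrogen line, before considering noise. A tightly directed beam helps enormously, but that requires knowing which direction to point.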
To possibly strengthen the argument made, I’ll point out that moving already-effective money to a more effective cause or donation has a smaller counterfactual impact, because those donors are already looking at the question and could easily come to the same conclusion on their own. Moving money in a “Normie” foundation, on the other hand, can have the knock-on effect of getting them to think about impact at all, and change their trajectory.
I meant that I don’t think it’s obvious that most people in EA working on this would agree.
I do think it’s obvious that most people overall would agree, though most would either disagree or be unsure that a simulation matters at all. It’s also very unclear how to count person-experiences in general, as Johnston’s Personite paper argues: https://www.jstor.org/stable/26631215. I’ll also point to the general double-counting problem: https://link.springer.com/article/10.1007/s11098-020-01428-9 and suggest that it could apply.
I need to write a far longer response to that paper, but I’ll briefly respond (and flag to @Christian Tarsney) that my biggest crux is that I think they picked weak objections to causal domain restriction, and that far better objections apply. Secondarily, for axiological weights, the response about egalitarian views leading to rejection of different axiological weights seems to beg the question, and the next part ignores the fact that any acceptable response to causal domain restriction also addresses the issue of large background populations.
I recently discussed this on Twitter with @Jessica_Taylor, and think that there’s a weird claim involved that collapses into either believing that distance changes moral importance, or that thicker wires in a computer increase its moral weight. (Similar to the cutting-dominoes-in-half example in that post, or the thicker pencil, but less contrived.) Alternatively, it confuses the question by claiming that identical beings at time t_0 are morally different because they differ at time t_n, which is a completely different claim!
I think the many-worlds interpretation confuses this by making it about causally separated beings which are, in my view, either only a single being, or different because they will diverge. And yes, different beings are obviously counted more than once, but that’s explicitly ignoring the question. (As a reductio: if we ask “is 1 the same as 1?” the answer is yes, they are identical platonic numbers, but if we instead ask “is 1 the same as 1 plus 1?” the answer is no, they are different because the second is… different, by assumption!)
I don’t think that’s at all obvious, though it could be true.
That’s a fair point, and I agree that it leads to a very different universe.
At that point, however (assuming we embrace moral realism and an absolute moral value of some non-subjective definition of qualia, which seems incoherent), it also seems to lead to a functionally unsolvable coordination problem for maximization across galaxies.
But that’s not the claim he makes!
To quote: