ACE isn’t fucking around.
SBF: “I never thought that what I was doing was illegal”
Zvi’s take on this is good: https://thezvi.wordpress.com/2023/10/24/book-review-going-infinite
There’s a new paper on jhana (in Cerebral Cortex) out of Matthew Sacchet’s Harvard Center: Fu Zun Yang et al. 2023
Got it, thanks. I’m interested in the cattle analysis because cows yield ~4x more meat than pigs per slaughter, and could perform even better than that when factoring in cognition.
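To make the per-slaughter logic concrete, here's a minimal sketch. The carcass weights are my own rough assumptions chosen to match the ~4x ratio above, not figures from the analysis:

```python
# Rough illustration of why per-slaughter yield matters.
# Carcass weights below are assumptions picked to match the ~4x ratio,
# not numbers from the original analysis.
meat_per_cow_kg = 280.0  # assumed average meat yield per cow slaughtered
meat_per_pig_kg = 70.0   # assumed average meat yield per pig slaughtered

demand_kg = 1_000.0  # some fixed quantity of meat demanded

cow_slaughters = demand_kg / meat_per_cow_kg  # ~3.6 slaughters
pig_slaughters = demand_kg / meat_per_pig_kg  # ~14.3 slaughters

print(f"Slaughters per {demand_kg:.0f} kg: cows {cow_slaughters:.1f}, pigs {pig_slaughters:.1f}")
```

Weighting each slaughter by some cognition/moral-weight factor for cows vs. pigs would shift the comparison further in cows' favor, which is the "could perform even better" point.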
This is beautiful, thank you for creating it.
Did you look at cows as part of the analysis?
Apart from pivoting to “x-risk”, what else could we do?
Cultivate approaches to heal psychological wounds and get people above baseline on their ability to coordinate and see clearly.

CFAR was in the right direction goal-wise (though its approach was obviously lacking). EA needs more efforts in that direction.
Some thoughts on “AI could defeat all of us combined”
When is the independent investigation expected to complete?
I wrote a thread with some reactions to this.
(Overall I agree with Tyler’s outlook and many aspects of his story resonate with my own.)
(b) intriguing IMO and I want to hear more -- #10, #11, #16, #19
10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang
See discussion in this thread.

11. EA correctly identifies improving institutional decision-making as important but hasn’t yet grappled with the radical political implications of doing that
This one feels like it requires substantial unpacking; I’ll probably expand on it further at some point.

Essentially the existing power structure is composed of organizations (mostly large bureaucracies), and all of these organizations have (formal and informal) immunological responses that activate when someone tries to change them. (Here’s some flavor to pump intuition on this.)
To improve something is to change it. There are few Pareto improvements available on the current margin, and those that exist are often not perceived as Pareto by all who would be touched by the change. So attempts to improve institutional decision-making trigger organizational immune responses by default.
These immune responses are often opaque and informal, especially in the first volleys. And they can arise emergently: top-down coordination isn’t required to generate them, only incentive gradients.
The New York Times’ assault on Scott Alexander (a) is an example to build some intuition of what this can look like: the ascendant power of Slate Star Codex began to feel threatening to the Times and so the Times moved against SSC.
16. taking dharma seriously a la @RomeoStevens76′s current research direction
I’ve since realized that this would be best accomplished by generalizing (and modernizing) to a broader category, which we’ve taken to referring to as valence studies.
19. worldview drift of elite EA orgs (e.g. @CSETGeorgetown, @open_phil) via mimesis being real and concerning
I’m basically saying that mimesis is a thing.
It’s hard to ground things objectively, so social structures tend to become more like the other social structures around them.
CSET is surrounded by and constantly interacts with DC-style think tanks, so it is becoming more like a DC-style think tank (e.g. suiting up starts to seem like a good idea).
Open Phil interfaces with a lot of mainstream philanthropy, and it’s starting to give away money in more mainstream ways.
Ah, the silent majority and the vocal minority.
But I have a feeling that the community is taking revenge on him for all the tension the recent events left behind. This is cruel. I’m honestly worried about whether the guy is okay. I hope he is.
The scapegoat mechanism comes to mind:
The key to Girard’s anthropological theory is what he calls the scapegoat mechanism. Just as desires tend to converge on the same object, violence tends to converge on the same victim. The violence of all against all gives way to the violence of all against one. When the crowd vents its violence on a common scapegoat, unity is restored. Sacrificial rites the world over are rooted in this mechanism.
I wrote in this direction a few years ago, and I’m very glad to see you clearly stating these points here.
From What’s the best structure for optimal allocation of EA capital? –
So EA is currently in a regime wherein the large majority of capital flows from a single source, and capital allocation is set by a small number of decision-makers.
Rough estimate: if ~60% of Open Phil’s grantmaking decisions are attributable to Holden, then 47.2% of all EA capital allocation, or $157.4M, was decided by one individual in 2017. 2018 & 2019 will probably have similar proportions.
It seems like EA entered into this regime largely due to historically contingent reasons (Cari & Dustin developing a close relationship with Holden, then outsourcing a lot of their philanthropic decision-making to him & the Open Phil staff).
It’s not clear that this structure will lead to optimal capital allocation...
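Spelling out the arithmetic behind the quoted estimate (a minimal sketch; the Open Phil share and total-capital figures are back-solved from the stated numbers, so treat them as implied rather than sourced):

```python
# Back-solving the quoted 2017 estimate. Only the 60%, 47.2%, and $157.4M
# figures come from the quote; everything else is derived/implied.
holden_share_of_open_phil = 0.60    # the ~60% assumption in the quote
holden_share_of_ea_capital = 0.472  # stated result
holden_capital_usd = 157.4e6        # stated result

# Implied share of all EA capital flowing through Open Phil:
open_phil_share = holden_share_of_ea_capital / holden_share_of_open_phil  # ~78.7%

# Implied total EA capital allocation in 2017:
total_ea_capital = holden_capital_usd / holden_share_of_ea_capital  # ~$333.5M

print(f"Implied Open Phil share of EA capital: {open_phil_share:.1%}")
print(f"Implied total 2017 EA capital allocation: ${total_ea_capital / 1e6:.1f}M")
```

In other words, the estimate only goes through if Open Phil accounted for roughly four-fifths of all EA capital allocation that year, which was the "single source" point above.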
… there is a lot we can actually do. We are currently working on it quite directly at Conjecture.
I was hoping this post would explain how Conjecture sees its work as contributing to the overall AI alignment project, and was surprised to see that that topic isn’t addressed at all. Could you speak to it?
Isn’t the point of being placed on leave in a case like this to (temporarily) remove the trustee from their duties and responsibilities while the situation is investigated, as their ability to successfully execute on their duties and responsibilities has been called into question?
(I’m not trying to antagonize here – I’m genuinely trying to understand the decision-making of EA leadership better, as I think it’s very important for us to be as transparent as possible right now, given that opacity around past decision-making seems to have contributed to bad outcomes.
You’ve certainly thought about this more than I have, and I want to learn more about your models here. But I don’t really follow how conflicting with the trustee’s duties disqualifies being placed on leave as a viable option, since at first brush that sorta seems like the point!)
Thanks, Claire. Can you comment on why Nick Beckstead and Will MacAskill were recused rather than placed on leaves of absence?
Thanks, Nicole! It’s helpful to hear updates like this from EA leadership in the midst of all these scandals.
Can you comment on why Nick Beckstead was recused rather than placed on a leave of absence?
Thank you for a good description of what this feels like. But I have to ask… do you still “want to join that inner circle” after all this? Because this reads like your defense of using a burner account is that it preserves your chance to enter/remain in an inner ring which you believe to be deeply unethical.
Anonymity is not useful solely for preserving the option to join the critiqued group. It can also help buffer against reprisal from the critiqued group.
See Ben Hoffman on this (a):
“Ayn Rand is the only writer I’ve seen get both these points right jointly:
There’s no benefit to joining the inner ring except discovering that their insinuated benefit does not exist.
Ignoring inner rings is refusing to protect oneself against a dangerous adversary.”
Positive knock-on effects from funding animal welfare are likely far greater than from funding global health on the present margin.