Researching Causality and Safe AI at Oxford.
Previously, founder (with help from Trike Apps) of the EA Forum.
Discussing research etc at https://twitter.com/ryancareyai.
The main thing I doubt is that Sam knew at the time that he was gifting the board to doomers. Ilya was a loyalist and non-doomer when appointed. Elon was, I guess, some mix of doomer and loyalist at the start. Given how AIS worries generally increased in SV circles, more likely than not some of D’Angelo, Hoffman, and Hurd moved toward the “doomer” pole over time.
Nitpicks:
I think Dario and others would’ve also been involved in setting up the corporate structure
Sam never gave the “doomer” faction a near majority. That only happened because 2-3 “non-doomers” left and Ilya flipped.
Causal Foundations is probably 4-8 full-timers, depending on how you count the small-to-medium slices of time from various PhD students. Several of our 2023 outputs seem comparably important to the deception paper:
Towards Causal Foundations of Safe AGI, The Alignment Forum—the summary of everything we’re doing.
Characterising Decision Theories with Mechanised Causal Graphs, arXiv—the most formal treatment yet of TDT and UDT, together with CDT and EDT in a shared framework.
Human Control: Definitions and Algorithms, UAI—a paper arguing that corrigibility is not exactly the right thing to be aiming for to assure good shutdown behaviour.
Discovering Agents, Artificial Intelligence Journal—an investigation of the “retargetability” notion of agency.
What if you just pushed it back one month—to late June?
2 - I’m thinking more of the “community of people concerned about AI safety” than EA.
1, 3, 4 - I agree there’s uncertainty, disagreement and nuance, but I think if NYT’s (summarised) or Nathan’s version of events is correct (and they do seem to me to make more sense than other existing accounts), then the board looks somewhat like the “good guys”, albeit ones that overplayed their hand, whereas Sam looks somewhat “bad”, and I’d bet that over time more reasonable people will come around to such a view.
It’s a disappointing outcome—it currently seems that OpenAI is no more tied to its nonprofit goals than before. A wedge has been driven between the AI safety community and OpenAI staff, and to an extent, Silicon Valley generally.
But in this fiasco, we at least were the good guys! The OpenAI CEO shouldn’t control its nonprofit board, or compromise the independence of its members, who were doing broadly the right thing by trying to do research and perform oversight. We have much to learn.
Yeah, I think EA just neglects the downside of career whiplash a bit. Another instance is how EA orgs sometimes offer internships where only a tiny fraction of interns will get a job, or hire and then quickly fire staff. In a more ideal world, EA orgs would value rejected & fired applicants much more highly than non-EA orgs do, and so low-hit-rate internships and rapid firing would be much less common in EA than outside it.
It looks like, on net, people disagree with my take in the original post.
I just disagreed with the OP because it’s a false dichotomy; we could just agree with the true things that activists believe, and not the false ones, and not go based on vibes. We desire to believe that mech-interp is mere safety-washing iff it is, and so on.
My response to the third link is: https://forum.effectivealtruism.org/posts/i7DWM6zhhPr2ccq35/thoughts-on-ea-post-ftx?commentId=pJMPeYWzpPYNrhbHT
On the meta-level, anonymously sharing negative psychoanalyses of people you’re debating seems like very poor behaviour.
Now, I’m a huge fan of anonymity. Sometimes, one must criticise some vindictive organisation, or political orthodoxy, and it’s needed, to avoid some unjust social consequences.
In other cases, anonymity is inessential. One wants to debate in an aggressive style while avoiding the just social consequences of doing so. When anonymous users misbehave, we think worse of anonymous users in general. If people always write anonymously, then writing anonymously is no longer a political statement. We no longer see anonymous writing as a sign that someone might be hiding from unjust retaliation.
Now, Aprilsun wants EA to mostly continue as normal, which is a majority position in EA leadership. And not to look too deeply into who is to blame for FTX, which helps to defend EA leadership. I don’t see any vindictive parties or social orthodoxies being challenged. So why would anonymity be needed?
I’m sorry, but it’s not an “overconfident criticism” to accuse FTX of investing stolen money, when this is something that 2-3 of the leaders of FTX have already pled guilty to doing.
This interaction is interesting, but I wasn’t aware of it (I’ve only reread a fraction of Hutch’s messages since knowing his identity) so to the extent that your hypothesis involves me having had some psychological reaction to it, it’s not credible.
Moreover, these psychoanalyses don’t ring true. I’m in a good headspace, giving FTX hardly any attention. Of course, I am not without regret, but I’m generally at peace with my past involvement in EA—not in need of exotic ways to process any feelings. If these analyses being correct would have taken some wind out of my sails, then their being so silly ought to put some more wind in.
Creditors are expected by Manifold markets to receive only 40c on each dollar that was invested on the platform (I didn’t notice this info in the post when I previously viewed it). And we do know why there is money missing: FTX stole it and invested it in their hedge fund, which gambled it away.
The annual budgets of Bellingcat and ProPublica are in the single-digit millions. (The latter has had negative experiences with EA donations, but is still relevant for sizing up the space.)
It’s hard to say, but the International Federation of Journalists has 600k members, so maybe there are 6M journalists worldwide, of which maybe 10% are investigative journalists (600k IJs). If they are paid something like $50k/year, that’s $30B used for IJ.
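As a sanity check, here is that Fermi estimate spelled out (a minimal sketch; the membership multiplier, investigative share, and salary figure are all rough assumptions from the comment above, not measured data):

```python
# Rough Fermi estimate of worldwide spending on investigative journalism (IJ).
# Every input is an assumption, not a measured figure.
ifj_members = 600_000        # International Federation of Journalists membership
membership_share = 0.10      # assume the IFJ covers ~10% of journalists worldwide
investigative_share = 0.10   # assume ~10% of journalists do investigative work
avg_salary_usd = 50_000      # assumed average salary per year

journalists_worldwide = ifj_members / membership_share                   # ~6M
investigative_journalists = journalists_worldwide * investigative_share  # ~600k
annual_spend_usd = investigative_journalists * avg_salary_usd            # ~$30B

print(f"~{investigative_journalists:,.0f} IJs, ~${annual_spend_usd / 1e9:.0f}B/year on salaries")
```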
Surely, from browsing the internet and newspapers, it’s clear that less than 1% (<60k) of journalists are “investigative”. And I bet that half of the impact comes from an identifiable 200-2k of them, such as former Pulitzer Prize winners, ProPublica, Bellingcat, and a few other venues.
Anthropic is small compared with Google and OpenAI+Microsoft.
I hope this is just cash and not a strategic partnership, because if it is, then it would mean there is now a third major company in the AGI race.
There are also big incentive gradients within longtermism:
To work on AI experiments rather than AI theory (higher salary, better locations, more job security)
To work for a grantmaker rather than a grantee (for job security), and
To work for an AI capabilities company rather than outside (higher salary)
this is something that ends up miles away from ‘winding down EA’, or EA being ‘not a movement’.
To be clear, winding down EA is something I was arguing we shouldn’t be doing.
I feel like we’re closer to agreement here, but on reflection the details of your plan here don’t sum up to ‘end EA as a movement’ at all.
At a certain point it becomes semantic, but I guess readers can decide, when you put together:
the changes in sec 11 of the main post
ideas about splitting into profession-oriented subgroups, and
shifting whether we “motivate members re social pressures” and expose junior members to risk
whether or not it counts as changing from being a “movement” to something else.
JWS, do you think EA could work as a professional network of “impact analysts” or “impact engineers” rather than as a “movement”? Ryan, do you have a sense of what that would concretely look like?
Well I’m not sure it makes sense to try to fit all EAs into one professional community that is labelled as such, since we often have quite different jobs and work in quite different fields. My model would be a patchwork of overlapping fields, and a professional network that often extends between them.
It could make sense for there to be a community focused on “effective philanthropy”, which would include OpenPhil, Longview, philanthropists, and grant evaluators. That would be as close to “impact analysis” as you would get, in my proposal.
There would be an effective policymaking community too.
And then a bevy of cause-specific research communities: evidence-based policy, AI safety research, AI governance research, global priorities research, in vitro meat, global catastrophic biorisk research, global catastrophic risk analysis, global health and development, and so on.
Lab heads and organisation leaders in these research communities would still know that they ought to apply to the “effective philanthropy” orgs to fund their activities. And they would still give talks at universities to try to attract top talent. But there wouldn’t be a common brand or cultural identity, and we would frown upon the risk-increasing factors that come from the social movement aspect.
Hmm, OK. Back when I met Ilya, around 2018, he was radiating excitement that his next idea would create AGI, and didn’t seem sensitive to safety worries. I also thought it was “common knowledge” that his interest in safety increased substantially between 2018 and 2022, and that’s why I was unsurprised to see him in charge of superalignment.
Re Elon-Zillis, all I’m saying is that it looked to Sam like the seat would belong to someone loyal to him at the time the seat was created.
You may well be right about D’Angelo and the others.