I strongly agree that being associated with EA in AI policy is increasingly difficult (as many articles and individuals’ posts on social media can attest), particularly in Europe, DC, and the Bay Area.
I appreciate Akash’s comment, and at the same time, I understand the purpose of this post is not to ask for people’s opinions about what CEA’s priorities should be, so I won’t go into much detail. I want to highlight that I’m really excited for Zach Robinson to lead CEA!
With my current knowledge of the situation in three different jurisdictions, I’ll simply comment that there is a huge problem related to EA connections and AI policy at the moment. I would support CEA getting strong PR support so that there is a voice defending EA rather than mostly receiving punches. I truly appreciate CEA’s communication efforts over the last year, and it’s very plausible that CEA needs more than one person working on this. One alternative is for most people working on AI policy to cut their former connections to EA, which I think would be a shame given the usually good epistemics and motivation the community brings. (In any case, the AI safety movement should become increasingly independent and “big tent” as soon as possible, and I’m looking forward to more energy being put into PR there.)
Caro
If the fallout from FTX has you concerned, it’s worth looking inward at your own organization and potentially other orgs. Are there parallels, like a weak board, conflicts of interest, questionable incentives, or a lack of risk management and crisis planning? Is liquidity an issue, or are there unconventional approaches in management? These red flags warrant closer inspection.
I agree that these decisions are going in the right direction. I think their resignations should have come earlier, given the severity of their conflicts of interest with FTX and the problems with their judgment of the situation.
(I still appreciate Nick and Will as individuals and immensely value their contributions to their fields.)
Thanks so much for your work, Will! I think this is the right decision given the circumstances, and one that will help EV move in a good direction. I know some mistakes were made, but I still want to recognize your positive influence.
I’m eternally grateful to you for getting me to focus on the question of “how to do the most good with our limited resources?”
I remember how I first heard about EA.
The unassuming flyer taped to the philosophy building wall first caught my eye: “How to do the most good with your career?”
It was October 2013, midterms week at Tufts University, and I was hustling between classes, focused on nothing but grades and graduation. But that disarmingly simple question gave me pause. It felt like an invitation to think bigger.
Curiosity drew me to the talk advertised on the flyer, given by some Oxford professor named Will MacAskill. I arrived to find just two other students in the room. None of us knew that Will would become so influential.
What followed was no ordinary lecture, but rather a life-changing conversation that has stayed with me for the past decade. Will challenged us to zoom out and consider how we could best use our limited time and talents to positively impact the world. With humility and nuance, he focused not on prescribing answers, but on asking the right questions.
Each of us left that classroom determined to orient our lives around doing the most good. His talk sent me on a winding career journey guided by this question. I dabbled in climate change policy before finding my path in AI safety thanks to 80K’s coaching.
Ten years later, I’m still asking myself that question Will posed back in 2013: How can I use my career to do the most good? It shapes every decision I make. (I’m arguably a bit too obsessed with it!) I know countless others can say the same.
So thank you, Will, for inspiring generations of people with your catalytic question. The ripples from that day continue to spread. Excited for what you’ll do next!
I’ve used the “Calm me” feature multiple times. I find it very easy to use during the day—taking just a few minutes off. I don’t have panic attacks but found it helpful to have a tool to reduce stress. I found it especially helpful around the release of GPT-4 and dealing with lots of worries about the speed of AI progress then. After a couple of exercises, I could go back to work and focus again on my AI governance work with renewed resolve.
I’m very supportive of MindEase’s growth and focus on panic attacks, but I honestly found it very useful as a general “relaxing and calming down” app.
My quick initial research:
The UK’s influence on DeepMind, a subsidiary of US-based Alphabet Inc., is substantial despite its parent company’s origin. This influence stems from DeepMind’s location in the UK (the jurisdiction principle), which mandates its compliance with the country’s stringent data protection laws such as the UK GDPR. Additionally, the UK’s Information Commissioner’s Office (ICO) has shown it can enforce these regulations, as exemplified by a ruling on a collaboration between DeepMind and the Royal Free NHS Foundation Trust. The UK government’s interest in AI regulation and DeepMind’s work with sensitive healthcare data further subject the company to UK regulatory oversight.

However, the recent merger of DeepMind with Google Brain, an American entity, may reduce the UK’s direct regulatory influence. Despite this, the UK can still affect DeepMind’s operations via its general AI policy, procurement decisions, and data protection laws. Moreover, voices like Matt Clifford, the founder and CEO of Entrepreneur First, suggest a push for greater UK sovereign control over AI, which could influence future policy decisions affecting companies like DeepMind.
I’m looking for insights on the potential regulatory implications this could have, especially in relation to the UK’s AI regulation policies.
Given that DeepMind was a UK-based subsidiary of Alphabet Inc., does the UK still have jurisdiction to regulate it after the merger with Google Brain?
On the other hand, how much weight does US regulation carry for DeepMind?
I appreciate any insights or resources you can share on this matter. I understand this is a complex issue, and I’m keen to understand it from various perspectives.
This post is beautiful, rational, and useful—thank you!
As the beginning of a reply to the question “What does a ‘realistic best case transition to transformative AI’ look like?”, we could perhaps say that a worthwhile intermediate goal is reaching a Long Reflection, in which we can use safe (probably narrow) AIs to help us build a utopia for the many years to come.
Congrats on launching cFactual; it sounds great!
Exploring how you can help launch small projects or megaprojects could also be interesting. If we expect this century or decade to be “wild”, the EA community will create many new organizations and projects to deal with new challenges. It would be great to help these projects have a solid ToC, governance structure, etc., from the beginning. I understand that these projects may be on a slightly longer timeline (e.g. “the first year of the creation of a new AI governance organization...”), but it could be great. I’d personally feel more confident about launching a new large project if I had cFactual to help!
(However, it is very difficult to get taxis to and from there, which often takes 30 minutes.) Edit: people can wait up to an hour and a half to get a taxi from Wytham, which isn’t super practical.
I agree with Adam here that it’s better to host all attendees in one place during retreats.
However, I am not sure how many bedrooms Wytham has. It could be that a lot of attendees have to rent rooms outside of Wytham anyway, which makes the deal worse.
Agreed that it would be very helpful to have a widely distributed survey about this, ideally with in-depth conversations. Quantitative and qualitative data seem to be lacking, while there seems to be a lot of anecdotal evidence. I wonder whether CEA or RP could lead such work, or whether an independent organization should do it.
Very excited about this competition! Is it still happening?
In this case, it seems like a very good strategy for the world, too, in that it doesn’t politicize one issue too much (as climate change has been politicized in the US because it was tied to the Democrats rather than to both sides of the aisle).
More opportunities:
The AI Safety Microgrant Round: “We are offering microgrants up to $2,000 USD with the total size of this round being $6,000 USD”; “We believe there are projects and individuals in the AI Safety space who lack funding but have high agency and potential.”; “Fill out the form at Microgrant.ai by December 1, 2022.”
Nonlinear Emergency Funding: “Some of you counting on Future Fund grants are suddenly finding yourselves facing an existential financial crisis, so, inspired by the Covid Fast Grants program, we’re trying something similar for EA. If you are a Future Fund grantee and <$10,000 of bridge funding would be of substantial help to you, fill out this short form (<10 mins) and we’ll get back to you ASAP.”
+1 for way more investigations and background checks for major donations, megaprojects, and association with EA.
I agree that the tone was too tribalistic, but the content is correct.
(Seems a bit like a side-topic, but you can read more about Leverage on this EA Forum post and, even more importantly, in the comments. I hope that’s useful for you! The comments definitely changed my views—negatively—about the utility of Leverage’s outputs and some cultural issues.)
For what it’s worth, these different considerations can be true at the same time:
“He may have his own axe to grind.”: that’s probably true, given that he was fired by CEA.
“Kerry being at CEA for four years makes it more important to pay serious attention to what he has to say even if it ultimately doesn’t check out.”: it also seems like he may have particularly useful information and context.
“He’s now the program manager at a known cult that the EA movement has actively distanced itself from”: it does seem like Leverage is shady and doesn’t have very good culture or epistemics, which doesn’t reflect well on Kerry.
So I would personally be inclined to pay close attention to his criticisms of CEA. At the same time, I would need more “positive” context from others to be able to trust what he says.
Here’s a report on Positive AI Economic Futures published by the World Economic Forum and supported by the Center for Human-Compatible AI (CHAI).
Could you explain a bit more what you mean by “confidence to forge our own path”? If the validity of claims made about AI safety is systematically attacked because of EA connections, I think there is strong reason to worry. It makes it more difficult for many people to have an impact on AI policy.