AI strategy & governance. Blog: Not Optional.
Zach Stein-Perlman
FLI open letter: Pause giant AI experiments
2022 AI expert survey results
My favorite AI governance research this year so far
How to think about slowing AI
[Question] AI strategy career pipeline
It’s our policy to not discuss the specifics of people’s applications with other people besides them. I don’t think it would be appropriate for me to give more detail about why you were rejected publicly, so it is hard to really reply to the substance of this post, and share the other side of this story.
This of course is correct as a default policy. But if Constance explicitly said she wants to have this conversation more publicly, would you comment publicly? Or could you comment in a private message to her, and endorse her sharing the message if she chose to?
(Good luck with EAG DC in the meantime.)
GovAI: Towards best practices in AGI safety and governance: A survey of expert opinion
Data point: in the three cases I know of, undergraduates with around 30–40 hours of engagement with EA ideas were accepted to EAG London 2022.
AI policy ideas: Reading list
Ajeya’s TAI timeline shortened from 2050 to 2040
I’ve left AI Impacts; I’m looking for jobs/projects in AI governance. I have plenty of runway; I’m looking for impact, not income. Let me know if you have suggestions!
(Edit to clarify: I had a good experience with AI Impacts.)
PSA about credentials (in particular, a bachelor’s degree): they’re important even for working in EA and AI safety.
When I dropped out of college to work on AI safety, I thought credentials were mostly important as evidence of performance for people unfamiliar with your work, and as requirements in high-bureaucracy institutions (academia, government). It turns out that credentials matter, for rational, optics-related reasons, even when working with people who know you (so the credential provides no extra evidence) and who are willing to defy convention. Many AI governance professionals and orgs seem worried, often rationally, about appearing unserious by hiring or publicly collaborating with the uncredentialed. Moreover, irrationally credentialist organizations are very common and important; they may even comprise a substantial fraction of EA jobs and x-risk-focused AI governance jobs (which I had expected to be more convention-defying). And sometimes an organization is credentialist even when it's led by weird AI safety people, because those people operate under constraints.
Disclaimer: the evidence from my own experiences for these claims is pretty weak. This point's epistemic status is considerations and impressions from a few experiences, more than established facts worthy of a PSA.
Upshot: I’d caution people against dropping out of college to increase impact unless they have a great plan.
(Edit to clarify: this paragraph is not about AI Impacts — it’s about everyone else.)
Frontier AI Regulation
I used a diff checker to find the differences between the current post and the original post. There seem to be two:
“Alice worked there from November 2021 to June 2022” became “Alice travelled with Nonlinear from November 2021 to June 2022 and started working for the org from around February”
“using Lightcone funds” became “using personal funds”
So it seems Kat’s comment is wrong and Emerson’s is misleading/wrong. They are free to point to another specific edit if it exists.
Update: Kat guesses she was thinking of changes from a near-final draft rather than changes from the first published version.
This was the press release; the actual order has now been published.
One safety-relevant part:
4.2. Ensuring Safe and Reliable AI. (a) Within 90 days of the date of this order, to ensure and verify the continuous availability of safe, reliable, and effective AI in accordance with the Defense Production Act, as amended, 50 U.S.C. 4501 et seq., including for the national defense and the protection of critical infrastructure, the Secretary of Commerce shall require:
(i) Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records regarding the following:
(A) any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats;
(B) the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights; and
(C) the results of any developed dual-use foundation model’s performance in relevant AI red-team testing based on guidance developed by NIST pursuant to subsection 4.1(a)(ii) of this section, and a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security. Prior to the development of guidance on red-team testing standards by NIST pursuant to subsection 4.1(a)(ii) of this section, this description shall include the results of any red-team testing that the company has conducted relating to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives; and
(ii) Companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster.
(b) The Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of National Intelligence, shall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements of subsection 4.2(a) of this section. Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements for:
(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and
(ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.
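For concreteness, here is a minimal sketch (mine, not from the order) of the interim reporting thresholds in 4.2(b). It assumes the common ~6 × parameters × tokens approximation for training compute; the order specifies no estimation method, and the example model and cluster numbers below are hypothetical.

```python
# Interim reporting thresholds from section 4.2(b) of the executive order.
MODEL_THRESHOLD_FLOP = 1e26            # general dual-use foundation models
BIO_MODEL_THRESHOLD_FLOP = 1e23        # models trained primarily on biological sequence data
CLUSTER_THRESHOLD_FLOP_PER_SEC = 1e20  # co-located computing clusters

def estimated_training_flop(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate via the standard ~6ND heuristic."""
    return 6 * parameters * tokens

def model_must_report(parameters: float, tokens: float,
                      bio_sequence_data: bool = False) -> bool:
    """Does an estimated training run exceed the applicable reporting threshold?"""
    threshold = BIO_MODEL_THRESHOLD_FLOP if bio_sequence_data else MODEL_THRESHOLD_FLOP
    return estimated_training_flop(parameters, tokens) > threshold

def cluster_must_report(num_chips: int, peak_flop_per_sec_per_chip: float) -> bool:
    """Theoretical peak capacity check; ignores the order's 100 Gbit/s networking condition."""
    return num_chips * peak_flop_per_sec_per_chip >= CLUSTER_THRESHOLD_FLOP_PER_SEC

# A hypothetical 1-trillion-parameter model trained on 30 trillion tokens
# uses roughly 6 * 1e12 * 3e13 = 1.8e26 FLOP, crossing the 1e26 threshold.
print(model_must_report(1e12, 3e13))      # True
# 25,000 chips at ~1e15 FLOP/s each is 2.5e19 FLOP/s, below the 1e20 threshold.
print(cluster_must_report(25_000, 1e15))  # False
```

Note how much lower the biological-sequence threshold is (a factor of 1,000), reflecting the order's particular concern, visible in 4.2(a)(i)(C) above, with lowering barriers to biological weapons.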
Choosing causes re Flynn for Oregon
I’m a fan of donation swapping, but I don’t think this is legal under US campaign finance law. (If someone knows for sure, please tell us. Edit: Peter says it’s not legal. Later edit: Peter now says it’s illegal only for non-Americans, but I also think it’s illegal for Americans to use it to get around the individual contribution limit.)
Edit, meta: several people have downvoted Caleb’s comment after he no longer endorsed it, and some have downvoted his reply too (both to below zero). This isn’t right, epistemically or in terms of desert. Downvoting a retracted idea doesn’t improve the conversation, and Caleb’s comments are clearly good- and truth-seeking. If you want to punish the author for saying something that turned out to be unpopular, you should consider the effects of that policy (here and more generally) on the community’s epistemic culture. See also Oliver’s comment.
Edit: Caleb’s comments are safely back in nonnegative territory, for now, but I’ll leave the above note since it’s still worth saying.
I don’t know him, but I really want Carrick in Congress. Donating to his campaign is not unreasonable as a hits-based giving opportunity, since it would be great for him to be in the House...
...but I don’t think most readers of this post would appreciate how unlikely he is to win. People who would be great politicians often aren’t great candidates. Unless there’s relevant private information (e.g., he’s expecting endorsements from major Democrats), it’s quite unlikely that Carrick—whose name isn’t known in the district and who doesn’t have government experience—will get more votes in the Democratic primary than the candidates with name recognition, state government experience, and endorsements from many state government officials. (And if he doesn’t win, marginal performance in a failed House primary isn’t very helpful to future pursuits.) I wish I knew how Carrick plans to win: I wish we lived in the world where you could win elections just by dazzling voters with your policy chops, but we don’t.
I’m an elections junkie. I wish I could vote for Carrick, and I really hope he wins. But I would not feel comfortable recommending others donate or volunteer until his campaign gives us reason to believe that he has a real chance.
Edit, 2.5 days later. I still think Carrick’s chances are pretty low and that some people in these comments are excessively optimistic because they misweight the relevant factors, which is OK (not everyone needs to be knowledgeable about elections) but skews the sentiment here. And I still have meta-level concerns about how we decide to pursue certain interventions (which I plan to share after the primary). But I now think that donating is a highly effective thing to do in expectation (although on balance I would rather give to the Long-Term Future Fund), because a Carrick win seems quite high-value.
Thanks for this post.
A few data points and reactions from my somewhat different experiences with EA:
I’ve known many EAs. Many have been vegan and many have not (I’m not). I’ve never seen anyone “treat [someone] as non-serious (or even evil)” based on their diet.
A significant minority achieves high status across EA contexts while loudly disagreeing with utilitarianism.
You claim that EA takes as given “Not living up to this list is morally bad. Also sort of like murder.” Of course failing to save lives is sort of like murder, for sufficiently weak “sort of.” But at the level of telling people what to do with their lives, I’ve always seen community leaders endorse things like personal wellbeing and non-total altruism (and not always just for instrumental, altruistic reasons). The rank-and-file and high-status alike talk (online and offline) about having fun. The vibe I get from the community is that EA is more of an exciting opportunity than a burdensome obligation. (Yes, that’s probably an instrumentally valuable vibe for the community to have—but that means that ‘having fun is murder’ is not endorsed by the community, not the other way around.)
[Retracted; I generally support noting disagreements even if you’re not explaining them; see Zvi’s reply]
It feels intellectually lazy to “strongly disagree” with principles like “The best way to do good yourself is to act selflessly to do good” and then not explain why. To illustrate, here’s my confused reading of your disagreement: maybe you narrowly disagree that selflessness is the optimal psychological strategy for all humans, but of course EA doesn’t believe that either (though maybe you think it does). Or maybe you have a deeper criticism of “The best way to do good yourself”… but I can’t guess what that is.

Relatedly, you claim that you are somehow not allowed to state real critiques: “There are also things one is not socially allowed to question or consider, not in EA in particular but fully broadly. Some key considerations are things that cannot be said on the internet, and some general assumptions that cannot be questioned are importantly wrong but cannot be questioned.” “Signals are strong [real criticism] is unwelcome and would not be rewarded.” I just don’t buy it. My experiences strongly suggest that the community goes out of its way to be open to good-faith criticism, in more than a pat-ourselves-on-the-back-for-being-open-to-criticism manner. Insofar as you have had different experiences that you decline to discuss explicitly, fine; you’ll arrive at different beliefs. But your observations aren’t useful to the rest of us if you don’t share them, including the meta-observation that you’re supposedly not allowed to say certain things.
I think you gesture toward useful criticism; it would be useful to me if you actually made that criticism. You might change my mind about something! But since you don’t share the criticism itself, this post isn’t written in a way that makes it easy for even epistemically virtuous EAs to change their minds, even if you’ve correctly identified something important.