AISN #33: Reassessing AI and Biorisk Plus, Consolidation in the Corporate AI Landscape, and National Investments in AI
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Subscribe here to receive future versions.
Listen to the AI Safety Newsletter for free on Spotify.
This week, we cover:
Consolidation in the corporate AI landscape, as smaller startups join forces with larger funders.
Several countries have announced new investments in AI, including Singapore, Canada, and Saudi Arabia.
Congress’s budget for 2024 provides some but not all of the requested funding for AI policy. The White House’s 2025 proposal makes more ambitious requests for AI funding.
How will AI affect biological weapons risk? We reexamine this question in light of new experiments from RAND, OpenAI, and others.
AI Startups Seek Support From Large Financial Backers
As AI development demands ever-increasing compute resources, only well-resourced developers can compete at the frontier. In practice, this means that AI startups must either partner with the world’s largest tech companies or risk falling behind.
In this article, we cover some recent developments in this trend of consolidation.
Microsoft Announces Partnership with Mistral. Last month, Microsoft acquired a minority stake in the French AI startup Mistral. The partnership also grants Mistral access to Microsoft’s Azure infrastructure to develop and host its models.
The partnership is a change of pace for Mistral, which lobbied against the EU AI Act on the grounds that, as a small startup, it couldn’t comply with regulatory requirements. Its deal with Microsoft has some wondering whether Mistral was arguing in good faith.
Microsoft (Essentially) Acquires Inflection. Microsoft has hired most of Inflection’s staff, including Mustafa Suleyman and Karén Simonyan, who will lead the newly formed Microsoft AI division. The company also paid Inflection $650 million to license its models.
Given Microsoft’s already significant stake in AI (Microsoft is OpenAI’s largest shareholder), this move could be designed to acquire Inflection in practice while avoiding potential antitrust issues.
Microsoft and OpenAI Plan to Build $100 Billion Supercomputer. In addition to its recent investments in Inflection and Mistral, Microsoft is also in talks with OpenAI to build a new supercomputer called “Stargate.” Stargate would be two orders of magnitude more expensive than some of the world’s largest existing compute clusters.
If the next generation of frontier AI development requires this magnitude of compute, then it’s possible that only the world’s largest companies and governments will be able to keep up.
Amazon Invests Additional $2.75 Billion in Anthropic. Microsoft isn’t the only tech company getting involved. Amazon increased its investment in Anthropic to $4 billion, marking the largest investment in another company in Amazon’s history. As part of the deal, Amazon Web Services (AWS) will become Anthropic’s primary cloud compute provider.
Instability at Stability AI. Last week, Emad Mostaque resigned as CEO of Stability AI. The company, best known for its image-generating model Stable Diffusion, has recently seen an exodus of many key developers. Stability AI is facing a cash crunch as it struggles to raise sufficient funds to compete at the frontier of AI development.
National AI Investments
With small private AI developers unable to compete with larger labs, some countries have decided to make national investments in AI.
Canada Invests $2B in AI; Establishes AI Safety Institute. Canadian Prime Minister Justin Trudeau announced that Canada will invest $2B in “a new AI Compute Access Fund to provide near-term support to researchers and industry.” Canada will also establish an AI Safety Institute with $50 million in funding. This is less than the UK AI Safety Institute’s $100M in funding, but much more than the $10M provided by Congress for the US AI Safety Institute.
Singapore invests $375M in AI chips. Singapore’s new budget includes $15M in scholarships for students focused on AI, and $375M to purchase access to GPUs, the specialized computer chips used for developing and running AI systems. The country has previously engaged with efforts to govern AI and wrote publicly about the need for evaluating AI risks.
Saudi Arabia Plans $40 Billion AI Investment. Saudi Arabia’s Public Investment Fund is in talks to partner with the venture capital firm Andreessen Horowitz to invest in AI. The new fund would make Saudi Arabia the world’s largest investor in AI.
Federal Spending on AI
Congress finalized the budget for FY2024 in March, with somewhat disappointing allocations for key agencies working on AI including NSF, BIS, and NIST. Yet the process for planning next year’s spending is already underway, with the White House submitting its proposed FY2025 budget last week.
The White House requests ambitious funding of AI-related efforts to support the CHIPS and Science Act and its Executive Order on AI. Here are a few key components of the proposed budget.
Department of Commerce. The budget requests $65 million for NIST to establish the U.S. AI Safety Institute. This is far more ambitious than the $10M Congress provided for the Institute in the FY2024 budget.
The Institute will “operationalize NIST’s AI Risk Management Framework by creating guidelines, tools, benchmarks, and best practices for evaluating and mitigating dangerous capabilities and conducting evaluations including red-teaming to identify and mitigate AI risk.” Importantly, the NIST AI RMF was developed before ChatGPT and other advances in generative AI. NIST is working to update their guidance accordingly.
National Science Foundation. The budget requests $729 million for the National Science Foundation to support research and development in AI, a 10% increase from last year’s budget.
It also requests $30 million for the second year of the pilot National AI Research Resource (NAIRR). NAIRR supports AI researchers who lack access to sufficient resources like compute and data, and prioritizes research into safe, secure, and trustworthy AI. However, this is far less than the program would receive under the CREATE AI Act, which would authorize $1B for it.
Department of Energy. The budget requests $335 million for AI R&D within the Department of Energy, a 54 percent increase over last year’s spending. It also requests an additional $37 million for DOE’s National Nuclear Security Administration (NNSA) to assess AI models for chemical, biological, radiological, and nuclear (CBRN) misuse risks.
Chief AI Officers and AI talent. One of the key goals of the White House’s executive order was to increase the technical capacity and AI talent of federal agencies. Accordingly, the budget requests $70 million for various agencies, including the Departments of Homeland Security, Agriculture, Housing and Urban Development, Justice, Labor, State, Transportation, and the Treasury, to establish Chief AI Officers and promote the safe and responsible use of AI within their respective domains. It also requests $32 million for the U.S. Digital Service (USDS), General Services Administration, and OPM to support hiring AI talent across the federal government.
The proposed federal budget is ambitious, and it could face significant challenges in Congress during this election year. Still, it signals the current administration’s commitment to investing in AI while addressing its risks.
An Updated Assessment of AI and Biorisk
Last June, researchers at MIT found that chatbots could help users access information about biological weapons. But this study was criticized on the grounds that the same information could be found in textbooks or online.
This is a fair criticism, and it underscores the importance of focusing on marginal risk — that is, whether an AI system creates risks that would not otherwise be present. As we’ll discuss below, recent studies from RAND and OpenAI suggest that current chatbots do not substantially increase the marginal risk of biological weapons development.
But this doesn’t mean there is no risk from AI and biological weapons. While current AI systems may lack certain dangerous capabilities, future AIs with more general capabilities could pose greater threats. This article explains recent research on AI and biorisk, explains the threats that could arise in future AI systems, and proposes measures to mitigate those risks.
RAND and OpenAI compare LLMs to internet access. To assess the marginal risk from AI in aiding bioweapons development, OpenAI and RAND conducted two separate studies where they asked participants to develop plans for building bioweapons. Some people were given access to LLMs and the internet, while others only had access to the internet. Neither study found that LLMs significantly increased the participants’ ability to plan bioweapons attacks.
There are important caveats to these results. The studies sometimes used models that had been trained to refuse questions about bioweapons development, which may not reflect the risk from models whose guardrails have been circumvented by fine-tuning or adversarial attacks. Others have argued that the OpenAI study set too strict a threshold for statistical significance.
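To see why the choice of threshold matters, here is a minimal sketch with synthetic numbers (not data from either study, and the correction factor is arbitrary): the same modest uplift can clear an uncorrected significance threshold while failing a stricter, multiple-comparison-corrected one.

```python
# Illustrative only: synthetic scores, not data from the OpenAI or RAND studies.
# The point is that the choice of significance threshold (raw alpha vs. one
# corrected for multiple comparisons) can determine whether a modest uplift
# from LLM access gets reported as meaningful.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=5.0, scale=2.0, size=50)    # internet-only group scores
treatment = rng.normal(loc=5.8, scale=2.0, size=50)  # internet + LLM group scores

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"p-value: {p_value:.3f}")

for alpha in (0.05, 0.05 / 8):  # uncorrected vs. a Bonferroni-style corrected threshold
    verdict = "flagged as significant" if p_value < alpha else "not flagged"
    print(f"alpha = {alpha:.4f}: uplift {verdict}")
```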
In the model card for Claude 3, Anthropic reported results from similar human trials measuring performance on biorisk-relevant tasks. They found that participants were slightly more accurate and efficient when given access to LLMs lacking safeguards, but the effect did not pass Anthropic’s threshold for internal review.
Two kinds of AI biorisk: foundation models and biology-specific tools. Researchers have previously distinguished between two kinds of AI biorisk. Large language models (LLMs) might expand access to existing bioweapons by answering novices’ questions. Biological design tools (BDTs) trained on genetic or protein data, by contrast, could raise the ceiling of harm by helping malicious actors design more lethal or transmissible pathogens.
Over time, however, the distinction between these two kinds of models and two types of risks could become blurred. Already, large language models can accept visual inputs. For example, a user could take pictures of lab equipment and ask GPT-4V for help with their experiments. This would make it easier for non-experts to successfully work in wet labs.
Moreover, BDTs could be integrated into language models. ChemCrow is an AI system that uses a large language model to operate 18 tools within a chemistry lab. Similar setups in biology could reduce the expertise needed to effectively use biology-specific tools.
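The underlying pattern is simple: a language model proposes which tool to call and with what arguments, and a dispatcher executes the call. The sketch below is a deliberately minimal, hypothetical version of that loop; the `plan()` stub and the tool names are placeholders, not ChemCrow’s actual interface.

```python
# Minimal sketch of LLM tool use: a model proposes tool calls, a dispatcher runs them.
# plan() is a stub standing in for a real LLM call; the tool registry is hypothetical.
from typing import Callable, Dict, List


def plan(task: str) -> List[dict]:
    """Stand-in for an LLM that decomposes a task into tool invocations."""
    return [{"tool": "lookup_compound", "args": {"name": "caffeine"}}]


TOOLS: Dict[str, Callable[..., str]] = {
    "lookup_compound": lambda name: f"(stub) physicochemical properties of {name}",
}


def run(task: str) -> List[str]:
    results = []
    for step in plan(task):
        tool = TOOLS[step["tool"]]            # look up the requested tool
        results.append(tool(**step["args"]))  # execute it with the proposed arguments
    return results


print(run("Summarize the safety profile of caffeine."))
```

Systems like ChemCrow add many more tools and a feedback loop in which tool outputs are fed back to the model, but the dispatch pattern is the same.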
What are biological design tools? Biological design tools (BDTs) are often architecturally similar to LLMs, but are trained on biological sequence data – such as the nucleotide sequences that comprise DNA, or the amino acid sequences that comprise proteins – rather than natural language text. This allows them to perform a wide range of biology research tasks, such as predicting the structure and function of proteins. For example, DeepMind’s AlphaFold was trained on protein sequences to predict the 3D structure of a protein.
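To make the “same architecture, different data” point concrete, here is a toy illustration (not any specific BDT’s pipeline) of how a protein is turned into tokens for a sequence model, much as words are for an LLM.

```python
# Toy illustration: a protein is a string over a 20-letter amino-acid alphabet,
# so it can be tokenized for a sequence model much like text. This vocabulary
# and encoding are illustrative, not a specific BDT's preprocessing.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
token_to_id = {aa: i for i, aa in enumerate(AMINO_ACIDS)}


def encode(protein: str) -> list[int]:
    """Map an amino-acid string to integer token ids for a sequence model."""
    return [token_to_id[aa] for aa in protein]


print(encode("MKTAYIAK"))  # a short, made-up protein fragment
```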
The White House has established reporting requirements for models trained on biological sequence data using more than 10^23 operations. This is a lower threshold than for LLMs, reflecting concerns that smaller BDTs could pose more acute risks.
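For a sense of scale, a rough way to place a model against these thresholds is the common approximation that training compute is about 6 × parameters × training tokens. The model and dataset sizes below are hypothetical, chosen to show that even a fairly large sequence model can sit just below the 10^23 threshold while remaining far below the Executive Order’s general 10^26 threshold, which connects to the regulatory gap discussed next.

```python
# Back-of-the-envelope check against the reporting thresholds, using the common
# approximation training_compute ~= 6 * parameters * training_tokens.
# The model and dataset sizes here are hypothetical, chosen for illustration.
BIO_THRESHOLD = 1e23      # reporting threshold for biological-sequence models
GENERAL_THRESHOLD = 1e26  # the Executive Order's general dual-use model threshold

params = 15e9   # hypothetical 15B-parameter sequence model
tokens = 1e12   # hypothetical 1 trillion training tokens
train_flops = 6 * params * tokens

print(f"Estimated training compute: {train_flops:.1e} operations")
print("Above biological-sequence threshold:", train_flops > BIO_THRESHOLD)   # False
print("Above general-purpose threshold:", train_flops > GENERAL_THRESHOLD)   # False
```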
BDTs are open-source and scaling quickly. A report from Epoch found that researchers have been quickly scaling up BDTs, that nearly all BDTs and their training data are open source, and that there is no standardized risk assessment for BDTs. Nearly all frontier BDTs are below the Executive Order reporting threshold, revealing a potential regulatory gap.
Different BDTs affect the risk landscape differently. This paper by the Centre for Long-Term Resilience categorizes BDTs and characterizes their risk profiles. First, BDTs and other AI tools can be used in different stages of bioweapons development. Second, different categories of BDTs may shift the bioweapons offense-defense balance differently. Some BDTs, such as vaccine design tools, are more likely to increase defensive capabilities than offensive capabilities.
Improving our biosecurity. One direct option to mitigate AI-enabled biorisk would be to regulate the development and use of AI models. This might mean more aggressive model auditing requirements, for example, or limiting access to biological sequence data for training.
Another option, however, is to invest more in standard biodefense. This blog post surveys some proposals, such as:
Collect and sequence wastewater samples in airports and other travel hubs to detect new viruses before they can spread widely.
Mandate “know your customer” requirements for DNA synthesis equipment orders and novel pathogen checks for DNA synthesis requests.
Develop cheap, accurate, and easy-to-administer tests for infectious diseases.
Finally, we could leverage AI for biodefense, for example by solving problems like DNA synthesis screening. Ideally, AI could improve systemic safety and accelerate defensive capabilities faster than it amplifies offensive risks.
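As a deliberately simplified picture of what synthesis screening involves, one baseline approach is to flag orders that share long exact subsequences with a curated hazard list. The sketch below uses a made-up placeholder for that list; real screening pipelines are far more sophisticated than exact k-mer matching.

```python
# Toy sketch of sequence screening: flag a synthesis order if it shares any
# long exact subsequence (k-mer) with a curated hazard list. The "hazard"
# entry below is a made-up placeholder, and production systems use far more
# robust matching than this.
def kmers(seq: str, k: int = 12) -> set[str]:
    """All length-k substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


HAZARD_DB = ["ATGGCGTACGTTAGCCTAGGATCCAA"]  # placeholder sequence, not a real pathogen
HAZARD_KMERS = set().union(*(kmers(s) for s in HAZARD_DB))


def flag_order(order_seq: str) -> bool:
    """Return True if the ordered sequence shares a k-mer with the hazard list."""
    return bool(kmers(order_seq) & HAZARD_KMERS)


print(flag_order("ATGGCGTACGTTAGCCTAGGATCCAA"))  # True: matches the placeholder entry
print(flag_order("TTTTCCCCGGGGAAAATTTTCCCCGG"))  # False: no shared 12-mers
```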
Overall, today’s chatbots are not clearly a biosecurity threat, but that does not negate the potential for future risks at the intersection of AI and biological weapons. Large language models (LLMs) like ChatGPT could expand access to bioweapons, while biological design tools (BDTs) could make them more deadly and transmissible. Reducing this risk will require both AI-specific measures and broader improvements to our biosecurity and pandemic preparedness.
$250K in Prizes: SafeBench Competition Announcement
The Center for AI Safety is excited to announce SafeBench, a competition to develop benchmarks for empirically assessing AI safety. This project is supported by Schmidt Sciences, with $250,000 in prizes available for the best benchmarks. Submissions are open until February 25th, 2025.
For updates on SafeBench, feel free to sign up on our homepage here.
Links
New AI Systems
Suno AI is a new AI system that can generate full songs. Hundreds of leading musicians signed an open letter calling the use of AI in music an “assault on human creativity.”
OpenAI created Voice Engine, which, given a 15-second audio sample of someone’s voice, can produce realistic imitations of it. The company decided not to release the model, citing safety concerns.
xAI open sourced Grok-1.
Nvidia’s newest AI hardware, the B100, is expected to retail at $30,000 per chip.
OpenAI and Figure release a new robots demo.
US AI Policy
Marc Andreessen and allies plan to spend tens of millions lobbying against tech regulation.
“Effective accelerationists” including Based Beff Jezos launch a new lobbying organization.
The NTIA released their AI Accountability Policy Report, after soliciting more than 1,400 comments on the topic.
A House of Representatives subcommittee held a hearing on “White House Overreach on AI.” Lawmakers cited a letter from the Attorneys General of 20 states criticizing the White House Executive Order on AI’s use of the Defense Production Act.
Other AI Policy
France fines Google $250 million for illegally training AI models on copyrighted data.
AI companies have scraped training data from YouTube, books, and user-generated content on Google Docs and other platforms.
A detailed overview of responsibilities and plans for the EU AI Office.
Chinese and American scientists put forth red lines for AI development.
Opportunities
The Office of Management and Budget issued a request for information about federal procurement policies for AI.
The US Agency for International Development issued a request for information about developing a global agenda for AI research.
The US and UK agree to work together on evaluating frontier AI systems.
NSF announced $16 million in funding opportunities for responsible technological development.
Safety Research Updates
Can large language models improve cybersecurity by finding and fixing vulnerabilities in code? A new paper argues that little progress has been made on AI for cybersecurity.
In adversarial robustness, researchers accelerated a leading adversarial attack algorithm by 38x, and Anthropic documented a new, difficult-to-thwart method called many-shot jailbreaking.
DeepMind describes their evaluations for several risks from frontier AI systems.
The Collective Intelligence Project published A Roadmap to Democratic AI.
Researchers demand a safe harbor to evaluate and red-team proprietary AI systems without the threat of lawsuits or loss of access.
Other
Hackers accessed passwords, databases, and networks from OpenAI, Amazon, and thousands of other companies through vulnerabilities in the Ray computing framework.
Some weights from production language models can be stolen via a new technique.
Private conversations with chatbots may be accessible to hackers via a newly uncovered vulnerability.
Recent articles consider the energy requirements of training increasingly large AI systems.
The small Caribbean nation of Anguilla brought in more than 10% of its GDP last year by selling web addresses that end in .ai.
See also: CAIS website, CAIS twitter, A technical safety research newsletter, An Overview of Catastrophic AI Risks, our new textbook, and our feedback form
Listen to the AI Safety Newsletter for free on Spotify.
Subscribe here to receive future versions.