ALTER Israel—End-of-2023 Update
This semiannual update is intended to inform the community of what we have been doing and provide a touchpoint for those interested in engaging with us. Since the last update in mid-2023, there have been several developments, but the past several months have been a tumultuous time in Israel, and this has affected our work in a variety of ways, as outlined in a few places below.
People
@Yonatan Cale and Shahar Avin have joined the ALTER board of directors, alongside current board members @Vanessa Kosoy, @Joshua Fox, @EdoArad, @GidiKadosh, Daniel Aronovich, and Ezra Hausdorff.
@Rona Tobolsky has been a policy fellow during the second half of 2023, continuing her work with ALTER. She has been working on a number of things, including salt iodization and biosecurity, with a particular focus on metagenomic sequencing for surveillance. She has also started a master's program in disaster preparedness and management at Tel Aviv University's School of Public Health, and is considering next steps.
Ram Rachum has completed his fellowship with ALTER, during which he focused on multi-agent cooperation and multipolar AI scenarios. He co-ran a conference on disobedience in AI and has written several papers on how agents cooperate; his latest paper was just accepted to AAMAS 2024. He is currently seeking funding or support for his next steps as an affiliate researcher.
A new, independent program based in the US, under Ashgro fiscal sponsorship, has been started to promote mathematical and learning-theoretic alignment research. It is independent of ALTER, but we are supporting its work. The project has hired Gergely Szucs and will continue work on that agenda; see the section below for further updates.
Ongoing and New Projects
Our work on infectious disease policy, on the BWC, and on salt iodization in Israel is at a near-complete standstill, as almost all governmental attention is on the war.
We are working on a paper with Isabel Meusel on applying a model for metagenomic sequencing to Israel. This is part of a broader plan to promote biomonitoring in Israel, and we are hoping to have the paper complete and ready for submission later this month.
The AI safety coworking day at the EA office, which ALTER encouraged, has been successful, as has the reading group. Several members have also applied for external funding to continue this work, and at least one has received it. Unfortunately, these activities are on hold due to current logistical issues and the war.
EA Israel and members of the AI safety group are potentially working on a cybersecurity and AI safety education project. This is still being developed.
David has worked on a few AI policy projects: a paper on safety culture for AI (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4491421), the debate on “pausing” AI, and in-progress work on audit standards boards and other work with the Transformative Futures Institute.
On public engagement, David has facilitated several reading groups for BlueDot in both biosecurity and AI policy, and is working on a new project on public communication about current and future AI benefits and risks. We have also successfully made connections with several individuals in Israel working on biosecurity-relevant projects.
Learning Theoretic / Mathematical AI alignment
(Largely via the affiliate project fiscally sponsored by Ashgro):
Gergely Szucs is working on completing a project in infra-Bayesian physicalism, and tentatively plans to start work on a project on regret bounds in infra-Bayesian reinforcement learning, possibly related to Decision-Estimation Coefficients.
Vanessa Kosoy will be mentoring scholars in MATS, and potentially in ATHENA, focusing on other aspects of the Learning-Theoretic Agenda.
We have recently gathered a list of people who have expressed interest in mathematical AI alignment, receiving well over 100 responses.
We have begun putting people in touch on the basis of that list, and hope to do more in that vein. If there are individuals doing relevant work who have not filled in the form, whether or not they think we already know about them, please encourage them to do so!
Funding
We reached a settlement with FTX Debtors that allowed ALTER to return all unspent funds. (This excludes roughly one-third of the initial grant, which had already been committed or spent before the FTX bankruptcy.)
Including incoming grants, ALTER will have enough cash on hand to fund core operations through the end of the 2024 calendar year, but the allocation of those funds is still being decided, and beyond the learning theory work described below, there is no funding for additional programming or projects. (We have an outstanding grant application which may change this.)
We have been awarded a Survival and Flourishing Fund grant totalling $339,900, consisting of two overlapping grants: $316,900 from Lightspeed Grants, focused on learning-theory research and mathematical alignment, and a marginal $23,000 from SFF. We are currently deciding how to allocate this funding between projects; the SFF allocation was for general expenses and then learning theory, but is structured as marginal funding over the Lightspeed amount, whereas the Lightspeed grant was specific to learning theory work.
As noted, the earlier funding for hiring an additional alignment researcher to work with Vanessa is being managed by a fiscally sponsored project run by Ashgro. That project will be used to fund learning theory work outside of Israel, and we may recommend that Lightcone direct some of the Lightspeed grant to it.