Quick update since April:
We got seed funding.
We formed a board, including some really impressive people in biorisk and AI.
We’re pretty far through hiring a director and other key crew, after 30 interviews and trials.
We have 50 candidate reservists, as well as some horizon-scanners with great track records. (If you’re interested in joining in, sign up here.)
Bluedot and ALLFED have kindly offered to share their monitoring infrastructure too.
See the comments in the job thread for more details about our current structure.
Major thanks to Isaak Freeman, whose Future Forum event netted us half of our key introductions and let us reach outside EA.
Like others, I basically want to ask whether the board also includes people with expertise other than biorisk and AI, such as geopolitics. Since the post was part of a series on COVID (which was never thought to be an existential threat), I had imagined the CRT was also intended to respond to crises outside the AI/biorisk areas.
Yeah, we’re still looking for someone on the geopolitics side. Also, COVID was a biorisk.
Cool!
Yes, but it wasn’t an existential biorisk. And I assume that once you include risks which are catastrophic but not existential, you also get things which aren’t AI or pandemics. That’s what I was trying to say.
We will activate for things besides x-risks. Beyond the direct help we render, this is a way to learn about parts of the world that are difficult to learn about at any other time.
Yeah, we have a whole top-level stream for things besides AI, bio, and nukes. I am a drama queen, so I want to call it “Anomalies”, but it will end up being called “Other”.
Some of this text suggests a vision different from what I expected, and I have questions. What would an alert org for AI look like?
Part of the reason I’m writing is that there was a vision for another org that looks similar but has a different form. This org would respond much more directly to crises like Afghanistan or Ukraine. It would harness sentiment and redirect loose efforts into much more effective, coordinated activity, producing a large counterfactual increase in aid and welfare.
I’m guessing this vision is more along the lines of what most people on the forum are imagining for a rapid response organization.
For this org that mobilizes efforts effectively, the substantive differences are:
The competencies and projects are distinct from past EA competencies and projects (making quick decisions in noisy environments, organizing hundreds of people with feedback loops measured in hours, and drawing on a lot of local competence).
The amount of work and the (fairly) tangible output would build trust and create a place to recruit talent, including very strong candidates who are effective and impressive in different competencies.
This has deeper strategic value in building EA, especially in regions/countries where it isn’t established and where community building efforts have difficulty.
Created and supported by EAs, it would have a lot of real-world knowledge and would provide a very strong response to the perception that EA is esoteric.
A major theme of this org is proactive work: not reacting to emergencies, but preparing plans and resources in advance, when a much smaller amount of resources can be much more impactful, or can even reduce the size of a crisis altogether. Socializing and executing this proactive viewpoint provides a great way to communicate EA ideas.
The reason this org wasn’t written up or executed (separate from time constraints) was that it would demand a lot of attention: it’s easy to get running nominally, but the quality of leadership and decisions is important; the resulting activity and the number of people involved are large and difficult to control and manage; many correct decisions seem unpopular and are difficult to socialize; and it needs to accommodate other viewpoints and pressures, including from very impressive non-EA leaders. This demand for executive attention made it less viable, but it still ranked above most other projects.
Another reason is that creating this org might be harder because some of it is hard to socialize to EAs and takes plenty of focus: it’s somewhat hard to explain, as there aren’t many templates for this kind of org; momentum from some early networking exercise among high-status EAs has less value and is harder to achieve; and the initial phases are delicate, since tentative investment won’t attract the kind of talent needed to drive the organization.
Now, partly because of the same challenges above, I think any vision of a response/proactive/coordination project needs a lot of focus.
So a project that tags the top EA interests of “AI” and “biorisk” is valuable (or extremely valuable by some worldviews), but it doesn’t seem like it would take the same form as what was described above, e.g.:
It seems like you’re advising and directing national decisions. It seems like a bit of a “pop-up” think tank? This is different from the vision above.
Doing this alert org for AI seems hard and exploratory.
Both of these traits result in a very different org from what was described above.
Do you have any comments?
For example, does the org described above make any sense?
Do you think there is room for this org?
(For natural reasons, it’s unclear what form the new ALERT org will take.) But was any of the text I wrote a mischaracterization of your new org?
Yeah we’re not planning on doing humanitarian work or moving much physical plant around. Highly recommend ALLFED, SHELTER, and help.ngo for that though.
Your comment isn’t a reply, and it reduced clarity.
This is bad: it’s already hard to see the nature of the org suggested in my parent comment, and this further muddies it. Answering your comment means going through the orgs one by one, researching each and knocking it down, which is laborious and seems unreasonable. Finally, it seems like your org is taking up the space this org would occupy.
ALLFED has a specific mission that doesn’t resemble the org in the parent comment. SHELTER isn’t an EA org; it provides accommodation for people in the UK? It’s doubtful that help.ngo or its class of orgs occupy the niche: looking at the COVID-19 response gives some sense of how a clueful org would be valuable even in well-resourced situations.
To be concrete about what the parent org would do: we could imagine it maintaining a list of crises and the contingent problems in each of them, building up institutional knowledge in those regions, and preparing a range of strategies that coordinate local and outside resources. I would be amazed if this niche is even partially well served, or if these things were done well in past crises. Because it uses existing interest and resources, and EA money might just pay for admin, the cost-effectiveness could be very high. This sophistication would be impressive to the public and is healthy for the EA ecosystem. It would also be a “Task Y” and an on-ramp for talent to EA, talent that can be impressive and non-diluting.
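To make the proposal concrete, here’s a toy sketch of the kind of registry such an org might maintain; every field name and example value is hypothetical, purely illustrative:

```python
# A toy sketch of a crisis registry; all names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Strategy:
    name: str
    local_resources: list[str]    # what's already on the ground
    outside_resources: list[str]  # what outside funders/volunteers could add

@dataclass
class Crisis:
    region: str
    contingent_problems: list[str]  # problems that may emerge as the crisis unfolds
    institutional_knowledge: list[str] = field(default_factory=list)
    prepared_strategies: list[Strategy] = field(default_factory=list)

# One hypothetical entry: the point is having plans ready before activation.
registry = [
    Crisis(
        region="(hypothetical region)",
        contingent_problems=["refugee flows", "supply-chain disruption"],
        institutional_knowledge=["local partner contacts", "notes from past responses"],
        prepared_strategies=[
            Strategy("evacuation support",
                     local_resources=["volunteer drivers"],
                     outside_resources=["flexible emergency funding"]),
        ],
    ),
]
```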
It takes great literacy and knowledge to make these orgs work. Instead of deploying money or networking with EAs, this org looks outward, bringing resources to EA and making EA more impressive.
Earlier this year, I didn’t write up or describe the org I mentioned, mostly because writing is costly and climbing the hills/winning the games involved uses effort that is limited and fungible, but also because your post existed and it would be great if something came out of it.
I asked what an AI safety alert org would look like. As we both know, the answer is that no one has a good idea what it would do; basically, it seems to ride close to AI policy orgs, of which some exist. I don’t think it’s reasonable to poke holes because of this, or because it’s exploratory, but it’s pretty clear this isn’t in the space described, which is why I commented.
Thank you for this work. I recommend updating the post.
Do you have plans for nuclear war, given that’s the most likely GCR right now?
We’re not really adding to the existing group chat / Samotsvety / Swift Centre infra at present, because we’re still spinning up.
My impression is that Great Power stuff is unusually hard to influence from the outside with mere research and data. We could maybe help with individual behaviour recommendations (turning the smooth forecast distributions of others into expected values and go / no-go advice).
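For concreteness, a minimal sketch of that last translation step, assuming a forecast arrives as a discrete outcome distribution; all names, probabilities, and utilities below are hypothetical illustrations, not our actual method:

```python
# Minimal sketch: collapse an external forecast distribution into an expected
# value and a go / no-go recommendation. All inputs are hypothetical.
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    probability: float  # taken from an external forecaster's distribution
    utility: float      # decision-relevant value of acting if this outcome occurs

def expected_utility(outcomes: list[Outcome]) -> float:
    """Probability-weighted sum over the forecast outcomes."""
    return sum(o.probability * o.utility for o in outcomes)

def go_no_go(outcomes: list[Outcome], threshold: float = 0.0) -> str:
    """Recommend acting when the expected utility of acting beats the threshold."""
    return "GO" if expected_utility(outcomes) > threshold else "NO-GO"

# Hypothetical individual-behaviour question: relocate, given escalation forecasts?
forecast = [
    Outcome("major escalation", probability=0.05, utility=100.0),  # relocating pays off hugely
    Outcome("status quo", probability=0.95, utility=-2.0),         # relocating is a small net cost
]
print(expected_utility(forecast), go_no_go(forecast))  # 3.1 GO
```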
Anyone thinking about this?
A Kessler syndrome of sufficient severity prevents spacecraft from leaving Earth for centuries to millennia, depending on its duration.
A Kessler cascade will eventually result in such a syndrome; it’s only a question of time, and the timescale can be estimated from the slope of the graph of debris growth from collisions (a rough sketch of such an estimate appears at the end of this comment). This slope is easy to increase and hard to decrease.
Starlink has a lot of small satellites in orbit.
Starlink is carrying communications for a party to a terrestrial conflict; these may include military communications.
A different party to the conflict, wishing to deny its enemy the use of the constellation for military communications, may take actions to degrade or destroy the constellation in the course of the war.
Are there classes of action that would degrade Starlink enough that it is no longer suitable as a communications platform for that party, and which would lead to a near-term Kessler syndrome?
Optimistic: no, there’s no risk to manned spaceflight from any action that could be taken against the Starlink constellation, including kinetic destruction of its spacecraft in their current locations.
Pessimistic: any damage to the constellation or its control systems results in an immediate Kessler syndrome, which prevents manned spacecraft from ascending to the high (or escape) orbits required to colonize the solar system.
SpaceX engineers should be able to definitively answer this question.
In the most pessimistic case, the Kessler syndrome will outlive terrestrial energy resources and/or climate reserves, so the human race will end starving, buried in our own waste.
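For concreteness, a minimal sketch of the slope estimate from the second premise above, assuming roughly exponential debris growth; the counts below are made-up placeholders, not real catalog data:

```python
# Fit an exponential growth rate to (hypothetical) tracked-debris counts and
# report the implied doubling time. All numbers are illustrative placeholders.
import math

years = [2014, 2016, 2018, 2020, 2022]        # hypothetical observation years
counts = [17000, 18500, 20000, 23000, 26000]  # hypothetical debris counts

# Least-squares fit of log(count) = a + r * year gives the growth rate r.
n = len(years)
logs = [math.log(c) for c in counts]
year_mean = sum(years) / n
log_mean = sum(logs) / n
r = (sum((y - year_mean) * (l - log_mean) for y, l in zip(years, logs))
     / sum((y - year_mean) ** 2 for y in years))

doubling_time = math.log(2) / r  # years until the debris count doubles at this slope
print(f"growth rate ~{r:.3f}/yr, doubling time ~{doubling_time:.0f} yr")
```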
Yeah, it could be terrible. As such risks go, it’s relatively* well covered by the military-astronomical complex, though events continue to reveal the inadequacy of our monitoring. It’s on our Other list.
* This is not saying much: on the absolute scale of “known about” + “theoretical and technological preparedness” + “predictability” + “degree of financial and political support” it’s still firmly mediocre.
Russian arms control officials have now made public statements suggesting that commercial space infrastructure that is used to support the conflict may be a legitimate target.
EA did the analysis on alienating billionaires, so nobody is going to mock a US billionaire who wants to colonize space but who deployed a commercial sat swarm that is now being talked about as a valid military target.
I’m guessing nobody funded by EA is putting the work in from an engineering standpoint to see if there’s an existential risk there.
No new physics is required, just engineering analysis. An engineer at a relevant firm could answer the questions: what breaks their system? How much debris does that course of action generate? Is their constellation equipped to avoid cascading failure due to debris? What would be the worst-case impact on launch windows for high orbits?
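For scale, the standard kinetic-gas back-of-envelope such an engineer would start from treats the collision rate as debris density times relative velocity times cross-section; every input below is an assumed placeholder, not SpaceX data:

```python
# Kinetic-gas approximation: expected collision rate = n * v_rel * sigma.
# All three inputs are assumed placeholders, for illustration only.

debris_density = 1e-8      # objects per km^3 in the orbital shell (assumed)
relative_velocity = 10.0   # km/s, a typical crossing speed in LEO
cross_section = 25e-6      # km^2 (~25 m^2 effective satellite area, assumed)

per_second = debris_density * relative_velocity * cross_section
per_year = per_second * 3600 * 24 * 365
print(f"expected collisions per satellite-year: {per_year:.1e}")  # ~7.9e-05
```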
I guess it has been done already and everything is totally fine; let’s focus on other stuff, no need to call this an emergency.
Go run it, I’d read it.
Someone at SpaceX is taking meaningful action to mitigate this, thankfully. https://www.reuters.com/business/aerospace-defense/spacex-curbed-ukraines-use-starlink-internet-drones-company-president-2023-02-09/
Maybe seeing the Russian sat throw debris is what it took to ask the ‘so...about our constellation’ question: https://www.businessinsider.com/russian-satellite-breaks-up-orbit-space-debris-could-last-century-2023-2?utm_source=reddit.com
Thanks for the downvotes everyone!
Right, just like there’s no cause for concern about the human health impacts of living on Mars for a while. I should just wait for my space ticket to go join the colonies, assuming my ship makes it through the building wall of space debris orbiting the planet.
FYI: I think I signed up as a reservist but I’m not totally sure. I’ve not heard anything from you by email, so I just signed up again.
Got you! Pardon the delay; I’m leaving confirmations to the director we eventually hire.
Nice update. Given that you’ve just been curated, suggest you edit the OP to add this update or link to this comment.
Been trying! The editor doesn’t load for some reason.
Maybe a client-side content blocker on your end? Works fine for me today.