Some of this text suggests a vision different from what I expected, and I have questions. What would an alert org for AI look like?
Part of the reason I’m writing is that there was a vision for another org that looks similar but has a different form. This org would respond much more directly to crises like Afghanistan or Ukraine. It would harness public sentiment and redirect loose efforts into much more effective, coordinated activity, producing a large counterfactual increase in aid and welfare.
I’m guessing this vision is probably more along the lines of what most people on the forum are thinking of for a rapid response organization.
For this org that mobilizes efforts effectively, the substantive differences are:
The competencies and projects are distinct from past EA competencies and projects (making quick decisions in noisy environments, organizing hundreds of people with feedback loops measured in hours, and drawing on a lot of local competence).
The amount of work and the (fairly) tangible output would build trust and create a place to recruit talent, including very strong candidates who are effective and impressive across different competencies.
This has deeper strategic value in building EA, especially in regions/countries where it isn’t established and where community building efforts have difficulty.
Created and supported by EAs, it would have a lot of real-world knowledge and would provide a very strong response to the perception that EA is esoteric.
A major theme of this org is proactive work: not reacting to emergencies, but preparing plans and resources in advance, when a much smaller amount of resources can be much more impactful, or can even reduce the size of a crisis altogether. Socializing and executing this proactive viewpoint provides a great way to communicate EA ideas.
The reason this org wasn’t written up or executed (separate from time constraints) was that it would demand a lot of attention: it’s easy to get running nominally, but the quality of leadership and decisions is important; the resulting activity and number of people involved is large and difficult to control and manage; many correct decisions would seem unpopular and be difficult to socialize; and it would need to accommodate other viewpoints and pressures, including from very impressive non-EA leaders. This demand for executive attention made it less viable, but it still ranked above most other projects.
Another reason is that creating this org might be harder, as some of it is harder to socialize to EAs and would take plenty of focus: it’s somewhat hard to explain, since there aren’t many templates for this kind of org; momentum from some early networking exercise of high-status EAs has less value here and is harder to achieve; and the initial phases are delicate, so tentative investment won’t attract the kind of talent needed to drive the organization.
Now, partly because of the same challenges above, I think any vision of a response/proactive/coordination project needs a lot of focus.
So a project that tags the top EA interests of “AI” and “biorisk” is valuable (or extremely valuable under some worldviews), but it doesn’t seem like it would have the same form as what was described above. For example:
It seems like you’re advising and directing national decisions, a bit like a “pop-up” think tank? This is different from the vision above.
Doing this alert org for AI seems hard and exploratory.
Both of these traits result in a very different org than what was described above.
Do you have any comments?
For example, does the org described above make any sense?
Do you think there is room for this org?
(For natural reasons, it’s unclear what form the new ALERT org will take.) But was any of the text I wrote a mischaracterization of your new org?
Yeah we’re not planning on doing humanitarian work or moving much physical plant around. Highly recommend ALLFED, SHELTER, and help.ngo for that though.
Your comment isn’t really a reply, and it reduces clarity. This is bad: it’s already hard to see the nature of the org suggested in my parent comment, and this further muddies it. Answering your comment by going through the orgs one by one is laborious and requires researching individual orgs and knocking them down, which seems unreasonable. Finally, it seems like your org is taking up the space for this org.
ALLFED has a specific mission that doesn’t resemble the org in the parent comment. SHELTER isn’t an EA org; it provides accommodation for people in the UK? It’s doubtful that help.ngo or its class of orgs occupies the niche: looking at the COVID-19 response gives some sense of how a clueful org would be valuable even in well-resourced situations.
To be concrete about what the parent org would do, we could imagine it maintaining a list of crises and the contingent problems in each, building up institutional knowledge in those regions, and preparing a range of strategies that coordinate local and outside resources. It would be amazing if this niche were even partially well served, or if these things were done well in past crises. Because it draws on existing interest and resources, with EA money possibly just paying for admin, the cost-effectiveness could be very high. This sophistication would be impressive to the public and healthy for the EA ecosystem. It would also be a “Task-Y”, on-ramping talent into EA who can be impressive and non-diluting.
It takes great literacy and knowledge to make these orgs work. Instead of deploying money or networking with EAs, this org looks outward: it brings resources to EA and makes EA more impressive.
Earlier this year, I didn’t write up or describe the org I mentioned, mostly because writing is costly and climbing the hills/winning the games involved uses effort that is limited and fungible, but also because your post existed and it would be great if something came out of it.
I asked what an AI safety alert org would look like. As we both know, the answer is that no one has a good idea what it would do; basically, it seems to ride close to AI policy orgs, some of which already exist. I don’t think it’s reasonable to poke holes because of this or because it’s exploratory, but it’s pretty clear this isn’t in the space described above, which is why I commented.