This is bad, since it’s already hard to see the nature of the org suggested in my parent comment, and this further muddies it. Answering your comment by going through the orgs you list is laborious and requires researching each one and knocking it down, which seems unreasonable. Finally, it seems like your org is taking up the space this org would occupy.
Yeah we’re not planning on doing humanitarian work or moving much physical plant around. Highly recommend ALLFED, SHELTER, and help.ngo for that though.
ALLFED has a specific mission that doesn’t resemble the org in the parent comment. SHELTER isn’t an EA org; it seems to provide accommodation for people in the UK. It’s doubtful that help.ngo or its class of orgs occupies the niche: looking at the COVID-19 response gives some sense of how a clueful org would be valuable even in well-resourced situations.
To be concrete, for what the parent org would do, we could imagine maintaining a list of crises and the contingent problems in each, building up institutional knowledge in those regions, and preparing a range of strategies that coordinate local and outside resources. It would be surprising if this niche were even partially well served, or if these things had been done well in past crises. Because it uses existing interest and resources, and EA money might only need to pay for admin, the cost-effectiveness could be very high. This sophistication would be impressive to the public and healthy for the EA ecosystem. It would also be a “Task-Y” and an on-ramp for talent into EA, bringing in people who can be impressive and non-diluting.
It takes great literacy and knowledge to make these orgs work. Instead of deploying money or networking with EAs, such an org looks outward, bringing resources to EA and making EA more impressive.
Earlier this year, I didn’t write up or describe the org I mentioned, mostly because writing is costly and climbing the hills/winning the games involved uses effort that is limited and fungible, but also because your post existed and it would be great if something came out of it.
I asked what an AI safety alert org would look like. As we both know, the answer is that no one has a good idea of what it would do, and it basically seems to sit close to AI policy orgs, some of which already exist. I don’t think it’s reasonable to poke holes because of this or the fact that it’s exploratory, but it’s pretty clear this isn’t in the space described, which is why I commented.
Your comment isn’t really a reply, and it reduces clarity.