Partly because of the same challenges above, I think any vision of a response/proactive/coordination project needs a lot of focus.
So a project that targets the top EA interests of “AI” and “biorisk” is valuable (or extremely valuable by some worldviews), but doesn’t seem like it would take the same form as what was described above, e.g.:
It seems like you’d be advising and directing national decisions, a bit like a “pop-up” think tank. This is different from the vision above.
Doing this alert org for AI seems hard and exploratory.
Both of these traits result in a very different org from what was described above.
Do you have any comments?
For example, does the org described above make any sense?
Do you think there is room for this org?
(For natural reasons, it’s unclear what form the new ALERT org will take.) But was any of the text I wrote a mischaracterization of your new org?