The term “global catastrophic risk” has been defined in multiple, mutually inconsistent ways.[1] What will the Bay Area EAG focus on, specifically? And is there a specific reason this term was chosen over a less ambiguous one?
That comment doesn’t even include all the definitions of “global catastrophic risk” that I’ve seen. According to Wikipedia, “[m]ost global catastrophic risks would not be so intense as to kill the majority of life on earth, but even if one did, the ecosystem and humanity would eventually recover (in contrast to existential risks),” which directly contradicts many other definitions people have given, including Open Phil’s.
Thanks for the comment! I expect the main cause areas represented at the Bay Area event to be AI safety, biorisk, and nuclear security. I also expect there’ll be some meta-related content, including things like community building, improving decision making, and careers in policy.
We weren’t sure exactly what to call this event and were torn between this name and EA Global (X-Risk). We decided on EA Global (GCRs) because it was the majority preference of the advisors we polled, and because we felt it would more fully represent the range of ideas we expect to see at the event, since nuclear security and some risks from advanced AI or synthetic biology may not be considered existential in nature.