I am the director of Tlön, an organization that translates content related to effective altruism, existential risk, and global priorities research into multiple languages.
After living nomadically for many years, I recently moved back to my native Buenos Aires. Feel free to get in touch if you are visiting BA and would like to grab a coffee or need a place to stay.
Every post, comment, or wiki edit I have authored is hereby licensed under a Creative Commons Attribution 4.0 International License.
Thanks for the clarification. I didn’t mean to be pedantic: I think these discussions are often unclear about the relevant time horizon. Even Bostrom admits (somewhere) that his earlier writing about existential risk left the timeframe unspecified (vaguely talking about “premature” extinction).
On the substantive question, I’m interested in learning more about your reasoning. To me, it seems much more likely that Earth-originating intelligence will go extinct this century than, say, in the 8973rd century AD (conditional on survival up to that century). This is because it seems plausible that humanity (or its descendants) will soon develop technology with enough destructive potential to actually kill all intelligence. The question then becomes whether they will also successfully develop the technology to protect intelligence from being so destroyed. But I don’t think there are decisive arguments for expecting the offense-defense balance to favor either side (the strongest argument for pessimism, in my view, is stated in the first paragraph of this book review). Do you deny that this technology will be developed “over time horizons that are brief by cosmological standards”? Or are you confident that our capacity to destroy will be outpaced by our capacity to prevent destruction?