Milan: I’ve categorized the post as “personal blog” for now. Can you say any more about how this relates to EA, or how readers might be able to take action if they want to find a way to help?
I thought “taking tail risks seriously” was kinda an EA thing...? In particular, we all agree that there probably won’t be a coup or civil war in the USA in early 2021, but is it 1% likely? 0.001% likely? I won’t try to guess, but it sure feels higher after I read that link (including the Vox interview) … and plausibly high enough to warrant serious thought and contingency planning.
At least, that’s what I got out of it. I gave it a bit of thought and decided that I’m not in a position that I can or should do anything about it, but I imagine that some readers might have an angle of attack, especially given that it’s still 6 months out.
From the part I excerpted:
“You should read it right now (or at least read this Vox interview), if you want to think through the contours of a civilizational Singularity that seems at least as plausible to me as the AI Singularity, but whose fixed date of November 3, 2020 we’re now hurtling toward.”
The EA implications of the 2020 US presidential election seem obvious?
See also Dustin & Cari’s $20m grant to the 2016 Clinton campaign.
Thanks for sharing the last link, which I think provides useful context (that Open Philanthropy’s funder has a history of donating to partisan political campaigns).
The very last line of the Vox interview is the only one I saw which suggests concrete action a person could take to reduce the chances of an electoral crisis (I assume that trying to get relevant laws changed within five months would be really hard):
“The only real way to avoid this is to make sure we don’t enter into this scenario, and the best way to do that is to ensure that he loses decisively in November. That’s the best guarantee. That’s the best way that we can secure the future of a healthy constitutional democracy.”
Given these points, though, the upshot of this post is effectively an argument that supporting Biden’s campaign should be thought of as an EA cause area, because even though it’s very hard to tell what impact political donations have, an unclear election result runs the risk of triggering a civil war, which is bad enough that even hard-to-quantify forms of risk reduction are very valuable here? With some bonus value because Biden donations mean a candidate with mostly better policy ideas is more likely to win (though the article doesn’t really go into policy differences)?
Does that seem like the right takeaway to you? Did you mean to make a different point about the value of changing electoral laws?
(I realize that the above is me making a lot of assumptions, but that’s another reason why it’s helpful to summarize what you found valuable/actionable in a given crosspost; it saves readers from having to work through all of the implications themselves.)
“Thanks for sharing the last link, which I think provides useful context (that Open Philanthropy’s funder has a history of donating to partisan political campaigns).”
Why is this context useful? It feels like the relevance of this post should not be particularly tied to Dustin and Cari’s donation choices.
“the upshot of this post is effectively an argument that supporting Biden’s campaign should be thought of as an EA cause area”
Is “X should be thought of as an EA cause area” distinct from “X would be good”? More generally, I’d like the forum to be a place where we can share important ideas without needing to include calls to action.
On the other hand, I also endorse holding political posts to a more stringent standard, so that we don’t all get sucked in.
I should also mention that a post like this doesn’t need to have expected-value calculations attached, or anything at that level of detail; it’s just good to have a couple of sentences along the lines of “here’s why I posted this, and why I think it demonstrates a chance to make an effective donation // take other effective actions,” even if no math is involved.
(This kind of explanation seems more important the further removed a post is from “standard” EA content. When I crossposted Open Phil’s 2019 year-in-review post, I didn’t include a summary, because the material seemed to have very clear relevance for people who want to keep up with the EA community.)
I usually do link posts to improve the community’s situational awareness.
This is upstream of advocating for specific actions, though it’s definitely part of that causal chain.
I liked that post when it came out, but I had forgotten about it in the ensuing year-plus. Maybe you could link to this post when you make situational-awareness crossposts?