Why so few recent, published net assessments of x-risks?

AI entirely aside, has anyone seen any recent, published net assessments of existential/catastrophic(ish) risks? Why are there so few?

Specifically for this forum, what would very recent assessments say about new opportunities for moral actors?

A lot of risk assessment and mitigation work has historically been done inside governments and is not widely shared (for a few decades at least). X-risk is no exception: geopolitical constraints have shifted, and actions that once seemed politically impossible may now be on the table. Some climate stabilisation plans might fit into a paragraph if acts of war or genocide are no longer dealbreakers.

In a Trump 2 world (and amid other illiberal shifts globally), what are organisations or individuals now more able and incentivised to do, in their own interests and for their own reasons? Are there good new solutions opening up, in addition to new bad ones?

Is anyone (else) thinking about this (without the security clearances that keep such work secret)?

(I’m posting this now as a bunch of people will shortly be nearby for EAG London)