On your future directions / tentative reflections (with apologies that I haven’t looked into your model, which is probably cool and valuable!):
To the extent that we think this is relevant for things like lock-in and x-risk prioritisation, we also need to think that current trends are predictive of future trends. But it's not at all clear that they are once you take into account the possibility of explosive growth, a la https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/. Moreover, worlds with explosive growth have far more moral patients, so if their probability is non-negligible they tend to dominate moral considerations.
Once we focus on explosive growth scenarios as the most important, I find considerations like these much more persuasive: https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive
I’ve written up fairly extended reflections on why we shouldn’t give much weight to the fact that the history and present of our world is an utter moral hellscape, which I’m happy to share privately if these questions are important to you.
(All that said, I do think lock-in is undervalued in longtermism and I’m excited to see more work on it, and I do think the path to x-risk prioritisation is much more complicated than many EAs assume, and that considerations like the ones you point out are exactly why.)
I haven’t tried this, but I’m excited about the idea! Effective Altruism seems unusually difficult to communicate faithfully, and creating a GPT that can be probed on various details and can correct misconceptions seems like a great way to increase communication fidelity.