It’s great to know where your specific weights differ! I agree that each of the arguments you put forth is important. Some specifics:
I agree that the ways the future will differ from the past (especially weird possibilities like digital minds and acausal trade) are a big reason to discount historical evidence. Also, by these lights, some historical evidence (e.g., relations across huge gulfs of understanding and ability, like from humans to insects) seems a lot more important than other evidence (e.g., the fact that animal muscle and fat happen to be an evolutionarily advantageous food source).
I’m not sure I’d agree that historical harms have occurred largely through divergence; there are many historical counterfactuals that could have prevented harm: the nonexistence of humans, an expansion of the moral circle, better cooperation, the discovery of a moral reality, etc. In many cases, a positive leap in any one of these would have prevented the atrocity. What makes divergence more important? I would make the case based on something like “maximum value impact from a one-standard-deviation change” or “number of cases where harm seemed likely but this factor prevented it.” You could write an EA Forum post going into more detail on that. I would be especially excited for you to go through specific historical events and do some reading to estimate the role of (small changes in) each of these forces.
As I mention in the post, reasons to put negative weight on DMPS include the vulnerability of digital minds to intrusion, copying, etc., the likelihood of their instrumental usefulness in various interstellar projects, and the possibility of many nested minds who may be ignored or neglected.
I agree moral trade is an important mechanism of reasoned cooperation.
I’m really glad you put your own numbers in the spreadsheet! That’s super useful. The ease of flipping the estimates from negative to positive and vice versa is one reason I only conclude “not highly positive” or “close to zero” rather than going with the mean estimate from myself and others (which would probably be best described as moderately negative; e.g., the average at an EA meetup where I presented this work was around −10).
I think your analysis is on the right track to getting us better answers to these crucial questions :)
Thanks! Fixed, I think.