My list of effective altruism ideas that seem to be underexplored

Below is a list of ideas which, in my view, could affect the wellbeing of all people, but which are not part of the EA research known to me. Because I found these topics important but underexplored, I naturally tried to look into them as deeply as I could, so many of the ideas below include links to my own work.

  1. Use the Moon as a storage site for data about humanity. This data could be used by the next civilization on Earth and could help it avoid global catastrophes, or even help it resurrect humans.

  2. Explore the dangers of passive SETI. We could download a dangerous alien AI. See also a recent post by Matthew Barnett.

  3. Study of UAP and their relation to our future prospects and global risks.

  4. Plastination as an alternative to cryonics. Some forms of chemical preservation are much cheaper than cryonics and do not require maintenance.

  5. Prove that death is bad (from the point of view of preference utilitarianism), and thus that we need to fight aging, strive for immortality, and research ways to resurrect the dead (unpublished working draft).

  6. Research the topic of so-called “quantum immortality”. Will it cause eternal suffering for anyone, or could it be used to increase one’s chances of immortality?

  7. Explore ways to resurrect the dead.

  8. New approaches to digital immortality and life-logging, which is the cheapest form of immortality available to everyone. Explore active self-description as an alternative to life-logging.

  9. Explore how to “cure” past suffering. Past suffering is bad. If we had a time machine, it could be used to save past minds from suffering. But we could also save them by creating indexical uncertainty about their location, which would work similarly to a time machine.

  10. Global chemical contamination as an x-risk. Seems to be underexplored.

  11. Anthropic effects on the expected probability of runaway global warming: observer selection means our world may be more fragile than the historical record suggests, and thus climate catastrophe is more probable. Unpublished draft; a toy calculation is sketched after the list.

  12. Plan B in AI safety. Let’s speak seriously about AI boxing and the best ways to do it.

  13. Dig deeper into acausal deals with, and messages to, any future AI. The utility of killing humans is small for an advanced superintelligent AI, so attaching even a small value to our existence could tip the balance (see the inequality after the list).

  14. How will a future nuclear war differ from 20th-century nuclear war scenarios?

  15. Explore and create refuges for surviving a global catastrophe, e.g. on an island or in a submarine. Create a general overview of survival options: surviving in caves, surviving a moist greenhouse (unpublished draft).

  16. How to survive the end of the universe. We may have to make important choices before we start space colonization.

  17. Simulation: experimental and theoretical research. Explore simulation termination risks. Explore the types of evidence that we are in a simulation, and analyze the topic of so-called “glitches in the matrix”: are they evidence that we are in a simulation?

  18. Psychology of human values: do they actually exist as a stable set of preferences, and what does psychology tell us about that?

  19. Doomsday argument: what if it is true after all? What can be done to escape its prediction? (A minimal version of the calculation is sketched after the list.)

  20. Explore the risks of wireheading as a possible cause of civilizational decline.
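
On item 11, here is a toy illustration of the observer-selection reasoning; all numbers are my own illustrative assumptions, not estimates from any of the drafts mentioned above. A naive Bayesian treats ten catastrophe-free centuries as strong evidence that the world is not fragile; under a strong observer-selection assumption (we could only be inspecting the record at all if no observer-killing catastrophe had occurred), the same record is uninformative and the fragile-world hypothesis keeps its prior weight.

```python
# Toy comparison: naive Bayesian update vs. observer-selected ("anthropic shadow") update.
# All numbers are illustrative assumptions.

p_safe, p_fragile = 0.01, 0.20   # per-century probability of an observer-killing catastrophe
prior_fragile = 0.5              # prior credence that our world is the fragile one
centuries = 10                   # length of the catastrophe-free record we observe

# Naive update: treat "no catastrophe for 10 centuries" as ordinary evidence.
like_safe = (1 - p_safe) ** centuries
like_fragile = (1 - p_fragile) ** centuries
naive_posterior = (prior_fragile * like_fragile) / (
    prior_fragile * like_fragile + (1 - prior_fragile) * like_safe
)

# Observer-selected update: conditional on anyone being alive to inspect the record,
# the record is catastrophe-free under both hypotheses, so it carries no information.
shadow_posterior = prior_fragile

print(f"Naive posterior that the world is fragile:    {naive_posterior:.2f}")   # ~0.11
print(f"Shadowed posterior that the world is fragile: {shadow_posterior:.2f}")  # 0.50
```

The gap between the two numbers is the sense in which the world may be “more fragile than we think”.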
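
On item 13, the core argument can be written as one inequality; the symbols are mine and purely illustrative. Let $c$ be the (small) utility an advanced superintelligence would gain by eliminating humanity, $V$ the value it would receive for preserving us if some message, simulation test, or acausal deal is in force, and $q$ its credence that such a deal is in force. Preservation is favored whenever

\[ q\,V > c, \]

so even a modest $q$ can tip the balance, precisely because $c$ is assumed to be small.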
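
On item 19, a minimal sketch of the prediction in question, in the Gott/Carter–Leslie form and under the strong assumption that one’s birth rank $n$ is uniformly distributed among all $N$ humans who will ever be born. Using the often-cited figure of roughly $10^{11}$ humans born so far,

\[ P\!\left(\frac{n}{N} \ge 0.05\right) = 0.95 \quad\Rightarrow\quad N \le 20\,n \approx 2\times 10^{12} \text{ with 95\% confidence.} \]

Escaping the prediction then means attacking one of its assumptions, for example the choice of reference class or the uniform prior over birth rank.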