[Feedback Request] Hypertext Fiction Piece on Existential Hope

For a class I am taking, I made a piece of hypertext fiction tentatively titled Will We Flourish? A Choose-Our-Shared-Future. If you have time, I would love it if you could read it and give feedback!

Will We Flourish? contains three sections:

  1. An introductory narrative that shows how humanity can overcome existential risk

  2. A decision tree where readers are shown different value framings for the case for working on existential risk

  3. Select Tegmark utopias that follow from each value

Additional context here: https://github.com/starmz123/Will-We-Flourish/tree/main

It should take no more than ~15 minutes to read all the branches.

While it is effectively finalized (so that it counts as completed for my class), I am willing to make minor edits before June 4th. More importantly, I hope to learn from this attempt and perhaps build on it in the future, so feedback is greatly appreciated; I will be open to making substantial revisions after June 4th!

Goals

  1. Grow the body of work that inspires Existential Hope

  2. Catalyze conversation around values that may be relevant in thinking about existential risk (very much inspired by public deliberation and somewhat by The Long Reflection)

  3. ‘Study’ the efficacy of different message frames in communicating about existential risk (ranging from cause areas directly, like climate change, to the broader case for working on longtermist issues)

Feedback Priorities

Do give me any and all feedback you have, but I am especially curious about the following.

Content

  • I really fear miscommunicating EA ideas, so please let me know if anything seems off about how I frame existential risk and related topics.

  • Did my narratives align with the values I assigned to them?

  • Is there something missing that you think I should add?

Values

  • Which narratives did you find most/​least persuasive/​appealing? (Feel free to divide this in terms of the case for [working on] existential risk and the case for steering humanity towards the envisioned scenario)

  • What values should I add?

Technical

  • Is something broken?

Direct link to the Google form for feedback: https://docs.google.com/forms/d/e/1FAIpQLSeIRLrE7kMcuW24KesVKLIbGRjjiPfiEEQoDaNJMWrVe3qByw/viewform

Finally, since I drew so much inspiration from Max Tegmark’s work, please consider giving your feedback on Tegmark’s Life 3.0 AI aftermath scenarios to the Future of Life Institute: https://www.surveymonkey.com/r/QMT9XXG

You might also fill out their survey on what gives you existential hope: https://www.existentialhope.com/contribute
