These goals are not good goals.
Encourage people to start thinking about the future in more positive terms.
It is actively harmful for people to start thinking about the future in more positive terms if those terms are misleading and unrealistic. The contest ground rules frame “positive terms” as meaning familiar, not just good in the abstract: the imagined futures cannot be good but scary, as any true good outcome must be. See Eutopia is Scary:
We, in our time, think our life has improved in the last two or three hundred years. Ben Franklin is probably smart and forward-looking enough to agree that life has improved. But if you don’t think Ben Franklin would be amazed, disgusted, and frightened, then I think you far overestimate the “normality” of your own time.
Receive inspiration for our real-world policy efforts and future projects to run / fund.
It is actively harmful to take fictional evidence as inspiration for which projects are worth pursuing. This would be true even if the fiction were not constrained to be unrealistic and unattainable; this contest is constrained in exactly that way, which makes it much worse.
Identify potential collaborators from outside of our existing network.
Again, a search run on input data that is specifically biased to be bad is going to be harmful, not helpful.
Update our messaging strategy.
Your explicit goal here is to look for ‘positive’, meaning ‘non-scary’, futures to try to communicate. This is lying: no such future is plausible, and it’s unclear whether any is even possible in theory. You say
not enough effort goes into thinking about what a good future with (e.g.) artificial general intelligence could look like
but this is not true. Lots of effort goes into thinking about it. You just don’t like the results, because they’re either low-quality (failing in all the old ways utopias fail) or high-quality and therefore appropriately terrifying.
The best result I can picture emerging from this contest is for the people running it to realize the utter futility of the approach they chose and change tack entirely. I’m unsure whether I hope that comes with some resignations: this was a really, spectacularly terrible idea, and that would tend to imply some drastic response, but on the other hand I’d hope FLI’s team is capable of learning from its mistakes better than most.
A positive vision which is false is a lie. No vision meeting the contest constraints is achievable, or even desirable as a post-AGI target. There might be some good fiction that comes out of this, but it will be as unrealistic as Vinge’s Zones of Thought setting. Using it in messaging would be at best dishonest, and, worse, probably self-deceptive.