My second suggestion is to explicitly connect the present to the future. Compare these two examples:
Example 1:
In the future, your doctor could be an AI.
Example 2:
In the future, your doctor could be an AI. Here’s how it could happen: …
I think the main issue with example 1 is that it lacks detail. One solution is to be as concrete and specific as possible when describing possible futures, and to note when you’re uncertain.
What I would find helpful is a list of potential career pathways in the AI safety space, categorised by the level of technical skills you’ll need (or not) to pursue them.
I’m not sure such a list is currently possible to make, because there are very few established career paths in AI safety (e.g. “people have been doing jobs involving X for the past 10 years, and here’s the trajectory they usually follow”), especially outside of technical research and engineering careers. I did make a small list of roles at maisi.club/help, but again, it’s hard to find clear examples of what these career paths actually look like.