My understanding of the hinge of history argument is that the current time has more leverage than either the past or future. Even if that’s true, it doesn’t necessarily mean that it’s any more obvious what needs to be done to influence the future.
If I believed that e.g. AI is obviously the most important lever right now, and thought I knew which direction to push that lever, I would ask myself "using the same reasoning, which levers would I have been trying to push, and where, in 1920?" As far as I can tell this is pretty agnostic about how easy it is to push these levers around, just about which ones you would want to be pushing.
My impression is that people like you are pretty rare, but all of this is based on subjective impressions, and I could be very wrong.
Have you met a lot of other people who came to AI safety from some background other than the Yudkowsky/Superintelligence cluster?