Thanks for sharing your experiences, too! As for transformers, yeah it seems pretty plausible that you could specialize in a bunch of traditional Deep RL methods and qualify as a good research engineer (e.g. very employable). That’s what several professionals seem to have done, e.g. Daniel Ziegler.
But maybe that’s changing, and it’s worth starting to learn them now. It seems like most of the new RL papers incorporate some kind of transformer encoder in the loop, if not being a straight-up Decision Transformer.
Interesting. Do you have any good examples?
Sure!
A Generalist Agent (deepmind.com)
SayCan: Grounding Language in Robotic Affordances (say-can.github.io)
From motor control to embodied intelligence (deepmind.com)
Transformers are Sample-Efficient World Models (arxiv.org)
Decision Transformer: Reinforcement Learning via Sequence Modeling (arxiv.org)
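For anyone unfamiliar with the Decision Transformer framing mentioned above, here's a minimal sketch (my own illustration, not the authors' code) of its input format: trajectories become interleaved (return-to-go, state, action) token sequences, and a causal transformer is trained to predict actions conditioned on the desired return.

```python
def returns_to_go(rewards):
    """Suffix sums: rtg[t] = sum of rewards from step t to the end."""
    rtg, running = [], 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return list(reversed(rtg))

def interleave(rewards, states, actions):
    """Build the (R_t, s_t, a_t) token sequence the transformer consumes."""
    rtg = returns_to_go(rewards)
    tokens = []
    for R, s, a in zip(rtg, states, actions):
        tokens.extend([("rtg", R), ("state", s), ("action", a)])
    return tokens

# A 3-step toy trajectory: returns_to_go gives [3.0, 2.0, 2.0],
# so the sequence starts with ("rtg", 3.0), ("state", "s0"), ("action", "a0"), ...
tokens = interleave(rewards=[1.0, 0.0, 2.0],
                    states=["s0", "s1", "s2"],
                    actions=["a0", "a1", "a2"])
```

At inference time you just prepend the return you *want* and let the model generate actions, which is what makes it "RL via sequence modeling" rather than value iteration.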