Hi Alexander, thanks for writing this up!
Some context: I used Anki for 1-2 years, completed the “Learning How to Learn” MOOC and read the book it was based on, taught 13-16 year olds math and English for 2 years, and have given EA presentations in Malaysia and previously in Singapore. I’m currently running EA Virtual Programs (I noticed that you’re in the intro program!). FYI, my opinions are mine and not CEA’s.
Alongside “learning how to learn better”, “learning how to prioritise which learning strategy fits a specific scenario” seems just as important. It’s really hard to know beforehand:
The value of information.
The value of easy retrieval of information.
I think for many of us, time is likely one of the biggest bottlenecks to better learning. For example, I really want to apply a lot of the meta-learning tools while reading The Happiness Trap, but I intuitively chose to do just two things:
Read and take summarised notes.
Write down how I want to practice the ACT therapy techniques from the book.
In my case, I don’t think deep learning (e.g. writing notes, creating spaced repetition cards, reflecting, doing exercises, discussing, etc.) is what I need, considering how busy life is for me right now. My end goal is more sustainable mental health, and I want to apply the tools I’ve read in the book. The value of information seems high here for achieving my goal, but the value of easy retrieval of information is low, because I don’t know how or when I’m going to use it.
But again, it’s hard to know in advance whether a given piece of information is valuable and should be easily retrievable. One failure mode is failing to make a connection with something else important because I didn’t do enough deep learning. For example, if I didn’t fully understand the concept of “cognitive fusion”, I might forgo a potential connection with another therapy technique that could help me more. But it’s really hard to know for sure beforehand.
Applying this to EA VP, I wonder if there are certain key learning outcomes that participants should really internalise through a lot of deep learning, and other, less important learning outcomes where reading and remembering fuzzy impressions is enough for most participants.
That makes me think we should be as clear as we can about the value of information and the value of easy retrieval for most of our learning outcomes, so that participants can say, for example, “Oh, EA VP says X is super important and I’ll likely need it in future work, so I should do more deep learning here. And Y isn’t so important, so I’ll just read it.”
Besides these two factors, I wonder if there’s a simpler heuristic for choosing when to prioritise deep learning versus shallow learning. Or something in the middle, which is the likelier case.