Want To Be An Expert? Build Deep Models
This piece is crossposted on my blog.
Good news! This post is shorter than it looks. There’s a bunch of supplemental info in the endnotes if you want the extended-edition director’s cut. But it’s much shorter if you skip those.
Summary: For impact, we need people who can really understand problems and then act on their understanding. Building deep models of a field is necessary for understanding problems and therefore developing expertise. I claim that researchers or strategists who want to have the most impact should spend a significant amount of time building these deep models early on in order to develop expertise as soon as possible. This could be 50%+ of your time for the first year or two, and then 10% or so on an ongoing basis.
Intro
It is really easy to fail to have an impact because you didn’t know something.
Someone once told me this story of how their work would make an impact (their theory of change): “A lot of people are suffering from mental health problems. Most of them don’t access any support despite interventions existing that can help. If we can make online interventions more accessible, maybe we can help a lot of people.”
This plan will probably fail to make an impact.
This answer is far too shallow. What online intervention do they intend to make, and why do they think it will work? I got 251 results when I typed “mental health app” into Google Play. What bottleneck is stopping these 251 other apps from solving the problem? Are they not evidence-based? Maybe people don’t know the apps exist? Maybe online apps just don’t work?
A crude theory of change is a good starting point for investigating the problem more. But if this is their final theory of change, then they will fail.
When I imagine someone who I expect to actually succeed at building an organization to improve mental health, I visualize an expert who can answer all of the questions above right away, with convincing arguments supported by facts that they know off the top of their heads. They might not have a direct answer to every question, but whenever they don’t have a direct answer they’ll have a good explanation of why that question is not particularly important. In short, these experts have an unusually complex mental map of the problem. I call these mental maps ‘deep models’.
A theory of change is a plan for how to fix a problem. A model is a map of the problem. Without that map, it’s much more difficult to make a good plan. You need to understand the landscape extremely well to do impactful research or start a great charity or, heck, succeed at basically anything where there’s no one to tell you what to do.*
Importantly, you don’t need to start out with a deep model. Impactful people usually started with their best guess at a theory of change, learned a bit more to deepen their model, checked if they needed to update their theory of change, and so forth. The important thing is to 1) keep learning and 2) keep checking if this is still the important thing.
Michael Plant’s cause profile is an example of a deep model for mental health. It’s a 7,000+ word explanation that includes detailed evidence, numbers to compare to other interventions, and crucial considerations that might change the conclusion.
I suspect you need a model as detailed as this before you’re likely to reach correct substantive conclusions about the action-relevant questions. Most people with models that detailed I’d call “experts”—experts loosely defined here as the top people in the world in their niche.
The world desperately needs more expert-level people working with good theories of change on important problems. I’m surrounded by well-intentioned effective altruists. Few of those people have the knowledge required to make superb theories of change (yet). This suggests that building deep models should be a top priority. (See endnote 1 for why I think models lead to expertise.)
Specifically, my claim is: if you want to be unusually impactful in a role that requires correctly understanding how the world works, it’s worth investing a significant amount of time upfront in building a deep model of the problem.
(*If your focus is executing on someone else’s strategy, then, well, this post might not be for you. E.g. support roles can be quite impactful without deep models. This post is aimed at people who are trying to do original research or set strategy for an intervention or trying to plan out a theory of change for themselves that actually works. See endnote 2 for why we can’t just defer to existing experts.)
What does an expert model look like?
There are a lot of cool things implied by the term “models”—gearsiness, pattern matching, tacit knowledge. However, it’s a common enough term that I’ll skip reinventing the wheel. (See endnote 3 for examples of deep models.) There’s one point on which I’m worried about being misunderstood:
Memorizing history facts isn’t going to make you the next Caesar.
Models are not about just accumulating facts about a problem. (Sorry, Anki cards probably aren’t the solution here.) It’s true you need a lot of baseline knowledge to build a good model, but choosing what to learn and understanding the connections between facts are more important.
The hard part here is figuring out what models you should be building. Saving the world doesn’t come with a roadmap, but you need a roadmap to save the world.
One solution is to iteratively bootstrap your way to having a better plan. It basically looks like the opposite of Rohin Shah’s description of how AI safety researchers fail to have an impact:
“We want to think, figure out some things to do, and then, if we do those things, the world will be better. An important part of that, obviously, is making sure that the things you think about, matter for the outcomes you want to cause to happen.
In practice, it seems to me that what happens is people get into an area, look around, look at what other people are doing. They spend a few minutes, possibly hours thinking about, “Okay, why would they be doing this?” This seems great as a way to get started in a field. It’s what I did.
But then they just continue and stay on this path, basically, for years as far as I can tell, and they don’t really update their models of “Okay, and this is how the work that I’m doing actually leads to the outcome.” They don’t try to look for flaws in that argument or see whether they’re missing something else.
Most of the time when I look at what a person is doing, I don’t really see that. I just expect this is going to make a lot of their work orders of magnitude less useful than it could be.”
_________________________________
Proactive examples of building models might look like:
Al Smith going from high school dropout to successful politician in the early 1900s:
Smith read “all the bills, even those which concerned the construction of a side road or a tiny dam in some remote upstate district. And as he read them, he tried to understand them. Why had one legislator included in a bill a provision that the dam be built by the Conservation Department rather than by the Department of Public Works? What had been in the mind of another that led him to specify that the side road must be surfaced with a specific type of asphalt?”
“Since most of the bills amended or referred back to other bills passed years before—and not described in the new bills—he took to spending evenings in the Legislative Library reading those old bills. Since some of the most confusing wording was based on legal technicalities, Smith began to borrow lawbooks from law libraries….Tired of listening to speeches he couldn’t understand, he would pay clerks for transcripts and study the transcripts.”
In the end, Smith wielded power among the legislators because he was “familiar with their little pet projects, their side roads, the peculiar problems of their districts.”
_________________________________
Sam Hilton on leading the team responsible for civil nuclear safety policy in the UK government:
Sam wanted to develop good models of regulatory policy. Sam had worked in regulatory policy previously but hadn’t actively focused on building models and expertise then.
So in this role, to force himself to build deeper models, he spent time outside of work hours drafting a written model describing best practice in regulatory policy and how it could be applied to AI policy.
He actively took steps to develop his models and expertise, including:
Finding work tasks that would inform his models (e.g. working on a national regulatory self-assessment).
Discussing relevant topics during workplace social interactions with colleagues.
Reading broadly around the topic. In particular, reading government papers.
Focusing his training days on unique experiences that would provide valuable insights and interactions with experts (e.g. shadowing an inspection of a nuclear site) rather than building expertise that could be found elsewhere (e.g. classroom lessons on how nuclear reactors work).
Reducing his working hours so he could spend more time on other tasks such as expertise building.
Sam went on to support effective altruism organizations regarding AI regulation policy and advise the UK government on regulatory policy in various capacities.
_________________________________
Rohin building his model for AI safety as a grad student (Rohin now works on the DeepMind safety team, runs the Alignment Newsletter, and consults for Open Philanthropy on AI safety):
“I was building a model of how the future progresses: “What are the key points? What are the key problems? What are the key decision points? Where’s there leverage to affect the future?”
That was a small part of it, really. Then there was also: “Okay, how does AI work? What does the future of AI progress look like? What are the different problems people are concerned about? How do they relate to each other?”
Really, most of this looks like building an internal model for what powerful AI systems will look like and also, to a lesser extent, how, if we have powerful AI systems, they will be used. As that model got more refined, it became relatively obvious where the intervention points were and what the theory of change for different interventions would be.
So, I don’t think the theory of change was ever very hard. It feels like the difficult part was having the model in the first place.”
Rohin has also summarized well over 1000 articles on AI safety for the Alignment Newsletter. He thinks that writing so many summaries forced him to learn what is important and what he can ignore.
“After joining the field in September 2017, I spent ~50% of my time reading and understanding other people’s work for the first 6 months. I think I was actually too conservative, and it would have been better to spend 70-80% of my time on this.
I literally had meetings with other at-the-time junior researchers where we would pick a prominent researcher and try to state their beliefs, and why they held them. Looking back, we were horribly wrong most of the time, and yet I think that was quite valuable for me to have done, since it really helped in building my inside-view models about AI alignment.”
_________________________________
Scott Young building his model of ultralearning, the topic of his Wall Street Journal bestselling book:
His first big step was a personal experiment to “learn the curriculum of MIT’s four year computer science undergrad, evaluate myself by trying to pass the final exams, complete the programming projects and finish in one year.” Then he did a second project learning four languages in one year.
Then he studied people who took on intense and ambitious learning projects. That checked out. But what if these people were just unusually good at learning quickly?
So he sent out a call: who wanted to try their own ultralearning experiment? He gathered a small group, gave them some advice, and let them run with it. No randomized controlled trial, but a nice real-world test of how ultralearning worked. Would his advice actually succeed when put to the test?
Wave away a bit of cherry-picking, and it did indeed succeed. His group included Tristan de Montebello, who decided to try his hand at public speaking. In a seven-month sprint, Tristan went from “near-zero experience to competing in the finals for the World Championship of Public Speaking”.
Somewhere in here Young formalized the steps, ran courses (even more real-world data!), and wrote a book on the whole thing. Most of his model building happened in action: testing his ideas on himself, on others he worked closely with, and with larger groups in courses. He needed to be in the field to see what worked and why.
_________________________________
How can this go wrong?
When I ask myself “how does someone trying to apply this advice go horribly, terribly wrong?”, two ideas spring to mind:
1. They go “Ah, I have no idea where to even start! I can’t possibly become an expert.”
Number One got tangled up in not immediately, personally knowing what to do, so they never seriously even considered trying to become an expert. First, they need to actually consider if this might be a path for them, even if it’s not in their conception of “normal” career paths (like “genius” isn’t in their normal set of career paths).
Second, they need to step back to meta tools—look for reading lists or research agendas, ask more experienced people how they succeeded, look for overviews of the field, read biographies or LinkedIn summaries to see how other people got started, try brainstorming for 5 minutes about what big questions there might be, etc. (See endnote 4 on not getting paralyzed.)
2. They go “Ah, I now know exactly what to do! I’ll start a two year training program studying this list of papers I’ve put together, and then I’ll be an expert.”
Number Two leapt too far ahead, assuming that the plan they can see now won’t change as they get new information. They need to set smaller goals, check if new information confirms their previous hypothesis about what’s best, try to quickly get new information that changes their plans, ask for feedback, murphyjitsu their plans, and only make big commitments once they have enough evidence to warrant that level of commitment.
I’m wary that some people will read the above examples and think “Ah, I need to hole up in a room by myself and think really hard!”
Lots of people read a bunch. Competing on number of hours of reading isn’t going to make you 10x as effective. You want to build up gearsy models by learning what’s important, what to prioritize and ignore, how things connect to each other, and what causes things to happen.
And for that, you usually need to do a lot more than just reading. Al Smith didn’t just read—he questioned why the bills were written that way to get at the underlying motives. Rohin didn’t just read papers; he wrote summaries of what was important about them.
Sometimes you do need to just hole up and think if you’re doing research (though even research usually goes better with regular feedback and discussion with peers).
That’s almost never enough when you’re planning interventions, starting a charity, working on policy, or, you know, doing anything that interacts with the broader world. For those, you need to be bouncing against reality as you build your model, so that you learn where you’re going wrong.
_________________________________
Make model building a priority early on
Before you have a good model, it’s really hard to plan good research directions or interventions. You need to put in the time to understand the landscape before your attempts at independent direct impact are likely to work.
So frontload your model building intensely in the first year or two after you commit to a career path. Becoming an expert will require building up your model over the course of years, but you can get a pretty good model in a year if you put in solid effort.
Frontloading model building causes the benefits to compound over time, for a number of reasons.
Having a model speeds up how you process and integrate new information. As you encounter new information, you have a map of the landscape to see where it fits in with your previous knowledge.
You can better figure out who the best experts are to learn from while deepening your model. It’s hard to evaluate expertise in an area where you have only a shallow understanding.
Perhaps most importantly, you can make better plans the better your model is. You can’t just create great theories of change from scratch. However, there’s a great feedback loop to starting with your best guess of a theory of change, deepening your model based on the theory of change, and then using that model to update your theory of change. (See theory of change as a hypothesis)
I’m a bit worried that people will dismiss this section with something like “I’m already learning in my job, I’m all set.” But I’m pushing for something more ambitious: setting aside time on a regular basis for learning, developing skills, and building your models.
It never feels like there’s time just to learn and understand something. There are a million things to do, and they all feel more urgent than building models. But you need to understand the problem unusually well before you’re going to have an unusually large impact. (See endnote 5 for a model on building deep models.)
For calibration, I expect that it’s reasonable to spend 50%+ of your time building models during your first couple years of research, and 10%-25% of your time on an ongoing basis after that.
If you’re in a full time job in something like policy, I expect you still need to take explicit time to build models (maybe something like 10%).
That’s half a day each week, at minimum.
We need more experts
Right now, only a handful of people in the world have deep models on the topics that effective altruists care about. Depending on how you count it, the EA community has between ten and a hundred experts for many of the top EA cause areas. Most of the people considered experts in these fields have less than ten years’ direct experience in the field.
This has a couple implications. First, the threshold to becoming an “expert” in the EA community is lower than in most fields. Second, we could really, really use more people investing now to build their knowledge and skills, so that in a few years, we’ll have people ready to step into these roles.
So, if you haven’t really sat down and asked whether you could ever be an expert in your niche, this is a good time to try out the thought. Could you build an expert-level deep model if you tried?
_________________________________
Note: I don’t think this post tells you how to make great models. I’m hoping that the post inspires people to start thinking at all about making models and becoming experts. So let me know in the comments if you’re interested in a followup post on *how* to build models.
Many thanks to Rohin Shah, Meg Tong, Nora Amman, Jonathan Mustin, Sam Hilton, and Aaron Gertler for their help and feedback.
Enjoyed the piece? Subscribe to EA Coaching’s newsletter to get more posts delivered to you.
_________________________________
Endnote 1: Why do I think that models lead to expertise?
There’s a moderately large voice in my head going, “Hey! Where are all the caveats? Where is the careful evaluation of evidence? This is LessWrong, dammit!* Give me the reasoning transparency!” (*Yes, I know this isn’t actually LessWrong.)
Ah, ahem. Yeah, my bad. I tried out a version that carefully laid out my evidence. However, some kind humans pointed out to me that dry, careful analysis is what *should* convince people, but unless you’re Scott Alexander, compelling writing needs examples and clear messages if you want people to act on it. Kind of makes sense – you usually need examples to really get the message anyway.
Since I cannot yet make pure reasoning transparency compelling, here’s a footnote instead for you wonderful, beautiful people demanding more nuance and evidence that building deep models leads to expertise. (I hope you exist.)
Why am I convinced that models are the path to expertise?
Honestly, I’m like…70% confident models are this important? Maybe the people I see with good models just happen to have them because these people are insanely smart or something, and all the poor people I’m nudging toward building models are doomed before they start. This doesn’t seem crazy. But, ah, most of what I do is predicated on people being able to make big improvements if they can learn the right strategies. And I do see people making improvements by learning better strategies. So I’m going to take that assumption as a given for practical purposes, and hope I don’t someday learn fate is more overdetermined by genetics than I already think it is.
There’s also the fuzzy line between a model and a skill. There is a major skill component to expertise. Knowing about a research topic, and knowing how to conduct great research are clearly different types of expertise. These skills may or may not also fall under the term “models”? Like, I’ve been learning rock climbing. It’s blindingly obvious that some of the people I climb with have more expertise than I do. Knowing the correct way to hold your hand for a particular kind of grip is a skill, but is it also a model? It feels like it is to me. If you don’t include skills, models feel like a smaller piece of becoming an expert.
Okay, but why am I convinced that models are the path to expertise?
If models weren’t crucial to expertise/being good at stuff, I might expect to see:
1. Lots of experts who don’t seem to have models.
False. I don’t see any really good people who don’t seem to have models. Experts can just give you a detailed take on what they think and why. (They might be wrong, which is deeply concerning, but that’s a slightly separate issue. Epistemics seem like a really important part of model building.) When I read Holden Karnofsky writing about his investigations, or Paul Christiano discussing economics, or Scott Alexander writing about psychiatry meds, it is obvious that they understand the topic much, much better than I do.
2. Successful people would dive straight into tackling big problems without learning/ramp up time.
This doesn’t seem to be true (e.g. the examples in the post, Alexander Hamilton, Robert Moses, Duncan Sabien, Scott Alexander, Holden – they all built up models over time).
3. People would be able to make useful theories of change from the get-go.
This rarely seems to be the case, although most people aren’t trying. People make terrible theories of change early in their career, because they just don’t have the information to figure out the important details.
4. Maybe intelligence would utterly dominate who becomes good regardless of models.
I don’t feel like I have enough evidence here. Maybe the appearance of models is overly predetermined by other factors that make one successful, like having an amazing memory or super intelligence.
5. Going straight for the goal would beat trying to generally understand the landscape.
This seems like a plausible hypothesis. I suspect this is true in support roles. I could imagine a world where just trying to accomplish the goal as efficiently as possible was more effective for building models than setting out to build models.
6. Models wouldn’t be correlated with better performance.
I expect models to have predictive power for whether someone will be doing useful things. I.e. a detailed model + theory of change should predict whether I would think someone is doing something useful when they tell me about their job. Maybe message me if you think you’re doing useful stuff but have no models?
And what about the problem that maybe people need a lot more info than this post gives before they are going to figure out how to make good models?
I would be surprised if many people could figure out how to make good models based on this post. There is a lot of depth and nuance to making good models.
How do you figure out which parts are most important? How do you figure out the best way to learn them? How do you know if you’re building a robust model or one that will fall apart like a bad relationship under the right test?
For example, to become a great writer, you could build models of sentence structure, or jokes, or stringing together arguments, or great examples. Or maybe you need deep subject matter expertise and great models of epistemics. Or maybe modeling the audience well is more important than all of those. Sigh.
The best advice I have is to cycle between theory of change and building models. Try to figure out what’s important, learn more, repeat. I have other tips for building models that I might make into another post, but even then I expect a big inferential gap between the ideal and what people will do. Maybe back to an apprenticeship model of getting regular feedback on your model building?
Okay, but actually. I don’t think people will make great models based on this post. I’m hoping that people will start thinking at all about making models and becoming experts. If I can get promising EAs trying to figure out how to become experts, I’m going to be happy.
_________________________________
Endnote 2: Why can’t we just defer to existing experts, instead of figuring stuff out for ourselves?
Most big questions won’t have a central authority you can defer to. There are occasions where you would do better by 1) taking a consensus of views or 2) deferring to the smartest, most epistemically sound expert you can find. But that usually only works when there is a specific decision you can directly get advice on—like deferring to GiveWell on where you should donate.
Effective altruism doesn’t yet have enough people with deep models of the causes effective altruists care about to provide that supervision to all of the up-and-coming researchers and leaders. In theory, you could have an expert produce a plan for what you should do, that you then execute to have impact. In practice, no such plan survives contact with the real world. To replan on the fly, you need a model, or extremely regular input from the expert with the model to do the replanning for you. I’ve spoken with various managers in EA roles, and they generally estimate they can only manage four or five people at a time really well. We don’t have nearly enough people setting strategy for them to manage everyone at that ratio.
_________________________________
Endnote 3: Examples of deep models
(Note: these are compressed versions. Squeezing a model down to a shareable size necessarily cuts out a lot of the details in the authors’ heads.)
This talk by Buck Shlegeris is a good example of building models even when you’re really unsure about a lot of things. I particularly liked the part in the Q&A where he discussed the people he defers to and how he evaluates who to defer to.
Scott Alexander probably has many deep models, but one I’ve followed is predictive processing (e.g. this 2017 post and this 2018 post). I particularly valued seeing Scott slotting together different information—since deliberately integrating knowledge is one of the key factors setting deep models apart from just memorizing a bunch of data.
The 80,000 Hours problem profile on global catastrophic biological risks was written by Greg Lewis, a researcher who studies biorisks. I liked how the level of detail in the post’s considerations indicated the depth of the model generating the career advice.
GiveWell charity recommendations such as their write-up on the Against Malaria Foundation. When I was at GiveWell, the care they put into selecting which questions to research impressed me. There is an unbelievable amount of data that you could try sorting through to pick the best charities—charity websites, academic studies of interventions, economic studies of flow-through effects, on-the-ground visits, other funders. The list goes on, and the final reports reflect how GiveWell managed to choose the key pieces that would influence their final decisions. That, plus the sheer comprehensiveness of the research, makes this a good example of a deep model.
Working by Robert Caro gives a glimpse into the incredible depth of research the man puts into his biographies. His recurring mantra is “turn every page”—I have no idea how many thousands upon thousands of documents he read while researching LBJ, but he guesses that he did thousands of interviews.
_________________________________
Endnote 4: You don’t need to start as an expert
Now I’m imagining a rather dejected audience saying “Do I have to be able to write a book before I have a model?”
Again, you don’t need to start with write-a-book-depth. Expecting that you need to become really, really good at something right at the beginning is a good way to paralyze yourself. You don’t have to have it all figured out; you don’t need to have good models right now. (Though you might want to get to book-depth eventually if you’re building a really deep model.)
If you’re struggling with feeling paralyzed by trying to get your theory of change perfect or otherwise thinking you need to have everything figured out NOW, you might want to start getting used to Doing Things No One Expects You To. Once that’s a bit easier, come back to trying to do the Right Things.
Personally, I find that building a model is a more straightforward task than trying to solve a problem. Developing deep models is really about error correction, rather than getting it right on the first try.
Thus, you want to a) start somewhere (starting just about anywhere is better than never getting started) and b) make sure that you keep updating/learning and that the “direction of travel” is towards getting less wrong.
_________________________________
Endnote 5: A model for why to build models
“For calibration, I expect that it’s reasonable to spend 50%+ of your time building models during your first couple years of research, and 10%-25% of your time on an ongoing basis after that.
If you’re in a full time job in something like policy, I expect you still need to take explicit time to build models (maybe something like 10%).”
On the face of it, this is an extraordinary claim. Building a deep model of a field or problem area usually takes thousands of hours. Should you spend that time on something that won’t produce concrete output?
I claim it’s not just worth the time, but that building deep models is one of the best things you can do if you’re doing research or setting strategy.
Absolutely deep models
The world is really complicated. Like, really, really complicated. It’s not easy to figure out the answers to big questions.
And there’s not a lot of low hanging fruit left. There are a lot of people trying to solve the big problems in the world. If the problem was easy to solve, we probably would have solved it already.
Complicated problems require complex and detailed models. A robust, gearsy model that incorporates lots of data increases your chances of being right about big questions.
This sort of deep model helps you figure out what questions to ask in the first place. You need a lot of information just to locate new hypotheses among all the possibilities in the world. Figuring out where to look, what experiment to run, or who to ask is often harder than finding the answer once you know where to start.
A deep model also increases the chances of being right about solutions once you find them. For example, there may be a bunch of crucial considerations that would completely change your conclusion. The more you build out your model, the more crucial considerations you’ve already identified.
Comparatively deep models
It also matters how deep your model is compared to other people’s models. By the time you can generate a detailed model, only other people who have spent a lot of time on the question are likely to understand the issue better than you. You’ve positioned yourself to generate better plans and insights than most people who lack that depth of understanding.
That makes you more likely to be among the top in your field.
_________________________________
Yes please!
Same here. I feel like I don’t have the executive function to do so since I tend to be interested in a bunch of things at once and generally have generalist tendencies. I’d also be curious to hear more about the niche of being a generalist in the EA community, since they do provide value in our society.
Great to have this written up. I’ve often had this experience when talking to senior professors; they tend to have so much cross-referential knowledge that any piece of data just does much more work for them than for non-experts. I just never explicitly looked at expertise from a model perspective. Your post gives good pointers, but are there any things you see good model-builders intuitively do better? Like not just reading but questioning and grokking. I feel like model-building is also not fully learnable but somewhat distributed among people, so I’d be interested in possible differences.
What are the sources for your examples? Particularly interested in Al Smith but I imagine other people might be interested in the others as well.
Good question! The Al Smith example is from The Power Broker: Robert Moses and the Fall of New York by Robert Caro. The examples with Sam and Rohin were from personal conversations/interviews. I think I also drew in a bit from Rohin’s FAQ https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/. The Scott Young example I pieced together from his blog post and book. I tried to link to some of the relevant blog posts above.