I create effective, scalable educational programs. I want them to help people make better decisions, become more empathic, and become more effective in their work (esp. their research). I’m an award-winning educator, with national awards, international senior fellowships, and the highest honour from my university. I also have a strong academic research background: I’m Chief Investigator on $3.7m of competitive, industry-partnered research grants; have published in the Scimago #1 journals for psychology, applied psychology, ageing, paediatrics, education (three times, see #1, #2, #3), and sport science (twice, see #1 & #2); and my work is cited almost 4x the world average (according to InCites; all data as of June 2021).
Michael Noetel
Perfect, thanks!
I understand what you’re saying about the tension. As someone trained in psychology, I’ve seen a litany of papers that ‘solve the problem of not understanding’ with little or no ‘problem solving’ benefit.
Having said that, I think those incentives are changing. In the UK and Australia, universities are now being evaluated and incentivised based on how well they solve problems (e.g. https://www.arc.gov.au/engagement-and-impact-assessment). I think, in general, your motivation and career would not be hurt by doing things that focus on engagement with people who have important problems, and helping to impact their decision-making. Personally, I think even the best ‘solve the problem of not understanding’ questions start with a problem in real life. I think medicine is a good example of where academic success is often directly correlated with how well your research either solves important problems or has promise to solve problems.
As a fledgling economist, however, I do see your point. There are strong incentives in some fields to come up with some new theory rather than to solve some existing problem. I guess this is true in medicine too, where there’s more money in research on male pattern baldness than malaria (https://www.independent.co.uk/news/world/americas/bill-gates-why-do-we-care-more-about-baldness-malaria-8536988.html).
Still, I never dissuade my students from trying to solve important problems. Even if one of their studies is something more ‘theoretical’, I try to ensure they’re working backwards from the important problem that’s worth solving. To use your example above, even if you do come up with a more general theory of philanthropic portfolio management, you’d hope that your intro and discussion could still speak to how it helps someone answer the policy question: “How much money should I allocate to x vs. y?”
One thing I’d point out is that there are many areas where solving problems does also lead to academic success. Systematic reviews are cited to the hilt because they try to solve an important problem (“what works?”). Knowledge translation is a whole field of taking stuff trapped in universities and getting it out into practice to solve problems. Very, very few interventions do economic analyses of their cost-benefit, and those that do often struggle to put a dollar value on the benefit. For example, in this study, we could calculate the cost per bit of childhood cardiovascular health, but couldn’t put a dollar value on the bit of cardiovascular health: https://jamanetwork.com/journals/jamapediatrics/article-abstract/2779446

One of my deepest regrets was trying to focus my PhD on something that was theoretically interesting but practically far from helping the most disadvantaged (i.e., do you need to control or accept your emotions to perform under pressure?). I did this because I thought that was what you were ‘supposed’ to do, and because I thought it was interesting. If I had my time over, I would have started with a bigger problem, then worked backwards to find something interesting and ‘at the frontier of knowledge.’
PhD Scholarships for EA projects in psychology, health, IIDM, or education
I spent a few years as a professional before coming back to do my PhD. I think what you’re describing sounds like a good model. The only criterion some professionals struggle to meet is ‘equivalent of honours’; that is, to get into a PhD you need to have completed a thesis before.
Whatever happens with the discussions about copyright, I really hope this continues to exist. I listened to six forum posts at 5am today while walking a baby around to sleep… very good for parental mental health
3 months on, and this has become one of the most valuable EA/Alignment/Rationality dissemination innovations I’ve seen. Has replaced almost all my more vapid listening. Would get through an extra 10-20 hours of content a week. Thank you Nonlinear/Kat/Emerson
Yeah these are interesting questions Eli. I’ve worked on a few big RCTs and they’re really hard and expensive to do. It’s also really hard to adequately power experiments for small effect sizes in noisy environments (e.g., productivity of remote/in-person work). Your suggestions to massively scale up those interventions and to do things online would make things easier. As Ozzie mentioned, the health ones require such long and slow feedback loops that I think they might not be better than well (statistically) controlled alternatives. I used to think RCTs were the only way to get definitive causal data. The problem is, because of biases that can be almost impossible to eliminate (https://sites.google.com/site/riskofbiastool/welcome/rob-2-0-tool), RCTs seldom provide perfect causal data. Conversely, with good adjustment for confounding, observational data can provide very strong causal evidence (think smoking; I recommend my PhD students do this course for this reason: https://www.coursera.org/learn/crash-course-in-causality). For the ones with fast feedback loops, I think some combination of “priors + best available evidence + lightweight tests in my own life” works pretty well to see if I should adopt something.
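To illustrate why small effects are so expensive to detect, here’s a rough back-of-the-envelope power calculation. This is only a sketch using the standard normal-approximation formula for a two-group comparison; the effect sizes are illustrative, not taken from any specific study above:

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.8416):
    """Approximate sample size per group for a two-group comparison
    of means, using the normal-approximation formula:
        n ~= 2 * (z_alpha + z_beta)^2 / d^2
    Defaults correspond to two-sided alpha = 0.05 (z = 1.96)
    and 80% power (z = 0.8416).
    """
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A 'small' standardised effect (Cohen's d = 0.2) needs roughly 400
# participants per arm; a 'medium' effect (d = 0.5) needs far fewer.
print(n_per_group(0.2))  # 393
print(n_per_group(0.5))  # 63
```

Halving the effect size quadruples the required sample, which is why adequately powered RCTs for small effects in noisy settings get expensive so quickly.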
At a meta-level, in an ideal world, the NSF and NIH (and global equivalents) are probably designed to fund people to address questions that are most important and with the highest potential. There are probably dietetics/sleep/organisational psychology experts who have dedicated their careers to questions #1-4 above, and you’d hope that those people are getting funded if those questions are indeed critical to answer. In reality, science funding probably does not get distributed based on criteria that maximise impartial welfare, so maybe that’s why #1-4 would get missed. As mentioned in a recent forum post, I think the mega-org could be better focused on nudging scientific incentives toward those questions rather than working on those questions ourselves https://forum.effectivealtruism.org/posts/JbddnNZHgySgj8qxj/improving-science-influencing-the-direction-of-research-and
I should clarify: RCTs are obviously generally >> even a very well controlled propensity score matched quasi-experiment, but I just don’t think the former is ‘bulletproof’ anymore. The former should update your priors more but if you look at the variability among studies in meta-analyses, even among low-risk-of-bias RCTs, I’m now much less easily swayed by any single one.
What about ‘card-carrying EAs’? Doesn’t have the same dark connotations as “drank the kool-aid” and does somewhat exemplify the difference you’re hinting at.
Maybe GWWC can start printing cards 😅
https://writingexplained.org/idiom-dictionary/a-card-carrying-member
Argh bugger, this conflicts with EAGx so I won’t make it, but I would love to be kept in the loop
Classic Aird: great collection of links to useful resources. Thanks mate. Looking forward to meta-Aird: a collection of links to the best of Aird’s collection of links.
Done
People new to EA might not know they can get the book for free by signing up to the 80,000 hours mailing list, and it’s also available on Audible
We all teach: here’s how to do it better
This is a useful list of interventions, some of which are mentioned in the post (e.g., quizzes; we’ve summarised the meta-analyses for these here). I think steps 1, 2 and 3 from the summary of the above post are the ‘teacher focused’ versions of how to promote deliberate practice (have a focus, get feedback, fix problems). Deliberate practice literature often tells learners how they should structure their own practice (e.g., how musicians should train). Teaching to others is a useful way to frame collaboration in a way that makes it safe to not know all the answers. Thanks for the nudges.
Thanks for taking the time to add these really useful observations, Seb.
One downside to this approach is that it might lead to Goodharting and lead the teacher to go into “exploitation” mode. E.g., I worry that I might become too attached to a specific outcome on behalf of the students and tacitly start to persuade (similar to some concerns expressed by Theo Hawkins) and/or neglect other important opportunities that might emerge during the program. How do you think of that risk?
It’s been a while since I read Theo’s post so I might be missing the mark here. I agree both explore and exploit are important, especially for young people. I haven’t thought deeply about this but my intuition says “if x is also important, be explicit that you have multiple goals.” For example, if ‘create a personal theory of change’ via Future Academy is the goal, you might also want people to ‘create tentative career plans for 5 distinct careers’, or ‘develop connections so you have 3 people you could call to ask for career advice’. Sure, the latter isn’t a ‘learning objective’ and might be better left unsaid. Still, I think a generally good way of avoiding Goodharting might be using multiple goals or criteria for success.
2. Can you say anything about what forms of summative assessment are particularly useful? For Future Academy, we’re contemplating pitching project ideas or the presentation and discussion of career plans
The word that comes to mind is to make it ‘authentic.’ Basically, make it as close as possible to the real world skill you want people to do. This is rare. Universities expect critical thinking, creativity, and communication, but use recall-based multiple-choice questions. I’ve seen essays and reflections to assess interpersonal skills, instead of videos or presentations. Pitching project ideas and presenting career plans sounds well above average. If I had to nit-pick, I’ve never ‘presented my career plans’, so to make it slightly closer to something people might do anyway would be ‘write a grant application.’
I think there’s a typo under 3a. (“Formative assessments” —> formative activities)?
Both are things; I should have clarified that. Formative assessments are formative activities that count toward a grade or completion status. As mentioned by another commenter, low-stakes quizzes are helpful for providing feedback and accountability to learners, but better fit university courses than fellowships etc.
I worry that this might be true on average for average university students but might not generalize to the subpopulation that some portion of community-building efforts is targeted towards (e.g., people who are in the 90th percentile on various domains, including openness to experience, conscientiousness, need-for-cognition, etc.). How worried are you about this?
This is an important question. I don’t think I know yet how big a problem this is (as I said, people should reach out if they want to work on it). One of the benefits of having worked in sport and performance psychology is that it mostly focuses on people in the top 1–5% of their field. As far as I can tell, the core principles underlying most of the above (psychological needs; deliberate practice; cognitive load limits) still apply to those people. You do need to calibrate the challenge to the person. People in the top 5% are going to be bored if you spend 10 hours explaining a t-test. So, I’m sure some things don’t generalise perfectly, but I think that’s more likely to be the specific techniques (e.g., ‘use quizzes’) than the mechanisms (e.g., ‘provide feedback’).
… being generally good people (or virtuous) in addition to the unique virtues you mentioned appears important as we have some research showing that this might be off-putting. Finally, same-race role-models appear to be particularly important.
Yeah I didn’t go into this much so it’s a good pickup. Both are useful to remember.
So in education ‘agency’ is often defined as ‘agentic engagement’—basically taking ownership over your own learning. I couldn’t find any good systematic reviews on interventions that increase agentic engagement. This is pretty weak evidence and might have a healthy dose of motivated reasoning (my end, and theirs), but people who have thought about agency for longer than I have seem to think...
In conclusion, the answer as to how teachers can support students’ agentic engagement is to adopt a significantly more autonomy-supportive classroom motivating style.
… so I don’t have any better ideas than those described above.
Ahhh! Yes, thanks. Fixed.
Could Nonlinear Library or Perrin Walker do audio versions of these articles? 🙏
Great initiative @MichaelA. I’m not sure what a ‘sequence’ does, but I assume this means there’ll be a series of related posts to follow, is that right?