I create effective, scalable educational programs. I want them to help people make better decisions, become more empathic, and work more effectively (especially in their research). I’m an award-winning educator, with national awards, international senior fellowships, and the highest honour from my university. I also have a strong academic research background: I’m Chief Investigator on $3.7m of competitive, industry-partnered research grants; I have published in the Scimago #1 journals for psychology, applied psychology, ageing, paediatrics, education (three times; see #1, #2, #3), and sport science (twice; see #1 & #2); and my work is cited at almost 4x the world average (according to InCites; all data as of June 2021).
Michael Noetel 🔸
We all teach: here’s how to do it better
People new to EA might not know they can get the book for free by signing up to the 80,000 Hours mailing list; it’s also available on Audible.
Done
Classic Aird: great collection of links to useful resources. Thanks mate. Looking forward to meta-Aird: a collection of links to the best of Aird’s collection of links.
Argh bugger, this conflicts with EAGx so won’t make this but would love to be kept in the loop
What about ‘card-carrying EAs’? Doesn’t have the same dark connotations as “drank the kool-aid” and does somewhat exemplify the difference you’re hinting at.
Maybe GWWC can start printing cards 😅
https://writingexplained.org/idiom-dictionary/a-card-carrying-member
I should clarify: RCTs are obviously, generally, >> even a very well-controlled, propensity-score-matched quasi-experiment, but I just don’t think RCTs are ‘bulletproof’ anymore. An RCT should update your priors more, but if you look at the variability among studies in meta-analyses, even among low-risk-of-bias RCTs, I’m now much less easily swayed by any single one.
Yeah, these are interesting questions, Eli. I’ve worked on a few big RCTs and they’re really hard and expensive to do. It’s also really hard to adequately power experiments for small effect sizes in noisy environments (e.g., the productivity of remote vs. in-person work). Your suggestions to massively scale up those interventions and to do things online would make things easier. As Ozzie mentioned, the health ones require such long and slow feedback loops that I think they might not be better than well (statistically) controlled alternatives.

I used to think RCTs were the only way to get definitive causal data. The problem is, because of biases that can be almost impossible to eliminate (https://sites.google.com/site/riskofbiastool/welcome/rob-2-0-tool), RCTs seldom give perfect causal data. Conversely, with good adjustment for confounding, observational data can provide very strong causal evidence (think smoking; I recommend my PhD students do this course for that reason: https://www.coursera.org/learn/crash-course-in-causality). For the questions with fast feedback loops, I think some combination of “priors + best available evidence + lightweight tests in my own life” works pretty well to decide whether I should adopt something.
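To make the power problem concrete, here’s a minimal sketch in Python (assuming the statsmodels library; the effect size is hypothetical, not from any particular study) of how many participants you’d need to detect a small effect:

```python
# How many participants per arm does a two-arm RCT need to detect a small
# effect? Hypothetical numbers for illustration only.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(
    effect_size=0.1,  # Cohen's d = 0.1: a plausibly small effect in a noisy setting
    alpha=0.05,       # conventional false-positive rate
    power=0.8,        # conventional 80% power
)
print(f"Participants needed per arm: {n_per_arm:.0f}")  # roughly 1,600 per arm
```

Because the required sample scales with 1/d², halving the detectable effect quadruples the sample, which is why powering trials for small effects in noisy environments gets expensive so quickly.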
At a meta-level, in an ideal world, the NSF and NIH (and their global equivalents) are designed to fund people to address the most important, highest-potential questions. There are probably dietetics/sleep/organisational psychology experts who have dedicated their careers to questions #1-4 above, and you’d hope those people are getting funded if those questions are indeed critical to answer. In reality, science funding probably does not get distributed based on criteria that maximise impartial welfare, so maybe that’s why #1-4 would get missed. As mentioned in a recent forum post, I think the mega-org could be better focused on nudging scientific incentives toward those questions, rather than working on those questions ourselves: https://forum.effectivealtruism.org/posts/JbddnNZHgySgj8qxj/improving-science-influencing-the-direction-of-research-and
3 months on, this has become one of the most valuable EA/Alignment/Rationality dissemination innovations I’ve seen. It has replaced almost all my more vapid listening, and I get through an extra 10-20 hours of content a week. Thank you Nonlinear/Kat/Emerson.
Academic research (see noetel.com.au)
I’m sure you’ve read this paper that guides young psychologists like you in some useful directions: https://psyarxiv.com/8dw59/
If you’re committed to mental health, then the research agenda of the Happier Lives Institute is useful to consider (https://www.happierlivesinstitute.org/), as are scalable online interventions like those of Spencer Greenberg’s team (see MindEase and UpLift: https://www.sparkwave.tech/)
If you’re more flexible, then your skills from counselling psychology would be useful in movement building (because basically you learn how to be a warm, supportive person who helps people change behaviour [in this case, maybe careers]): https://80000hours.org/problem-profiles/promoting-effective-altruism/
Personally, I switched out of clinical/counselling work because it was so difficult to scale. There are also many more psychologists (etc.) per head in wealthy countries than overseas, so many other options are more neglected: https://founderspledge.com/stories/mental-health-report-summary
Whatever happens with the discussions about copyright, I really hope this continues to exist. I listened to six forum posts at 5am today while walking a baby around to sleep… very good for parental mental health
I spent a few years as a professional before coming back to do my PhD. I think what you’re describing sounds like a good model. The only criterion some professionals struggle to meet is the ‘equivalent of honours’; that is, to get into a PhD you need to have completed a thesis beforehand.
PhD Scholarships for EA projects in psychology, health, IIDM, or education
I understand what you’re saying about the tension. As someone trained in psychology, I’ve seen a litany of papers that ‘solve the problem of not understanding’ with little or no ‘problem solving’ benefit.
Having said that, I think those incentives are changing. In the UK and Australia, universities are now being evaluated and incentivised based on how well they solve problems (e.g. https://www.arc.gov.au/engagement-and-impact-assessment). I think, in general, your motivation and career would not be hurt by doing things that focus on engagement with people who have important problems, and helping to impact their decision-making. Personally, I think even the best ‘solve the problem of not understanding’ questions start with a problem in real life. I think medicine is a good example of where academic success is often directly correlated with how well your research either solves important problems or has promise to solve problems.
As a fledgling economist, however, I do see your point. There are strong incentives in some fields to come up with a new theory rather than to solve an existing problem. I guess this is true in medicine too, where there’s more money in research on male pattern baldness than on malaria (https://www.independent.co.uk/news/world/americas/bill-gates-why-do-we-care-more-about-baldness-malaria-8536988.html).
Still, I never dissuade my students from trying to solve important problems. Even if one of their studies is something more ‘theoretical’, I try to ensure they’re working backwards from an important problem that’s worth solving. To use your example above, even if you do come up with a more general theory of philanthropic portfolio management, you’d hope that your intro and discussion could still speak to how it helps someone answer the policy question: “How much money should I allocate to x vs. y?”
One thing I’d point out is that there are many areas where solving problems does also lead to academic success. Systematic reviews are cited to the hilt because they try to solve an important problem (“what works?”). Knowledge translation is a whole field dedicated to taking stuff trapped in universities and getting it out into practice to solve problems. Very, very few interventions do economic analyses of their cost-benefit, and those that do often struggle to put a dollar value on the benefit. For example, in this study, we could calculate the cost per unit of childhood cardiovascular health, but couldn’t put a dollar value on that unit: https://jamanetwork.com/journals/jamapediatrics/article-abstract/2779446

One of my deepest regrets was focusing my PhD on something that was theoretically interesting but practically far from helping the most disadvantaged (i.e., do you need to control or accept your emotions to perform under pressure?). I did this because I thought that was what you were ‘supposed’ to do, and because I thought it was interesting. If I had my time over, I would have started with a bigger problem and then worked backwards to find something interesting and ‘at the frontier of knowledge.’
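On the cost-effectiveness point above, the ratio itself is simple arithmetic; the hard part is pricing the denominator. A hypothetical sketch (invented numbers, not taken from the linked study):

```python
# Hypothetical cost-effectiveness calculation (illustrative numbers only,
# not from the JAMA Pediatrics study linked above).
total_cost = 500_000.0        # cost of delivering the intervention
health_units_gained = 250.0   # units of cardiovascular health gained
cost_per_unit = total_cost / health_units_gained
print(f"Cost per unit of cardiovascular health: ${cost_per_unit:,.2f}")
# A full cost-benefit analysis would need the step we couldn't take:
# converting each unit of cardiovascular health into dollars.
```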
Critical thinking about research, including what biases are most common, why they apply, and how to know if an intervention works: https://training.cochrane.org/handbook https://training.cochrane.org/online-learning
Perfect, thanks!
Great initiative @MichaelA. I’m not sure what a ‘sequence’ does, but I assume this means there’ll be a series of related posts to follow, is that right?
My little dude is only 2 but one of my best mates. Have never had more laughs than as a dad. But, never had more tears either. It’s turbulent, but the highs are high.
This is a useful list of interventions, some of which are mentioned in the post (e.g., quizzes; we’ve summarised the meta-analyses for these here). I think steps 1, 2, and 3 from the summary of the above post are the ‘teacher-focused’ versions of how to promote deliberate practice (have a focus, get feedback, fix problems). The deliberate practice literature often tells learners how they should structure their own practice (e.g., how musicians should train). Teaching others is a useful way to frame collaboration so that it’s safe not to know all the answers. Thanks for the nudges.