AI safety governance/strategy research & field-building.
Formerly a PhD student in clinical psychology @ UPenn, college student at Harvard, and summer research fellow at the Happier Lives Institute.
Anyone: What do you think of the points I raised? Do you disagree with anything? Do you think I should have explained anything more clearly?
Anyone: This is my first forum post, but it won’t be my last. Do you have any feedback on my writing? Please don’t hold back!
First off, I love this idea. I’ve been thinking about doing a “birthday fundraiser” for my birthday (which is in January) and I will definitely consult this before I post.
A few thoughts/pieces of feedback:
I’d love to hear more about your decision to go with a career-focused post rather than a donation-focused post. I see how someone changing their career could have an immense impact (especially if they are able to find something impactful that they’re also very good at). However, I’m skeptical about the proportion of people who would seriously consider changing their career paths as a result of this. Maybe my forecast is off, though—I wouldn’t have expected 5 messages/calls! Would love to hear more about how those go.
I wonder if a post that had info about careers and donations would be effective. Maybe readers would be left feeling confused and it’s better to focus on one thing. But maybe adding a paragraph about GiveWell and including a quick blurb would be enough for some people, without distracting too much from the focus on 80k hours. What do you think?
At first glance, I think it would’ve been net positive to explicitly mention EA. Personally, I think people would have seen this as a “birthday post” (especially because of your great/clear hook) rather than “just another EA post.”
I think your description of existential risk is great—one of the most accessible/engaging that I’ve seen. I wonder if mentioning existential risk might turn people off, though (then again, it seems like you would’ve had to mention it since you’re working at the Existential Risks Initiative).
I wonder if you could’ve added a sentence or two to say more about what 80k hours is. Right now, it’s just described as “a cool new site called 80000hours.org.” I wonder if some readers would’ve wanted to have a bit more context about what it is, who runs it, how it comes up with its career advice, or why they should trust it.
I agree that it’s pretty long. Not exactly sure what I would shorten/cut, though (do I get the “least helpful advice” prize now?). Mayyybe you could’ve shortened the third paragraph (about cracking the code and tech companies) and your description of existential risk.
I wonder if you could conclude by re-emphasizing that this is something you want the reader to do *for your birthday*. I’ve been thinking about adding something like, “If you were thinking about getting me something for my birthday, or calling me, or even just wishing me ‘happy birthday!’, please don’t. Instead, I’d rather you spend a few minutes [reading 80k hours/donating to a GiveWell-approved charity].”
Overall, this is fabulous and inspiring. I’m definitely going to consult this as I draft my own birthday post, and I might even post on the forum a few weeks before my birthday for feedback :)
Oops! Here’s the correct link:
Thank you for this post! I want to raise another potential issue with forecasting tournaments: using Brier scores.
My understanding is that Brier scores take the squared difference between your forecast and the true value. For example, if I say there’s a 70% chance something will happen, and then it happens, my Brier score for that event is (1 − 0.7)² = 0.09.
I think the fact that Brier scores use the squared difference (as opposed to the absolute difference) is non-trivial. I’ll illustrate this a bit with a simple example.
Consider two forecasters who forecast on three events. Let’s also say that all three events happen.
Forecaster A believed that all three events had a 70% chance of happening.
Forecaster B believed that two events had an 80% chance of happening, and one event had a 50% chance of happening.
Who is the better forecaster? I think the answer is pretty unclear. If we use absolute differences, the forecasters are tied:
Forecaster A-- (1-0.7) + (1-0.7) + (1-0.7) = 0.9
Forecaster B-- (1-0.8) + (1-0.8) + (1-0.5) = 0.9
But if we use Brier scores, Forecaster A has the edge (lower Brier scores are better):
Forecaster A-- (1-0.7)^2 + (1-0.7)^2 + (1-0.7)^2 = 0.27
Forecaster B-- (1-0.8)^2 + (1-0.8)^2 + (1-0.5)^2 = 0.33
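For anyone who wants to play with this, here’s a small sketch of the comparison above (the forecaster names and numbers are just the hypothetical example from this comment):

```python
# Two forecasters predict three events that all occurred (outcome = 1).
# Forecaster A assigned 70% to each event; B assigned 80%, 80%, and 50%.
forecasts_a = [0.7, 0.7, 0.7]
forecasts_b = [0.8, 0.8, 0.5]

def absolute_score(forecasts, outcome=1.0):
    """Sum of absolute differences between forecast and outcome (lower is better)."""
    return sum(abs(outcome - p) for p in forecasts)

def brier_score(forecasts, outcome=1.0):
    """Sum of squared differences between forecast and outcome (lower is better)."""
    return sum((outcome - p) ** 2 for p in forecasts)

# Absolute scoring: both forecasters total 0.9 (a tie).
# Brier scoring: A totals 0.27, B totals 0.33, so A comes out ahead.
```

The squaring is what breaks the tie: B’s single 50% forecast contributes 0.25 on its own, more than A’s three 0.09 penalties combined would suggest per event.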
In other words, Brier scores penalize you for being “very wrong” (relative to the scoring system that uses absolute differences). You could make an argument that this is justified, because people ought to be penalized more for being “more wrong.” But I haven’t seen this argument laid out—and I especially haven’t seen an argument to suggest that the penalty should be a “squared” penalty.
I haven’t considered all of the implications of this, but I imagine that people who are trying to win forecasting tournaments could find some ways to “game” the scoring system. At first glance, for instance, it seems like Brier scores penalize people for making “risky” forecasts (because being off by a lot is much worse than being off by a little bit).
I’m curious if others think this is a problem or think there are solutions.
This is fantastic! I know several EAs who “feel guilty about not being able to work on [AI safety or one of the other top cause areas] and compare themselves negatively to others who can,” and I think this could be a great resource for them.
Upon skimming the article and the Google Doc worksheet, I’m struck by how long/involved the process is. On one hand, this makes sense—this is about career planning, and people who are serious about changing their careers should be willing to put in the effort. On the other hand, I wonder if there could be shorter/easier/lower-effort versions of some of these tools.
In its current form, I think the length and user interface of the tool will appeal to some highly dedicated EAs with a lot of spare time on their hands. The tool might be tremendously helpful for them, but I think there are some motivated-but-not-quite-as-dedicated EAs who would benefit from shorter, more streamlined versions.
Some specific ideas include:
Changing some of the open-ended questions in the worksheet to multiple-choice questions (e.g., having a list of common priorities in the “Top 3-6 personal priorities” section).
Including more examples in the worksheet (there seem to be examples for some sections but not others; it might be helpful for nearly every question to have 1-2 examples).
Creating a “Sparknotes” version of this that is at least 50% shorter. Broadly, I think the guide can be divided into two parts: “Thinking about what matters to you” (sections 1, 2, and 4) and “Brainstorming/Evaluating concrete options” (sections 3, 5, 6, 7, and 8). I’d predict that EAs, especially those familiar with 80k hours, have done more of the “thinking about what matters” and less of the “concretely brainstorming/evaluating options.” With that in mind, I think it could be useful to develop (shorter/more streamlined) tools that target the “applied” sections (e.g., generating career options, determining your next steps, brainstorming how to get feedback and committing to it).
Developing some of these sections into standalone modules (similar to those on https://www.clearerthinking.org/, which do a great job of prompting serious reflection in relatively short amounts of time).
Including a vignette (either in the main tool or maybe as a separate file altogether) in which the reader gets to see how someone (real or hypothetical) goes through the whole process. (I think this is similar to the comment about case studies).
I think the main objection to several of these suggestions is that they might lead to shallower reflection than the longer/more effortful version. I think this is fair—some people who otherwise might have gone with the more effortful version might instead go with the “lazier” version (and thus not benefit as much as they could have).
To mitigate this risk, I think you could recommend/nudge people toward the higher effort version, but still have complementary lower-effort versions for the (many) EAs who would be willing to do bite-sized versions of this but not the “18 page Google Doc with many open-ended questions & a complementary article” version. I also think there could be “foot-in-the-door” benefits—if someone likes the shorter version, they might be more inclined to think seriously about devoting several hours or weeks to more in-depth reflection.
Nonetheless, I think this is a fantastic tool and I will be recommending it to several friends :) thank you for making it!
Ah, I completely missed that paragraph. Thank you for pointing it out, and best of luck as you create more digestible versions!
After reading the paragraph, I have a few additional thoughts:
I like the idea of a “just the key messages” version that focuses on spreading the ideas rather than why/how to apply them. But I wonder if it’d be even more important to release a version that focuses on the application. My guess is that most EAs who follow 80k hours would benefit more from tools that help them apply these concepts than readings that explain the content to them. My confidence is low, though—I’m going off of some interactions with EA friends & some general theories of behavior change. What do you think about this assessment (that it’s more important to get EAs to apply these concepts in their lives than to explain the key concepts)?
A book seems like a great idea, though I also expect that it’d appeal to the “high-effort” crowd. The more I think about it, the more I think that I really hope some of these become https://www.clearerthinking.org modules :) (in addition to a tool version like the one in the 2017 guide).
I wonder if creating shorter versions might also help you get more feedback, as well as feedback from a different audience. Dismantling the guide into smaller chunks could be helpful for figuring out which parts are most helpful/clear (and perhaps which parts are most worth developing/refining further). Also, if the shorter tools attract a different crowd (i.e., those who aren’t as willing to spend days or more making a career plan), the feedback on the “low-effort” version might differ in meaningful ways from the feedback on the high-effort version.
I’m sure there are plenty of initiatives going on at 80k, and I have no idea where “creating new short modules/interactive tools for career planning” would rank on the list. Nonetheless, I think it’d be a valuable idea (potentially more valuable than long guides or “key points” materials that are more informational than applied), and I’d be excited to see/share them if you decide to pursue them.
Thank you for sharing this post! It’s definitely useful to think about different ways of conceptualizing/measuring well-being. Here’s one part of the post I wasn’t fully convinced by:
“While life satisfaction theories of well-being are usually understood as distinct from desire theories (Haybron, 2016), life satisfaction might instead be taken as an aggregate of one’s global desires: I am satisfied with my life to the extent that it achieves my overall desires about it.”
From a measurement perspective, is there evidence suggesting that people’s judgments of life satisfaction are highly correlated with their achievement of overall desires? I would guess that life satisfaction (at least the way it’s operationalized on Diener’s scale) would only correlate modestly with one’s appraisal of specific desires.
Measurement aside, I still think it may be important to distinguish between “life satisfaction” (i.e., an individual’s subjective appraisal of how well their life is going—which could be influenced by positive affective, desire fulfillment, or other factors) and “satisfaction of global desires.”
The post seems to suggest that “satisfaction of global desires” should be equated with “life satisfaction.” I disagree. It seems like having a construct that refers to “an individual’s subjective appraisal of their life” is useful, and it seems like people are currently using the term “life satisfaction” to refer to this. Perhaps a new term could be created to refer to “satisfaction of global desires” (for instance, maybe we would call this “objective life satisfaction” as opposed to “subjective life satisfaction”, which is what popular life satisfaction scales currently measure).
Note: You don’t have to follow this structure or answer these questions. The point is just to share information that might be helpful/informative to other EAs!
With that in mind, here are my answers:
Where do you work, and what do you do?
I am a PhD student studying psychology at the University of Pennsylvania.
What are things you’ve worked on that you consider impactful?
I’m trying to focus my research on topics that are impactful and neglected (e.g., digital mental health, global mental health).
I co-developed a mental health intervention for Kenyan adolescents and tested it in a randomized controlled trial.
I’ve published papers reviewing smartphone apps for depression and anxiety (here and here) and developed a new method for analyzing digital health interventions (here).
I developed an online mental health intervention designed to teach skills from CBT and positive psychology in <1 hour. We’re currently evaluating it in Kenya, India, and the US.
I recently started performing research on promoting effective giving. I’ve received funding from the EA Meta Fund and from UPenn to support this work. Through the project, we’re aiming to evaluate an intervention that applies psychological theories to improve effective giving. We’ll also be spreading information about EA to 1k+ people, and much of the funding from the project will be donated to effective charities.
What are a few ways in which you bring EA ideas/mindsets to your current job?
I work with many undergraduate students. I try to introduce them to EA concepts (e.g., thinking about importance, neglectedness, and solvability when considering projects) and refer them to EA sources (e.g., 80,000 Hours).
Several of these students have changed their independent study projects as a result of learning about EA (mostly to work on the effective giving project mentioned earlier).
I’ve casually mentioned effective altruism to graduate students and professors I work with, many of whom weren’t familiar with EA previously. (Bringing this up “casually” has become easier to do now that I’m doing research relating to effective giving.)
I’ve been connecting with members of the EA community who are doing similar work, like members of Spark Wave and the Happier Lives Institute.
Thank you, Michael! I think this hypothetical is useful & makes the topic easier to discuss.
Short question: What do you mean by “user error?”
Longer version of the question:
Let’s assume that I fill out weights for the various categories of desire (e.g., health, wealth, relationships) & my satisfaction in each of those areas.
Then, let’s say you erase that experience from my mind, and then you ask me to rate my global life satisfaction.
Let’s now assume there was a modest difference between the two ratings. It is not intuitively clear to me why I should prefer judgment #1 to judgment #2. That is, I think it’s an open question whether the “desire-based life satisfaction judgment” or the “desire-free life satisfaction judgment” is the more “valid” response.
To me, “user error” could mean several things:
The “desire-free” judgment is flawed because the user is not thinking holistically enough or reflecting enough. They are not thinking carefully about what they care about & how those things have actually gone.
The “desire-based” judgment is flawed because the list of desires misses some things that the user actually finds important (i.e., it’s impossible to create a comprehensive list)
The “desire-based” judgment is flawed because the user is not assigning weights properly (i.e., I might report that wealth matters twice as much to my life satisfaction than friendship, but I might be misperceiving my true preferences, which are better reflected in the “desire-free” case).
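To make the distinction concrete, here’s a minimal sketch of the two judgments being compared (all domains, weights, and ratings are hypothetical illustrations I’m making up for this comment, not anything from the thread):

```python
# Hypothetical "desire-based" judgment: the user weights life domains by
# self-reported importance, then rates satisfaction in each domain (0-10).
weights = {"health": 0.4, "wealth": 0.2, "relationships": 0.4}  # sum to 1
domain_satisfaction = {"health": 7, "wealth": 5, "relationships": 9}

# Aggregate into a single desire-based score (a weighted average).
desire_based = sum(weights[d] * domain_satisfaction[d] for d in weights)

# Hypothetical "desire-free" judgment: a single holistic 0-10 rating,
# elicited without the weighting exercise (and, in the thought experiment,
# after the weighting exercise has been erased from memory).
desire_free = 6.5

# A modest gap between the two is exactly the "user error" in question:
# which of these two numbers is the more valid report?
gap = desire_based - desire_free
```

In this toy case the weighted score comes out to 7.4, a 0.9-point gap from the holistic rating; the open question is which of the two numbers (if either) we should trust.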
In other words, if we could eliminate these forms of user error, I would probably agree with you that this distinction is arbitrary. In practice, though, I think these “desire-based” and “desire-free” versions of life satisfaction ought to be considered distinct (albeit I’d expect them to be modestly correlated). It’s also not clear to me that the “desire-based” judgment should be considered better (i.e., more valid). And even if it should be, I think I’d still want to know about the “desire-free” judgment as well.
Furthermore, when making decisions, I would probably want to see both judgments. For example, let’s assume:
Intervention A improves “desire-based life satisfaction judgments” by 15% and “desire-free life satisfaction judgments” by 5%
Intervention B improves “desire-based life satisfaction judgments” by 10% and “desire-free life satisfaction judgments” by 10%
Intervention C improves “desire-based life satisfaction judgments” by 15% and “desire-free life satisfaction judgments” by 15%.
I would prefer Intervention C over intervention A, even though they both improve “desire-based satisfaction judgments” by the same amount. I also think reasonable people would disagree when comparing Intervention A to Intervention B.
For these reasons, I wonder if it’s practically useful to consider “desire-based” and “desire-free” life satisfactions as separate constructs.
I read this post before I encountered this comment. I didn’t recall seeing anything unkind or uncivil. I then re-read the post to see if I missed anything.
I still haven’t been able to find anything problematic. In fact, I notice a few things that I really appreciate from Mark. Some of these include:
Acknowledging explicitly that he’s sometimes rude to his opponents (and explaining why)
Acknowledging certain successes of those he disagrees with (e.g., “I’ll give this win to Tristan and Roose.”)
Citing specific actions/quotes when criticizing others (e.g., the quote from the Joe Rogan podcast)
Acknowledging criticisms of his own work
Overall, I found the piece to be thoughtfully written & in alignment with the community guidelines. I’m also relatively new to the forum, though, so please point out if I’m misinterpreting the guidelines.
I’ll also add that I appreciate/support the guideline of “approaching disagreements with curiosity” and “aim to explain, not persuade.” But I also think that it would be a mistake to overapply these. In some contexts, it makes sense for a writer to “aim to persuade” and approach a disagreement from the standpoint of expertise rather than curiosity.
Like any post, I’m sure this post could have been written in a way that was more kind/curious/community-normsy. But I’m struggling to see any areas in which this post falls short. I also think “over-correcting” could have harms (e.g., causing people to worry excessively about how to phrase things, deterring people from posting, reducing the clarity of posts, making writers feel like they have to pretend to be super curious when they’re actually trying to persuade).
Denise, do you mind pointing out some parts of the post that violate the writing guidelines? (It’s not your responsibility, of course, and I fully understand if you don’t have time to articulate it. If you do, though, I think I’d find it helpful & it might help me understand the guidelines better.)
Thank you for this post, Mark! I appreciate that you included the graph, though I’m not sure how to interpret it. Do you mind explaining what the “recommendation impression advantage” is? (I’m sure you explain this in great detail in your paper, so feel free to ignore me or say “go read the paper” :D).
The main question that pops out for me is “advantage relative to what?” I imagine a lot of people would say “even if YouTube’s algorithm is less likely to recommend [conspiracy videos/propaganda/fake news] than [traditional media/videos about cats], then it’s still a problem! Any amount of recommending [bad stuff that is harmful/dangerous/inaccurate] should not be tolerated!”
What would you say to those people?
Thank you, Denise! I think this gives me a much better sense of some specific parts of the post that may be problematic. I still don’t think this post, on balance, is particularly “bad” discourse (my judgment might be too affected by what I see on other online discussion platforms—and maybe as I spend more time on the EA forum, I’ll raise my standards!). Nonetheless, your comment helped me see where you’re coming from.
I’ll add that I appreciated that you explained why you downvoted, and it seems like a good norm to me. I think some of the downvotes might just be people who disagree with you. However, I also think some people may be reacting to the way you articulated your explanation. I’ll explain what I mean below:
In the first comment, it seemed to me (and others) like you assumed Mark intentionally violated the norms. You also accused him of being unkind and uncurious without offering additional details.
In the second comment, you linked to the guidelines, but you didn’t engage with Mark’s claim (“I think this was kind and curious given the context.”). This seemed a bit dismissive to me (akin to when people assume that a genuine disagreement is simply due to a lack of information/education on the part of the person they disagree with).
In the third comment (which I upvoted), you explained some specific parts of the post that you found excessively unkind/uncivil. This was the first comment where I started to understand why you downvoted this post.
To me, this might explain why your most recent comment has received a lot of upvotes. In terms of “what to make of this,” I hope you don’t conclude “users should not explain why they downvote.” Rather, I wonder if a conclusion like “users should explain why they downvote comments, and they should do so in ways that are kind & curious, ideally supported by specific examples when possible” would be accurate. Of course, the higher the bar to justify a downvote, the fewer people will do it, and I don’t think we should always expect downvote-explainers to write up a thorough essay on why they’re downvoting.
Finally, I’ll briefly add that upvotes/downvotes are useful metrics, but I wouldn’t place too much value in them. I’m guessing that upvotes/downvotes often correspond to “do I agree with this?” rather than “do I think this is a valuable contribution?” Even if your most recent comment had 99 downvotes, I would still find it helpful and appreciate it!
What are the things you look for when hiring? What are some skills/experiences that you wish more EA applicants had? What separates the “top 5-10%” of EA applicants from the median applicant?
Super exciting work! Sharing a few quick thoughts:
1. I wonder if you’ve explored some of the reasons for effect size heterogeneity in ways that go beyond formal moderator analyses. In other words, I’d be curious if you have a “rough sense” of why some programs seem to be so much better than others. Is it just random chance? Study design factors? Or could it be that some CT programs are implemented much better than others, and there is a “real” difference between the best CT programs and the average CT programs?
This seems important because, in practice, donors are rarely deciding between funding the “average” CT program or the “average” [something else] program. Instead, they’d ideally want to compare the “best” CT program to the “best” [something else] program. In other words, when I go to GiveWell, I don’t want to know about the “average” Malaria program or the “average” CT program—I want to know the best program for each category & how they compare to each other.
This might become even more important in analyses of other kinds of interventions, where the implementation factors might matter more. For instance, in the psychotherapy literature, I know a lot of people are cautious about making too many generalizations based on “average” effect sizes (which can be weighed down by studies that had poor training procedures, recruited populations that were unlikely to benefit, etc.).
With this in mind, what do you think is currently the “best” CT program, and how effective is it?
2. I’d be interested in seeing the measures that the studies used to measure life satisfaction, depression, and subjective well-being.
I’m especially interested in the measurement of life satisfaction. My impression is that the most commonly used life satisfaction measure (this one) might lead to an overestimation of the relationship between CTs and life satisfaction. I think two of the five items could prime people to think more about their material conditions than their “happiness.” Items listed below:
The conditions of my life are excellent (when people think about “conditions,” I think many people might think about material/economic conditions moreso than affective/emotional conditions).
So far I have gotten the important things I want in life (when people think about things they want, I think many people will consider material/economic things moreso than affective/emotional things).
I have no data to suggest that this is true, so I’m very open to being wrong. Maybe these don’t prime people toward thinking in material/economic terms at all. But if they do, I think they could inflate the effect size of CT programs on life satisfaction (relative to the effect size that would be found if we used a measure of life satisfaction that was less likely to prime people to think materialistically).
Also, a few minor things I noticed:
1. “The average effect size (Cohen’s d) of 38 CT studies on our composite outcome of MH and SWB is 0.10 standard deviations (SDs) (95% CI: 0.8, 0.13).”
I believe there might be a typo here—was it supposed to be “0.08, 0.13”?
2. I believe there are two “Figure 5”s—the forest plot should probably be Figure 6.
Best of luck with next steps—looking forward to seeing analyses of other kinds of interventions!
What a great opportunity! I wonder if people at SparkWave (e.g., Spencer Greenberg), Effective Thesis, or the Happier Lives Institute would have some ideas. All three organizations are aligned with EA and seem to be in the business of improving/applying/conducting social science research.
Also, I have no idea who your advisor is, but I think a lot of advisors would be open to having this kind of conversation (i.e., “Hey, there’s this funding opportunity. We’re not eligible for it, but I’m wondering if you have any advice...”). [Context: I’m a PhD student in psychology at UPenn.]
If that’s not a good option, you could consider asking your advisor (and other academics you respect) if they know about any metascience/open science organizations that are highly effective [without mentioning anything about your relative and their interest in donating].
Finally, it’s not clear to me if the donor is only interested in metascience or if they would also be open to funding “basic science” projects. “Basic science” is broad enough that I imagine it could open up a lot of alternative paths (many of which might be more explicitly EA-aligned than metascience). Examples include basic scientific research on effective giving, animal advocacy, mental health, AI safety, etc. Do you have a sense of how open to “basic science” your relative is, or was basic science just meant as a synonym for metascience?
In any case, good luck with this! :)
I’d be curious to learn more about the “types” of EAs that might be best-suited for this work, or how the “EA perspective” could enhance ongoing efforts.
As it stands, the case for scale (i.e., the magnitude of the problem) is very clear. However, I think scale is usually the strongest part of most cause area analyses (i.e., there are a lot of really big problems and it’s usually not too difficult to articulate the bigness of those problems, especially using words rather than models). I think the role that EAs would play is less clear (as has been reflected in other comments relating to neglectedness). So, I wonder:
Are there some clear gaps or limitations in the current anti-War-on-drugs movement that could be filled by EA perspectives/skills? (As an example, one of the commentators emphasized that global efforts to legalize drugs may be neglected, and EAs who have skills/interests related to global advocacy might be especially helpful).
I think the steelman of the neglectedness argument would be something like: “The less neglected a cause is, the less likely it is that we’d be able to help those already working on it do things slightly better.”
This is both because (a) it is harder to change the direction of the movement and (b) it is harder to genuinely find meaningful ways to improve the movement.
Regarding (b), I wonder if there are some specific limitations of the current anti-War-on-Drugs movement that would match the skills/interests of (some) EAs.
Terrific overview! I’ll offer some feedback with the hope that some of it may be helpful:
Big Picture Thoughts
In general, I thought the report did a great job summarizing some of the major themes/ideas that are fairly well-established in global mental health. I wonder if it could be useful to include a section on more experimental/novel/unestablished/speculative ideas. Sort of like a “higher risk, higher potential reward” section.
Relatedly, I’d be interested in seeing bolder and more specific recommendations for future work. As an example, Box 2 (“Promising Research Directions”) lists important goals, but they’re too broad to really know how to act on (e.g., “improve treatments and expand access to care.”). I’d be more curious to see HLI’s subjective opinions on the most impactful next steps (more similar to the list of project ideas that you have, rather than the goals in Box 2).
I’d love to see more analysis on key issues/controversies (see last section for examples).
Potentially useful points that I didn’t see in the report:
A lot of suffering is caused by subclinical/subsyndromal mental health problems. In the case of mood disorders, “subsyndromal symptoms are impairing, predict syndrome onset and relapse, and account for more doctor’s visits and suicide attempts than the full syndromes” (Ruscio, 2019). This point is especially important because there are debates about how funding should be allocated (e.g., how much should we spend on treatments that target people with diagnosable disorders vs. mental health promotion strategies and prevention programs that reach broader audiences?)
Recent work has suggested that the “latent disease” view of depression (and other mental disorders) may be flawed (e.g., Borsboom, 2017). A related body of work has suggested that some depressive symptoms may be more impairing than others (e.g., Fried & Nesse, 2014). This could have important implications for measuring the effectiveness of interventions—e.g., estimating SWB weights for each symptom, rather than using sum-scores.
The evidence on task-sharing/task-shifting is strong, so I understand why you spent a lot of space covering it. At the same time, it could be useful to spend more time discussing some of the more novel approaches. Some examples include unguided self-help interventions and single-session interventions (Schleider & Weisz, 2017). Although the evidence for guided interventions and longer interventions is stronger, unguided interventions are substantially cheaper. This might make them more cost-effective, even if longer/guided interventions are more effective (discussed further in this preprint).
The digital interventions studied in meta-analyses and reviews are very different than those that have been disseminated widely. We know a lot about the effectiveness of digital interventions developed by professors, but much less about the effectiveness of Headspace, Calm, and other popular apps (Wasil et al., 2019).
There are some important gaps in the digital mental health space: popular interventions tend to focus on relaxation/mindfulness and rarely include other empirically supported treatment elements (Wasil et al., 2020). This reminds me that I really should write up a digital mental health forum post at some point :)
Examples of questions/controversies that HLI could address:
Broadly, what does HLI see as some of the most important open questions in the mental health space?
What content should be included in interventions? Does HLI believe that specific elements should be the focus of interventions? Or are common factors driving effects?
Which delivery formats should be used? Is HLI optimistic or pessimistic about unguided self-help interventions? Are they likely to be more cost-effective than task-sharing interventions?
Does HLI see mental disorders as diseases, networks of symptoms, or something else? Do you think this matters, or not really?
Broadly, what does HLI think that a lot of people interested in mental health “get wrong” or “don’t yet know” about the most cost-effective ways to make an impact?
How long do the effects of interventions last? How should the uncertainty around this estimate affect our cost-effectiveness calculations? (Assuming that the effects of an intervention will last <1 year seems like it would yield radically different conclusions than assuming it would last 1-3 years, 3+ years, 10+ years, 30+ years, etc.)
I hope that some of this was helpful & I’m looking forward to seeing future reports!
Others who attended: What were some of your takeaways? Were there any parts of the summit that stood out for you? And, perhaps most importantly, did the summit get you to think or act differently?