There can be highly neglected solutions to less-neglected problems
This post is based on an old draft that I wrote years ago but never quite finished. The specific references might be a bit outdated, but I think the main point is still relevant.
Thanks to Amber Dawn Ace for turning my draft into something worth posting.
-----------------------------
EAs are often interested in how ‘neglected’ problems are: that is, how much money is being spent on solving them, and/or how many people are working on them. 80,000 Hours, for example, prioritizes pressing problems using a version of the importance, tractability, neglectedness (ITN) framework, where a problem’s ‘neglectedness’ is one of three factors that determine how much we should prioritize it relative to other problems.
80,000 Hours defines neglectedness as:
“How many people, or dollars, are currently being dedicated to solving the problem?”
I suggest that a better definition would be:
“How many people, or dollars, are currently being dedicated to this particular solution?”
I think it makes more sense to assess solutions for neglectedness, rather than problems. Sometimes, a problem is not neglected, but effective solutions to that problem are neglected. A lot of people are working on the problem and a lot of money is being spent on it, but in ineffective (or less effective) ways.
Here’s the example that 80,000 Hours uses to illustrate neglectedness:
“[M]ass immunisation of children is an extremely effective intervention to improve global health, but it is already being vigorously pursued by governments and several major foundations, including the Gates Foundation. This makes it less likely to be a top opportunity for future donors.”
Note that mass immunisation of children is a solution, not a problem. But it makes sense for 80K to think about this: we can imagine a world in which charities spent just as much money on preventing or curing diseases, but they spent it on less effective solutions. In that world, even though global disease would not be a neglected problem, mass vaccination would be a neglected (effective) solution, and it would make sense for donors to prioritize it.
There’s something fractal about solutions and problems. Every solution to a problem presents its own problem: how best to implement it. Mass vaccination is an effective solution to the problem of disease. But then there’s the problem of ‘how can we best achieve mass vaccination?’. And solutions to that problem—for example, different methods of vaccine distribution, or interventions to address vaccine skepticism—pose their own, more granular problems.
Here’s another example: hundreds of millions of dollars are spent each year on preventing nuclear war and nuclear winter. However, only a small fraction of this is spent on interventions intended to mitigate the negative impacts of nuclear winter—for example, ALLFED’s work researching food sources that are not dependent on sunlight. But in the event that there is a nuclear winter, these sorts of interventions will be extremely effective: they’ll enable us to produce much more food, so far fewer people will starve. Mitigation interventions are thus a highly neglected class of solutions for a problem—nuclear war—that is comparatively less neglected.
As another example: climate change is not a neglected problem. But different solutions get very different amounts of attention. One of the most effective interventions seems to be preserving the rainforests (and other bodies of biomass), yet only a lonely few organizations (e.g. Cool Earth and the Carbon Fund) are working on this.
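The arithmetic behind this is easy to sketch. Below is a toy ITN-style score, with all numbers invented purely for illustration (neglectedness approximated as the reciprocal of current annual funding, roughly in the spirit of 80,000 Hours' framework):

```python
def itn_score(importance, tractability, dollars_per_year):
    """Crude ITN product: higher means a better marginal opportunity.
    Neglectedness is approximated as 1 / current annual funding."""
    return importance * tractability / dollars_per_year

# Problem-level view: the whole problem is well funded, so it scores low.
# All figures below are hypothetical, chosen only to illustrate the point.
problem_score = itn_score(importance=100, tractability=0.5, dollars_per_year=600e9)

# Solution-level view: same importance and tractability, but almost all of the
# funding goes to one crowded solution, leaving another solution neglected.
crowded_score = itn_score(100, 0.5, 590e9)
neglected_score = itn_score(100, 0.5, 10e9)

print(neglected_score / problem_score)  # the neglected solution scores ~60x higher
```

Scoring only at the problem level averages these together, which is exactly how an effective-but-underfunded solution gets hidden inside a "non-neglected" problem.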
Evaluating the neglectedness of solutions has its own problems. It means that neglectedness no longer lines up so neatly with solvability/tractability and scale/importance, the two other components of the ITN framework. There are more solutions than problems, so it’s harder to list them. And how do you identify all the solutions to a problem? What about solutions no-one has thought of yet?
However, I think it’s still worth focussing more on neglected solutions, if it means fewer good solutions fall through the cracks. Prioritizing problems doesn’t help much unless we also prioritize the most effective solutions to those problems.
I don’t think that ITN is a bad framework; I’m just not sure that it should dominate the discussion of cause prioritization in the way that it does. I haven’t seen any alternatives or updates, despite the fact that the ITN framework has been around since at least 2016 (when I first became involved in EA). Even if the critique in this post turns out to be wrong, I’d still like to see more challenges to ITN, or suggestions of alternative frameworks. The movement might be healthier if there were multiple competing models for how to assess the priority of problems.
I also think that EA sometimes dismisses categories of problems out of an assumption that most solutions currently proposed to those problems are either not neglected or have a low expected value, despite the likelihood that high-value opportunities are lurking amidst the chaff.
After all, EA’s original focus was sorting through the labyrinth of ineffective direct global health and poverty reduction interventions. In theory, we should now be sorting through other broad fields like public policy, climate change, and so on, to find interventions comparable to the best direct aid/global development opportunities.
In the climate change realm, environmental law groups like Earthjustice appear on paper to be competitive with top GiveWell nonprofits. Much more thorough research would be needed, but napkin calculations seem promising.
I think those sound like good suggestions. What’s your bottleneck for doing this research?
There needs to be more willingness by grantmakers and other funders to bear the search costs of new ideas. There is a strong emphasis on skepticism within EA, which is great, but it usually translates to “we should not fund this because of perceived issues X, Y, and Z, or uncertainty regarding the benefits of A, B, and C”, when these issues and benefits are better addressed through empirical testing than through a skeptic’s intuitions. We need a community that will bear the discovery costs of promising interventions, but this seldom happens unless the proponent of the idea already has clout and/or connections within EA.
If we don’t have the information to evaluate the effectiveness of a possible solution, the answer is not to discard it, but rather to evaluate the information costs and the potential value associated with the array of reasonably possible outcomes.
What would be helpful, if this doesn’t already exist, would be aggregating sets of potential solutions, listing the resources currently directed toward evaluating their EV, determining the bottlenecks (often money) in assessing EV, and making reasonable estimates of potential exploitation value given various hypothesized EVs. Then those with resources in EA could ensure that promising paths have the resources to be explored, and we can fully exploit the best solutions.
I am rather pessimistic about EA’s prospects for this.
Why are you pessimistic? I assume it has something to do with perceived power dynamics or incentive misalignment?
It seems like a crowdsourcing mechanism for potential solutions, plus a small team to manage the data and make estimates of expected info cost, actual cost, and impact, would be fairly simple to implement.
Maybe one could even lean harder into crowdsourcing the info/actual costs and impact by operating a sort of prediction market on it, so if a solution is indeed researched at some point and your prediction of the info and actual estimated costs was correct/close, you get points and higher weighting in future crowdsourced estimates?
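As a rough sketch of what that accuracy-weighted crowdsourcing could look like (all names and dollar figures below are hypothetical, and the weighting rule is just one simple choice among many):

```python
def pooled_estimate(estimates, weights):
    """Weighted average of forecasters' cost estimates."""
    total = sum(weights.values())
    return sum(est * weights[f] for f, est in estimates.items()) / total

def update_weights(weights, estimates, realized_cost):
    """Shrink each forecaster's weight in proportion to their relative error,
    then renormalize so the weights sum to 1."""
    new = {}
    for f, est in estimates.items():
        rel_error = abs(est - realized_cost) / realized_cost
        new[f] = weights[f] / (1.0 + rel_error)
    total = sum(new.values())
    return {f: w / total for f, w in new.items()}

# Hypothetical forecasters guessing the info cost of researching a solution.
weights = {"alice": 1 / 3, "bob": 1 / 3, "carol": 1 / 3}
estimates = {"alice": 90_000, "bob": 300_000, "carol": 120_000}

print(pooled_estimate(estimates, weights))  # naive equal-weight average

# Suppose the research is eventually done and the realized cost is $100k:
weights = update_weights(weights, estimates, realized_cost=100_000)
# alice's guess was closest, so her weight rises for future pooled estimates.
```

A real prediction market would add payments and incentives on top of this, but the core loop is the same: score past estimates against realized outcomes, and let accuracy earn influence.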
Although I don’t necessarily endorse it, the argument against investing in new cause areas that aren’t a significant existential risk within a few decades is straightforward. Therefore, any cause prioritization project should address this objection with a counter-rebuttal, or else pessimism is warranted.
Good point. I’d imagine that this objection stems from the perspective “basically all the highest utility/dollar interventions are in x-risk, but continuing global health interventions costs us little because we already have those systems in place, so it’s not worth abandoning them.”
From this perspective, one might think that even maintaining existing global health interventions is a bad util/dollar proposition in a vacuum (as those resources would be better spent on x-risk), but for external reasons, splintering EA is not worth pressuring people to abandon global health.
Let’s imagine splintering EA to mean nearly only x-riskers being left in EA, and maybe a group of dissidents creating a competing movement.
These are the pros for x-riskers post-split:
Remaining EAs are laser-focused on x-risk, and perhaps more people have shifted their focus from partly global health and partly x-risk to fully x-risk than vice versa. (More x-risk EAs, and x-risk EAs are more effective.)
These are the cons for x-riskers post-split:
Remaining EAs have less broad public support and less money going into “general EA stuff” like community building and conferences, because some of the general EA money and influence was coming from people who mostly cared about global health. As a related consequence, it becomes harder to attract people initially interested in global health and convert them into x-riskers. (Fewer x-risk EAs, and x-risk EAs are less effective.)
It seems that most x-riskers think the cons outweigh the pros, or a split would have occurred—at least there would be more talk of one.
The thing is, refraining from adding climate change as an EA focus would likely have a similar pro/con breakdown to removing global health as an EA focus:
Pros: No EAs are persuaded to put money/effort that might have gone to x-risk into climate change.
Cons:
Loss of utils due to potentially EA-compatible people who expend time or money on climate change prevention/mitigation not joining the movement and adopting EA methods.
Loss of potential general funding and support for EA from people who think that the top climate change interventions can compete with the util/dollar rates of top global health and x-risk interventions, plus the hordes of people who aren’t necessarily thinking in terms of utils/dollar yet and just instinctively feel climate change is so important that a movement ignoring it can’t possibly know what they’re doing. Even if someone acting on instinct rather than utils/dollar won’t necessarily improve the intellectual richness of EA, their money and support would be pretty unequivocally helpful.
These are basically the same pros and cons to kicking out global health people, plus an extra cost to not infiltrating another cause area with EA methods.
Therefore, I would argue that any x-risker that does not want to splinter EA should also support EA branching out into new areas.
I think this is a totally reasonable argument, and you can add a piece that’s about personal fit. 80,000 Hours has pretty arbitrarily guesstimated how heavily one ought to weigh personal fit in career choice, slapped some numbers on it, and published it in a prioritization scale that people sometimes take too seriously, and that doesn’t actually make sense if you look at what some of the numbers imply (i.e. everybody should work on AI safety even if they’re an actively bad fit).
But if you start by weighing personal fit more highly, there becomes a case for saying “ok, I happen to care a lot about climate change, so even if it’s not the highest priority cause, it’s still what I personally ought to work on. And I need guidance about how to move the needle on climate change.” And if you start from there you can still do a whole EA solutions prioritization analysis just taking for granted that it will be 100% climate change focused.
Personally I suspect that’s a good solution—we have great ideas about how to prioritize stuff but I’m not very optimistic about our ability to fundamentally change what causes people work on. Like maybe people should work on AI x risk even if they’re an actively bad fit, but they won’t, and we shouldn’t waste time trying to convince them to. Instead we should just create a great on-ramp for people who are interested in AI safety and a second great on-ramp for people who are interested in climate change. Figure out where people are and are not flexible in what they work on and target that.
I’m pretty new to the movement and have never really done research at a high formal level, so I suppose my bottleneck is expertise. Is there a link somewhere to a guide for doing research at the level of detail expected?
Welcome to EA Sam!
I actually don’t know; I’ve never done that type of research either. I mostly think about AI risk.
But I did scroll through the list of EA Forum tags for you and found these:
Research—EA Forum (effectivealtruism.org)
Global priorities research—EA Forum (effectivealtruism.org)
Independent research—EA Forum (effectivealtruism.org)
Research methods—EA Forum (effectivealtruism.org)
Research training programs—EA Forum (effectivealtruism.org)
Maybe there’s something helpful in there?
I found your systems change post
A Newcomer’s Critique of EA—Underprioritizing Systems Change? - EA Forum (effectivealtruism.org)
I have to admit I only read the title and a few sentences here and there. But you are right that EAs are not much into systems change. Part of this is a founder effect. But I also believe part of it is due to a misuse of the neglectedness framework. Systems change is basically politics, which is not a neglected area. But my prior is that there are lots of neglected interventions within it.
For example, this is super cool:
Audrey Tang on what we can learn from Taiwan’s experiments with how to do democracy − 80,000 Hours (80000hours.org)
I remember a few years ago, there seemed to be a small but growing interest in systems change in EA. I found the Facebook group, but it’s mostly dead now. Scrolling back, it seems like it was taken over by memetic warfare rather than discussion, sadly.
Effective Altruism: System Change | Facebook
Maybe the real reason EA has not been able to have an ongoing systems change discussion is because this is always how it ends?
Related:
Politics is the Mind-Killer—LessWrong
LW is sort of like a sister community to EA, with lots of overlap in membership and influence going both ways. I believe that the above post is part of the founder effect that has kept EA away from politics, but I also think its arguments are not wrong.
I don’t believe Facebook’s structure and people’s prior associations with the quality of discussion that occurs on Facebook would enable rational debate at the level of the EA forum, but on any platform, I would agree that if a line in the sand is crossed and discussions of specific policies become conceived of as “Politics”, and tribalism creeps in, the results are usually quite bad.
I can’t imagine that political tribalism would fly on the EA forum, although of course it is necessary to be vigilant to maintain that. Indeed, if I were to rewrite that post today I would revise it to express much less confidence in a particular view of global systems, and focus more on the potential for thinking about global systems to offer opportunities for large impacts.
I think there is evidence EA is capable of doing this without damaging epistemics. It is currently widely accepted to talk about AI or nuclear regulations that governments might adopt, and I haven’t seen anything concerning in those threads. My argument is essentially just that policy interventions of high neglectedness and tractability should not be written off reflexively.
Earthjustice and other law groups (there’s a YIMBY Law group as well that is probably less impactful but at least worth looking into) are nice because they improve de facto systems, but don’t need to engage with the occasional messiness of overt system change. Instead, they ensure local governments follow the laws that are already in place.
This does not matter much for the overall context of this post, but this is very wrong empirically—almost nothing is as well-funded as natural solutions within climate philanthropy (see e.g. founderspledge.com/landscape for more) and, IIRC, conservation philanthropy is larger than all of climate philanthropy.
Here’s a review of ITN critiques I put together a couple years ago. I just dumped it in an editable Google doc. It’s set up to alert me if anyone makes changes so I can import them into the main post.
I think the main issue is the conceptual distinction between a “cause” and an “intervention,” or what you’re calling here a “solution.” Originally, I think they were using ITN as a quick first-pass heuristic for considering different cause areas on the scale of “global health” or “X-risk.” But lots of people also use it to look at specific interventions. As you point out, it’s totally possible to find an ITN solution within a non-ITN cause area. I think the idea of applying ITN to cause areas is to find issues where there are a lot of ITN solutions to be found in a probabilistic sense.
One reason, I think, to be a little suspicious of a “neglected” solution in a non-neglected cause area is that there’s more reason to think the particular solution is neglected for a reason. That’s not an absolute, just a heuristic. For example, I’m in biotech, and I work with a particular technology for molecular recognition called “aptamers.” I’m interviewing for a job at a startup that works with this technology. If you look at them as an aptamer biosensor company, they have little to no competition. But if you look at them as an assay company, they face massive competition. Thiel makes this point eloquently in Zero to One. It’s important to be aware enough of the dimensions of the issue to understand when a potential solution is truly neglected versus when it’s just lost among a sea of alternatives.
A couple of comments that might help readers of the thread separate problems and solutions:
1) If you’re aiming to do good in the short-term, I think this framework is useful:
I think problem effectiveness varies more than solution effectiveness, and is also far less commonly discussed in ordinary discourse about doing good, so it makes sense for EA to emphasise it a lot.
However, solution effectiveness matters a lot too. It seems plausible that EAs neglect it too much.
80k covers both in the key ideas series: https://80000hours.org/articles/solutions/
If you can find a great solution to a second tier problem area, that could be more effective than working on the average solution in a top tier area.
This circumstance could arise if you’re comparing a cause with a lot of effectiveness-focused people working on it (where all the top solutions are taken already) vs. a large cause with lots of neglected pockets; or due to personal fit considerations.
Personally, I don’t think solution effectiveness varies enough to make climate change the top thing to work on for people focused on existential risk, but I’d be keen to see 1-5% focused on the highest-upside and most neglected solutions to climate change.
2) If you’re doing longer-term career planning, however, then I think thinking in terms of specific solutions is often too narrow.
A cause is broad enough that you can set out to work on one in 5, 10 or even 20 years, and usefully aim towards it. But which solutions are most effective is normally going to change too much.
For longer-term planning, 80k uses the framework: problem effectiveness x size of contribution x fit
Size of contribution includes solution effectiveness, but we don’t emphasise it; the emphasis is on finding a good role or aptitude instead.
3) Causes or problem areas can just be thought of as clusters of solutions.
Causes are just defined instrumentally, as whatever clusters of solutions are useful for the particular type of planning you’re doing (because they require common knowledge and connections).
E.g. 80k chooses causes that seem useful for career planning; OP chooses causes based on what it’s useful for their grantmakers to specialise in.
You can divide up the space into however many levels you like e.g.
International development → global health → malaria → malaria nets → malaria nets in a particular village.
Normally we call the things on the left ‘problem areas’ and the things on the right ‘solutions’ or ‘interventions’, but you can draw the line in different places.
Narrower groups let you be more targeted, but are more fragile for longer-term planning.
4) For similar reasons, you can compare solutions with similar frameworks to cause areas, including by using INT.
I talk more about that here: https://80000hours.org/articles/solutions/
Strong agree, though I hadn’t thought about it this exact way before. IMO the EA movement actually came from this conception - ‘global health’ is not neglected, but ‘schistosomiasis treatments’ and ‘malaria nets’ are. So it’s always seemed weird to me when EAs dismiss something like climate change work on the grounds that it’s ‘not neglected’. [ETA I just saw that Sam Battis said the same thing. Somehow in 15ish years in the movement, I don’t think I’ve ever heard anyone else say this!]
Additionally, I think most solutions follow an S-curve in terms of amount of resources put in vs number of people actually helped. Doing research on economic growth policies, RCTs, energy technology etc is all basically worthless until you have the capacity to deploy something at scale.
Helping the global poor is neglected, and that accounts for most of bednets’ outperformance. GiveDirectly, just giving cash, is thought by GiveWell/GHW to be something like 100x better on direct welfare than rich-country consumption (although indirect effects reduce that gap), vs 1000x+ for bednets. So most of the log gains come from doing anything with the global poor at all. Then bednets get a lot of their gains as positive externalities (protecting one person also protects others around them), and you’re left with a little bit from being more confident about bednets than some potential users, based on more investigation of the evidence (as with vaccines), and some effects like patience/discounting.
Really exceptional intervention-within-area picks can get you a multiplier, but it’s hard to get to the level of difference you see on cause selection, and especially so when you compare attempts to pick out the best in different causes.
Official development assistance is nearly $200 billion annually. I think if that’s going to be called ‘neglected’, the term needs some serious refinement.
People have compared various development interventions like antiretroviral drugs for HIV, which have the same positive externalities and (at least according to a presentation Toby Ord gave a few years ago) still something like a 100-fold difference in expected outcomes from AMF.
$200B includes a lot of aid aimed at other political goals more than humanitarian impact, with most of a billion people living on less than $700/yr, while the global economy is over $100,000B and cash transfer programs in rich countries total many trillions of dollars. That’s the neglectedness that bumps up global aid interventions relative to local rich-country help for the local relative poor.
You can get fairly arbitrarily bad cost-effectiveness in any area by taking money and wasting it on things that generate less value than the money, e.g. spending 99.9% on digging holes and filling them in, and 0.1% on GiveDirectly. But just handing the money over to the poor is a relevant, attainable baseline.
Calling an area neglected because a lot of money is spent badly sounds like a highly subjective evaluation that’s hard to turn into a useful principle. Sure, $200B annually is a small proportion of the global economy, but so is almost any cause area you can describe. From a quick search, the World Bank explicitly spends slightly more than a tenth of that on climate change, one of the classically ‘non-neglected’ evaluands of EA. It’s hard to know how to compare these figures, since they obviously omit a huge number of other projects, but I doubt the WB constitutes much less than 10% of explicit climate spend. This article advocates a ~$180bn annual budget, so it’s hard to believe it’s not currently less than that.
Conversely, Alphabet alone had operating expenses for 2022 of $203B, and they’re fairly keen not to end the world, so you could view all of that as AI safety expenditure.
So by what principle would you say AI’s neglectedness > global development’s neglectedness > climate change’s neglectedness?
In this 2022 ML survey, the median credence on extinction-level catastrophe from AI is 5%, with 48% of respondents giving 10% or more. Some generalist forecaster platforms put the numbers significantly lower; some forecasting teams or researchers with excellent forecasting records and more knowledge of the area put them higher (with, I think, the tendency being for more information to yield higher forecasts, which matches my own expectation). This scale looks like hundreds of millions of deaths or equivalent this century to me, although certainly many disagree. The argument below goes through with 1%.
Expected damages from climate over the century in the IPCC and published papers (which assume no drastic technological advance, which is in tension with forecasts about AI development) give damages of several percent of world product and order 100M deaths.
Global absolute poverty affects most of a billion people, with larger numbers somewhat above those poverty lines, and life expectancy many years shorter than wealthy country averages, so it gets into the range of hundreds of millions of lives lost equivalent. Over half a million die from malaria alone each year.
So without considering distant future generations or really large populations or the like, the scales look similar to me, with poverty and AI ahead of climate change but not vastly (with a more skeptical take on AI risk, poverty ahead of the other two).
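One way to make that comparison concrete is to divide the rough scale estimates above by the annual spending figures mentioned in this thread. Every number below is a loose, illustrative midpoint rather than a researched estimate:

```python
# Back-of-envelope scale-per-dollar comparison, using rough figures quoted
# in this thread. All numbers are loose illustrations, not researched estimates.
areas = {
    # (expected "lives lost equivalent" this century, annual spending in $)
    "AI safety":          (3e8, 3e8),    # ~5% catastrophe credence; ~$300M/yr on safety work
    "global development": (3e8, 2e11),   # absolute poverty; ~$200B/yr official development assistance
    "climate change":     (1e8, 1.8e11), # IPCC-style damages; ~$180B/yr advocated budget
}

ratios = {name: scale / spend for name, (scale, spend) in areas.items()}
for name in sorted(ratios, key=ratios.get, reverse=True):
    print(f"{name:20s} {ratios[name]:.2e} lives-equivalent per annual dollar")
```

On these (very contestable) inputs the ordering comes out AI safety > global development > climate change, which is one candidate answer to the question of what principle ranks the three areas' neglectedness.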
“Conversely, Alphabet alone had operating expenses for 2022 of $203B, and they’re fairly keen not to end the world, so you could view all of that as AI safety expenditure.”
How exactly could that be true? Total FTEs working on AI alignment, especially scalable alignment are a tiny, tiny fraction. Google Deepmind has a technical safety team with a few handfuls of people, central Alphabet has none as such. Safety teams at OAI and Anthropic are on the same order of magnitude. Aggregate expenditure on AI safety is a few hundreds of millions of dollars, orders of magnitude lower.
I’m not sure this is super relevant to our core disagreement (if we have one), but how are you counting this? Glancing at that article, it looks like a pessimistic take on climate change’s harm puts excess deaths at around 10m per year, and such damage would persist much more than 10 years.
But I don’t see why we’re talking about scale. Are you defining neglectedness as a ratio of <people potentially killed in worst case>/<dollars spent>?
Because coders who don’t work explicitly on AI alignment still spend their working lives trying to get code to do what they want. The EA/rat communities tend not to consider that ‘AI safety’, but it seems prejudicial not to do so in the widest sense of the concept.
We might consider ‘jobs with “alignment” or “safety” in the title’ to be a neglected and/or more valuable subfield, but to do so IMO we have to acknowledge the OP’s point.
I was going from this: “The DICE baseline emissions scenario results in 83 million cumulative excess deaths by 2100 in the central estimate. Seventy-four million of these deaths can be averted by pursuing the DICE-EMR optimal emissions path.” I didn’t get into deaths vs DALYs (excess deaths among those with less life left to live), chances of scenarios, etc, and gave ‘on the order of’ for slack.
“But I don’t see why we’re talking about scale. Are you defining neglectedness as a ratio of <people potentially killed in worst case>/<dollars spent>?”
Mean, not worst case, and not just deaths. That’s the shape of the most interesting form to me. You could say that cash transfers in every 1000-person town in a country with a billion people (and a uniform cash transfer program) are a millionfold less impactful and a million times more neglected than cash transfers to the country as a whole, cancelling out, but those semantics aren’t really going to be interesting to me.
I think it’s fairly clear that there is a vast difference between the work that those concerned with catastrophic AI safety as such have been doing vs random samples of Google staff, and that in relevant fields (e.g. RLHF, LLM red-teaming, or AI forecasting) they are quite noticeable as a share of global activity. You may disagree. I’ll leave the thread at that.
I’m happy to leave it there, but to clarify I’m not claiming ‘no difference in the type of work they do’, but rather ‘no a priori reason to write one group off as “not concerned with safety”’.
Just a note that GiveWell started out with many cause areas including “US Equality of Opportunity” and it was only after a few years of work that they realized that their other causes (besides global health and development) were not really justifying continued research.
Thanks for mentioning ALLFED! We touched on this in one of our papers:
Or another way of saying this is that different solutions could solve different parts or percentages of the problem. So really I think we should be doing more actual cost-effectiveness analyses, and only using ITN for initial screening.
Partly what made me think of this was talking to you about ALLFED in Gothenburg in 2018.
Ultimately, I think this is just a band-aid solution to the fundamental problem of the INT framework: as I and others have written elsewhere, the INT framework is just a heuristic for thinking about overall cause areas; it is invalid (or prone to mislead, or inefficient) when it comes to evaluating specific decisions.
In contrast, I’ve spent a sizable amount of time developing an alternative framework which I think is actually reliable for evaluating specific decisions, here: https://forum.effectivealtruism.org/posts/gwQNdY6Pzr6DF9HKK/the-coils-framework-for-decision-analysis-a-shortened-intro
Counterpoint: if nuclear war is made less likely, the expected value of ALLFED’s work decreases, because the likelihood of its being useful decreases. There is definitely value in considering neglectedness at high levels of abstraction (i.e. cause areas).
Agree with this. It seems like this was a framework that made sense many years ago when the movement was small and had to be selective in where it searched, but makes less sense now that we’re big and have domain level experts in most fields to work with to find neglected solutions.
Point of confusion/disagreement: I don’t think EA is big (15k globally?). I don’t think EA has domain level experts in most fields to work with to find neglected solutions. EAs typically have (far) less than 15 years work experience in any field and in my experience, they don’t have extensive professional networks outside of EA.
We have a lot more than we did ten years ago! And I agree ITN has flaws regardless, but I wanted to point out that if those are someone’s 2 main objections to using ITN today, it might not apply.
I agree with the thrust of this. To me, much of the issue has to do with the coarseness of the ontology of interventions that we use.
Things like discernment can help break this down.
I really like this. I knew I had a bunch of problems with the ITN framework, some of them I could even flesh out, but I’ve never thought about this one with such clarity.
Thanks for publishing that; I also had a draft on this lying around somewhere!
Thanks for the write up!
I totally agree that when working on particular solutions the neglectedness of the solution is the important factor not just the problem area.
But I am slightly hesitant to fully agree with the proposed change in definition, because it only applies when working on specific solutions, rather than when working on a problem area more broadly or just building career capital around a problem.