10. “Obvious questions”
(Just my personal, current, non-expert thoughts, as always. Also, I’m not sure I’m addressing precisely the question you had in mind.)
A summary of my recommendations in this vicinity:
If people want to do research and want a menu of ideas/questions to work on, including ideas/questions that seem like they obviously should have a bunch of work on them but don’t yet, they could check out this central directory for open research questions, and/or an overlapping 80,000 Hours post.
If people want to discover “new” instances of such ideas/questions, one option might be to just try to notice ideas/variables/assumptions that seem important to some people’s beliefs, but that seem debatable and vague, have been contested by others, and/or haven’t been stated explicitly and fleshed out.
One way to do this might be to have a go at rigorously, precisely writing out the arguments that people seem to be acting as if they believe, in order to spot the assumptions that seem required but that those people haven’t stated/emphasised.
One could then try to explore those assumptions in detail, either just through more fleshed-out “armchair reasoning”, or through looking at relevant empirical evidence and academic work, or through some mixture of those things.
I think this is a big part of what I’ve done this year.
Here’s one example of a piece of my own work which came from roughly that sort of process.
I’ll add more detailed thoughts below.
---
I interpret this question as being focused on cases in which an idea/open question seems like it should’ve been obvious, or seems obvious in retrospect, yet it has been neglected so far. (Or the many cases we should assume still exist in which the idea/question is still neglected, but would—if and when finally tackled—seem obvious.)
It seems to me that there are two major types of such cases:
Unnoticed: Cases in which the ideas/open questions haven’t even been noticed by almost anyone
Or at least, almost anyone in the relevant community/field.
So I’d still say an idea counts as “unnoticed” for these purposes even if, for example, a very similar idea has been explored thoroughly in sociology, but no one in longtermism has noticed that that idea is relevant to some longtermist issue, nor independently arrived at a similar idea.
Noticed yet neglected: Cases in which the ideas/open questions have been noticed, but no one has really fleshed them out or tackled them much
E.g., a fair number of longtermists have noticed the question of how likely various types of recovery are from various types of civilizational collapse. But as far as I’m aware, there was nothing even approaching a thorough analysis of the question until some recent still-in-progress work, and there’s still room for much more work here.
More thoughts and notes on this here and here.
Another example is the set of questions about how likely global, stable totalitarianism is; what factors could increase or decrease the odds of it; and what to do about it. Some people have highlighted such questions (including but not only in the context of advanced AI), but I’m not aware of any detailed work on them.
This is really more a continuum than a binary distinction. In almost all cases, there’s probably been someone in a relevant community who’s at least briefly noticed something relevant. But sometimes it’ll just be that something kind-of relevant has been discussed verbally a few times and then forgotten, while other times it’ll be that people have prominently highlighted pretty precisely the relevant open question, yet no one has actually worked on it. (And of course there’ll be many cases in between.)
---
For “noticed yet neglected” ideas/questions, recommendation 1 from above will be more relevant: people could find many ideas/questions of this type in this central directory for open research questions, and just get cracking on them.
That directory is like a map pointing the way to many trees that might be full of low-hanging fruit that would’ve been plucked by now in a better world. And I really would predict that a lot of EAs could do valuable work by just having a go at those questions. (I’m less confident that this is the most valuable thing lots of EAs could be doing, and each person would have to think that through for themselves, in light of their specific circumstances. See also.)
So we don’t necessarily need all EA-aligned researchers to try to cultivate a skill of “noticing the ideas that should’ve been tackled/fleshed out already” (though I’m sure some should). Some could just focus on actually exploring the ideas that have been noticed but still haven’t been tackled/fleshed out.
---
For “unnoticed” ideas/questions, recommendation 2 from above will be more relevant.
I think this dovetails somewhat with Ben Garfinkel calling for[1] more people to just try to rigorously write up more detailed versions of arguments about AI risk that often float around in sketchier or briefer form. (Obviously brevity is better than length, all else held equal, but often a few pages isn’t enough to give an idea proper treatment.)
---
There are at least two other approaches for finding “unnoticed” ideas/questions which seem to have sometimes worked for me, but which I’m less sure would often be useful for many people, and less sure I’ll describe clearly. These are:
Trying to sketch out causal diagrams of the pathway to something (e.g., an existential catastrophe) happening
I think that doing something like this has sometimes helped me notice that there are:
assumptions or steps missing in the standard/fleshed-out stories of how something might happen,
alternative pathways by which something could happen, and/or
alternative/additional outcomes that may occur
See also
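To make the causal-diagram move a bit more concrete, here’s a minimal toy sketch (all the event names and links below are invented placeholders, not claims about any real pathway): represent the diagram as a directed graph and enumerate the routes to the outcome, which makes it easier to notice missing steps and alternative pathways.

```python
# Toy sketch: a causal pathway represented as a directed graph.
# The events and edges are invented placeholders purely for illustration.
graph = {
    "warning signs ignored": ["unsafe deployment"],
    "unsafe deployment": ["small-scale failure", "large-scale failure"],
    "small-scale failure": ["course correction", "large-scale failure"],
    "course correction": [],
    "large-scale failure": ["catastrophe"],
    "catastrophe": [],
}

def all_paths(graph, start, goal, path=None):
    """Enumerate every route from start to goal in the diagram."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid revisiting a node (no cycles in a path)
            paths.extend(all_paths(graph, nxt, goal, path))
    return paths

for p in all_paths(graph, "warning signs ignored", "catastrophe"):
    print(" -> ".join(p))
```

Listing the enumerated routes side by side is where gaps tend to show up: a step you’d implicitly assumed but never drew, or a second route to the outcome you hadn’t considered.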
Trying to define things precisely, and/or to precisely distinguish concepts from each other, and seeing if anything interesting falls out
Here’s an abstract example, but one which matches various real examples that have happened for me:
I try to define X, but then notice that that definition would fail to cover some cases of what I’d usually think of as X, and/or that it would cover some cases of what I’d usually think of as Y (which is a distinct concept).
This makes me realise that X and/or Y might be able to take somewhat different forms or occur via different pathways to what was typically considered, or that there’s actually an extra requirement for X or Y to happen that was typically ignored.
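As a purely illustrative sketch of that move (the concept, the candidate definition, and the cases are all invented placeholders, not anything from the text): write the candidate definition down as a predicate, then run it over cases you already have intuitions about and see where it disagrees with them.

```python
# Toy sketch of stress-testing a definition: encode the candidate
# definition as a predicate, then check it against cases with prior
# intuitive labels. All details here are invented for illustration.

def is_collapse(event):
    """Candidate definition: 'a collapse is any event killing >50% of people'."""
    return event["deaths_fraction"] > 0.5

# Cases with prior intuitive labels about whether they count as collapse:
cases = [
    {"name": "pandemic, 60% dead, institutions intact",
     "deaths_fraction": 0.6, "intuition": False},
    {"name": "war, 20% dead, all institutions destroyed",
     "deaths_fraction": 0.2, "intuition": True},
]

for c in cases:
    verdict = is_collapse(c)
    if verdict != c["intuition"]:
        kind = "false positive" if verdict else "false negative"
        print(f"{kind}: {c['name']}")
```

Each mismatch is a prompt to either refine the definition (perhaps institutional destruction, not death toll, is the load-bearing requirement) or revise the intuition, and either way something interesting has fallen out.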
I feel like it’d be easy to misinterpret my stance here.
I actually think that definitions will never or almost never really be “perfect”, and I agree with the ideas in this post (see also family resemblance). And I think that many debates over definitions are largely nitpicking and wasting time.
But I also think that, in many cases, being clearer about definitions can substantially benefit both thought and communication.
---
I should again mention that I’m only ~1.5 years into my research career, so maybe I’ll later change my mind about a bunch of those points, and there are probably a lot of useful things that could be said on this that I haven’t said.
[1] See the parts of the transcript after Howie asks “Do you know what it would mean for the arguments to be more sussed out?”