The goal for Elicit is to be a research assistant, leading to more and higher-quality research. Literature review is only one small part of that: we would like to add functionality like brainstorming research directions, finding critiques, identifying potential collaborators, …
Beyond that, we believe that factored cognition could scale to lots of knowledge work. Anywhere the tasks are fuzzy, open-ended, or have long feedback loops, we think Elicit (or our next product) could be a fit. Journalism, think-tanks, policy work.
It is, very much. Answering so-called “strength of evidence” questions accounts for a big chunk of researchers’ time today.
Thank you, this was super informative! My understanding of Ought just improved a lot.
Once you’re able to answer questions like that, what do you build next?
Is “Was this a double-blind study?” an actual question that your users/customers are very interested in?
If not, could you give me some other example that is?
You’re welcome!