Great question! Yes, this is definitely on our minds as a potential harm of Elicit.
People who end up with one-sided evidence right now probably fall into two loose groups:
People who accidentally end up with it because good reasoning is hard and time-consuming to do.
People who seek it out because they want to bolster a pre-existing belief.
For the first group – the accidental ones – we’re aiming to make good reasoning as easy as finding one-sided evidence (and ideally easier). Work we’ve done so far:
We have a “possible critiques” feature in Elicit which looks for papers that arrive at different conclusions. These critiques are surfaced – when available – whenever a user clicks in to see more information on a paper.
We have avoided using social standing cues such as citation counts when evaluating papers. We still show that data in the app, but don’t – for example – boost papers that are heavily cited. In this way, we hope to surface relevant and diverse papers from a range of authors, whether or not they happen to be famous.
At the same time, our initial users (professional researchers) are relatively immune to accidentally doing one-sided research, because they care a lot about careful and correct reasoning.
For the second group – the intentional ones – we expect that Elicit might have a slight advantage right now over alternative tools, but longer-term it probably won’t be more useful than other search tools that use language models with retrieval (e.g. this chatbot). And the better Elicit and other tools that care about good epistemics become, the easier it will be to expose misleading arguments from this second group.