Luke Thorburn
Karma: 18
Paperclip Club (AI Safety Meetup)
Capabilities like automated reasoning and improved literature search have the potential to reinforce the effects of confirmation bias. For example, people can more easily find research that supports their beliefs, or generate new reasons to support their beliefs. Have you done much thinking about this? Is it possible this risk outweighs the benefits of tools like Elicit? How might this risk be mitigated?
What are your plans for making Elicit financially sustainable? Do you intend to commercialise it? If so, what pricing model are you leaning towards?