Language models and search
Large Language Models (LLMs) are revolutionizing how machines understand and generate human-like text. They can be used for text generation, translation, classification, summarization, and question answering, among other tasks. Search is one of the fields most strongly affected by this new technology.
This event gives you an overview of how search typically works, and of the innovative approaches that the use of large language models in search makes possible.
Tekna invited two great speakers, James Briggs (Pinecone) and Jo Kristian Bergum (Vespa.ai), who helped us understand how information retrieval can be implemented with LLMs, the scenarios in which LLMs can help you, and those in which they fail miserably.
You will gain a better understanding of the latest developments within LLMs and search, and of how they can benefit your particular use case.
- Welcome and introduction,
Marco Bertani-Økland, chair of Tekna Big Data
- Making Retrieval Work for LLMs,
James Briggs
An exploration of the different ways in which we can implement information retrieval for use with Large Language Models.
- Neural Search using Language Models,
Jo Kristian Bergum
We have witnessed a surge of interest in retrieval methods and systems that enhance the output of generative large language models (LLMs), such as ChatGPT or Llama 2. This retrieval augmentation aims to bridge knowledge gaps, connecting pre-trained generative language models to private data beyond their training corpus. In such settings, retrieval quality sets the upper bound on the quality of the generated answer. So how do you optimize a system for retrieval quality?
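The retrieval-augmentation flow described above can be sketched in a few lines. This is a minimal illustration, not production code: the bag-of-words "embedding" below is a toy stand-in for a neural encoder, and real systems would use a vector database (such as Pinecone or Vespa) instead of a linear scan.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (a stand-in for a neural encoder)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Vespa is a platform for serving big data and neural search.",
    "Pinecone is a managed vector database for similarity search.",
    "Oslo is the capital of Norway.",
]

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the prompt with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is a vector database?", documents))
```

Because the generated answer can only be as good as the context passed in, everything hinges on `retrieve` putting the right documents at the top.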
A different class of neural language models (non-generative) and training objectives has proven effective for retrieval. These neural retrieval methods, collectively called semantic search, have been around longer than high-quality generative LLMs, so we have a better understanding of how to evaluate them and of what works and what doesn't.
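That longer track record includes well-established offline evaluation metrics. As an illustration, two common ones are recall@k and reciprocal rank, which can be computed from a system's ranking and a set of labeled relevant documents (the IDs below are made up for the example):

```python
def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the relevant documents found in the top-k results."""
    hits = len(set(ranked_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids)

def reciprocal_rank(ranked_ids, relevant_ids):
    """1/rank of the first relevant result (0.0 if none is retrieved)."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

# One query: the system ranked doc 7 first, but docs 3 and 9 are relevant.
ranking = [7, 3, 1, 9, 5]
relevant = [3, 9]
print(recall_at_k(ranking, relevant, k=3))  # 0.5: only doc 3 is in the top 3
print(reciprocal_rank(ranking, relevant))   # 0.5: first relevant hit at rank 2
```

Averaging reciprocal rank over many queries gives MRR, a standard way to compare retrieval systems before any generative model is involved.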
This presentation explores two model architectures for implementing semantic search. It highlights the benefits of neural search but also presents scenarios where neural models fail miserably compared to more straightforward methods, particularly in new domains or languages.
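The two architectures most often contrasted for semantic search are bi-encoders and cross-encoders; the presentation's own examples are not reproduced here, but the distinction can be sketched with toy bag-of-words functions standing in for trained transformer encoders:

```python
import re
from collections import Counter

def encode(text):
    """Toy encoder: bag of words (a stand-in for a transformer embedding)."""
    return Counter(re.findall(r"\w+", text.lower()))

def similarity(a, b):
    """Toy similarity: token overlap (a stand-in for cosine similarity)."""
    return sum((a & b).values())

def bi_encoder_search(query, docs):
    """Bi-encoder: query and documents are encoded independently, so
    document vectors can be pre-computed and indexed. Fast at scale."""
    q = encode(query)
    return max(docs, key=lambda d: similarity(q, encode(d)))

def cross_encoder_score(query, doc):
    """Cross-encoder stand-in: scores the (query, document) pair jointly.
    Real cross-encoders are more accurate but too slow for a full corpus,
    so they typically re-rank the bi-encoder's top candidates."""
    return similarity(encode(query), encode(doc))

documents = [
    "neural search ranks documents with vector embeddings",
    "keyword search uses inverted indexes",
    "a recipe for cooking pasta at home",
]
query = "vector search for documents"
best = bi_encoder_search(query, documents)
reranked = max(documents, key=lambda d: cross_encoder_score(query, d))
print(best)
```

The trade-off the talk hints at also shows up here: the bi-encoder's independent encoding is what makes large-scale retrieval tractable, while joint pair scoring is reserved for a small candidate set.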