Features

We keep up with the latest research and methods so you can focus on making your product better

1. Managed Processing of Documents

Auto-chunking - we suggest a default chunking method based on your data structure, and you can experiment with alternatives at any time

Multi-indexing - we leverage LLMs to create higher-level summaries and vector embeddings from your documents (think vectors for sentences, then paragraphs, then chapters, and so on)

Multi-Level Document Structure - The latest generation of LLMs has larger context windows, so you can retrieve the whole document, not just the matching chunk, for better performance


Simply upload your docs using our API, as in the sketch below.
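
The snippet below is a minimal sketch of what an upload could look like; the endpoint URL, header, and option names are illustrative assumptions, not documented parts of the Vesana API.

```python
# Hypothetical upload sketch: the endpoint, fields, and options below are
# assumptions for illustration, not the documented Vesana API.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential

with open("handbook.pdf", "rb") as f:
    response = requests.post(
        "https://api.vesana.dev/v1/documents",           # assumed endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": f},
        data={
            "chunking": "auto",     # accept the suggested default chunking
            "multi_index": "true",  # build sentence/paragraph/chapter level embeddings
        },
        timeout=30,
    )

response.raise_for_status()
print(response.json())  # e.g. the new document ID and its indexing status
```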

2. Retrieve the "Right" Documents

Hybrid Search - use familiar SQL syntax to filter documents before running the vector similarity search (see the sketch after this list)

Pre-query Transformation - Vesana transforms the user query into a vector representation optimized for your dataset

Attribution - the search function not only returns the documents that match the query, but also highlights why each one matched

Data Analysis and Dynamic Field Creation - In addition to the searchable fields you add manually, our system extracts quantitative and keyword fields from the text, surfacing patterns in the dataset and making them available to hybrid search
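
To make the hybrid search idea concrete, here is a self-contained local sketch of the pattern: an SQL WHERE clause narrows the candidate set, then cosine similarity ranks the survivors. The toy SQLite schema and random embeddings are illustrative assumptions and say nothing about how Vesana stores data internally.

```python
# Conceptual sketch of hybrid search: an SQL filter narrows the candidate set,
# then vector similarity ranks what remains. The schema and the tiny random
# embeddings are illustrative assumptions, not Vesana's internal storage.
import sqlite3

import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding size

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE docs (id INTEGER PRIMARY KEY, category TEXT, year INTEGER, text TEXT, embedding BLOB)"
)
corpus = [
    ("billing", 2023, "How refunds are processed"),
    ("billing", 2021, "Legacy invoicing workflow"),
    ("security", 2023, "Rotating API keys safely"),
]
for i, (category, year, text) in enumerate(corpus):
    emb = rng.normal(size=DIM).astype(np.float32)
    conn.execute("INSERT INTO docs VALUES (?, ?, ?, ?, ?)", (i, category, year, text, emb.tobytes()))

# Step 1: SQL-style pre-filter (the "familiar SQL syntax" part).
rows = conn.execute(
    "SELECT id, text, embedding FROM docs WHERE category = ? AND year >= ?",
    ("billing", 2022),
).fetchall()

# Step 2: vector similarity over the filtered candidates only.
query_vec = rng.normal(size=DIM).astype(np.float32)  # stand-in for an embedded user query

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(
    ((cosine(query_vec, np.frombuffer(blob, dtype=np.float32)), doc_id, text)
     for doc_id, text, blob in rows),
    reverse=True,
)
for score, doc_id, text in ranked:
    print(f"{score:+.3f}  doc {doc_id}: {text}")
```

The ordering matters for cost and precision: the structured filter is cheap and removes documents that could never be valid answers, so the similarity ranking only runs over relevant candidates.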

3. Integrated Evaluation Framework

Guided Console Interface - helps you create evaluation criteria so that you can measure and iterate on the performance of your semantic search

On-Demand Reindexing - As you apply configuration changes, your data is reindexed and re-evaluated by the framework (see the sketch below)
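
As a rough illustration of what an evaluation criterion can look like, the sketch below defines a small set of (query, expected document) pairs and a recall@k metric; the search callable is a hypothetical stand-in, not part of Vesana's interface.

```python
# Illustrative sketch of a retrieval evaluation loop: a small set of
# (query, expected document) pairs plus a recall@k metric. The search
# callable is a hypothetical stand-in for whatever retrieval call you use.
from typing import Callable, List, Tuple

EvalCase = Tuple[str, str]  # (query, id of the document that should be retrieved)

def recall_at_k(search: Callable[[str, int], List[str]],
                cases: List[EvalCase], k: int = 5) -> float:
    """Fraction of cases whose expected document appears in the top-k results."""
    hits = sum(1 for query, expected_id in cases if expected_id in search(query, k))
    return hits / len(cases)

# Toy keyword-overlap search so the sketch runs end to end.
def dummy_search(query: str, k: int) -> List[str]:
    corpus = {"doc-1": "refund policy", "doc-2": "api key rotation", "doc-3": "invoice history"}
    scored = sorted(corpus, key=lambda doc_id: -sum(w in corpus[doc_id] for w in query.lower().split()))
    return scored[:k]

cases: List[EvalCase] = [("what is the refund policy", "doc-1"), ("rotate my api key", "doc-2")]
print(f"recall@1 = {recall_at_k(dummy_search, cases, k=1):.2f}")
# Re-run the same cases after a configuration change (e.g. a different chunking
# method) to compare scores before and after the data is reindexed.
```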

Want more information?

For any inquiry, contact us at team@vesana.dev