How to Fix the Main Problem in Conversational AI Search
The Problem
The crisis the search industry is facing goes beyond changing business models. More and more, users are losing the power to control the information they consume. In traditional search interfaces, users were presented with a list of results - a UX model that, while neither perfect nor unbiased, still empowered researchers, journalists, historians, and everyday users to explore different points of view and decide for themselves what was relevant. Whether the answer appeared on page one or page five, users could weigh their options, compare sources, and form their own opinions if they chose to.
Traditional Search UX: A Balance of Algorithms and Intuitive UX
When I began my career as a search engineer, my initial approach was to improve user experience through better algorithms - enhancing relevance with NLP techniques like Named Entity Recognition (NER) and marrying them with retrieval algorithms such as tf-idf and PageRank.
Resources like "Building Search Applications: Lucene, LingPipe, and Gate" by Manu Konchady helped me bridge theory and practice when building better retrieval systems.

But even back then, it became clear to me that a great search experience isn't just about better algorithms; it's about thoughtful UX design. So I began reading about search UX.
Books like "Search Patterns: Design for Discovery" by Peter Morville and Jeffery Callender introduced and cataloged nearly all of the UI/UX search patterns, most of which centered on list-based results.

The central UI/UX element was a list of results, which meant the choice of answer was left to the user. Users had the power to pick an answer, or to use what they learned during exploration to conduct further research.
And yes, there were ranking and reranking algorithms that reordered the results based on the business model (e.g., an advertising model), location, and even personalization drawn from users' previous choices or preferences. But nevertheless - even if the list was neither perfect nor unbiased - the user had a list of results and the power to choose an element from it, even on the 10th page. This is no longer the case in conversational AI search and question-answering UX.
Conversational AI & RAG UX
In a chatbot or conversational AI UX, users have no power to choose their answers.
While Retrieval-Augmented Generation (RAG) aims to ground responses in retrieved documents, thereby reducing the risk of hallucination, it still removes a key element: the user's choice. Without a list of results in the UX, users cannot page through options and must rely on the single answer coming from the LLM. The information-retrieval part of RAG can do a great job of finding a set of results that matches the user's question, and yet users cannot see how the answer is generated. We are literally giving up our agency.
The Real Fix: Make RAG Search UX Dynamic and Transparent
Ranking and reranking algorithms are nothing new in search applications. Learning-to-rank (LTR) models have long been employed to improve or reorder the list of results returned by the main retrieval algorithm. Business models and personalization can also be implemented at this crucial step. In fact, with the advent of LLMs, LLMs themselves are now trained as specialized semantic rerankers (such as the one developed by Cohere).
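To make the idea concrete, here is a minimal sketch of reranking with a user-tunable blend weight. The field names (`lexical_score`, `semantic_score`) and the single `alpha` parameter are illustrative assumptions, not the API of any particular LTR library or Cohere's reranker:

```python
# A minimal sketch of score-based reranking with one tunable weight.
# Field names and the blending scheme are illustrative assumptions.

def rerank(results, alpha=0.5):
    """Blend the retriever's lexical score with a semantic score.

    Each result is a dict with 'lexical_score' and 'semantic_score'
    already normalized to [0, 1]. alpha controls the blend: 0 keeps
    the original retrieval order, 1 ranks purely by semantic score.
    """
    return sorted(
        results,
        key=lambda r: (1 - alpha) * r["lexical_score"] + alpha * r["semantic_score"],
        reverse=True,
    )

results = [
    {"id": "a", "lexical_score": 0.9, "semantic_score": 0.2},
    {"id": "b", "lexical_score": 0.4, "semantic_score": 0.95},
]
print([r["id"] for r in rerank(results, alpha=0.0)])  # ['a', 'b']
print([r["id"] for r in rerank(results, alpha=1.0)])  # ['b', 'a']
```

The point is that `alpha` is an argument, not a constant baked in at deploy time - which is exactly the property a user-facing control needs.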
The fix isn't just about better ranking models or semantic rerankers (though those help), or about fine-tuning and training LLMs to have less bias (an enormously hard task), similar to what Perplexity did with DeepSeek to inject omitted historical events back into the LLM.
It’s about building user-centric, transparent search experiences that give users back some control.
Here’s how:
Build Dynamic RAG Settings and Parameters by design: All settings should be modifiable and configurable at runtime and query time.
Expose Dynamic RAG Settings in the UI: Allow users to tune RAG settings such as reranking parameters (e.g., relevance thresholds, source lists, number of sources to include, sort order) at query time.
Make Ranking Adjustable: Integrate sliders, filters, or toggles that allow users to influence how results are prioritized.
Show Source Alternatives: Let users see not just the final generated response but also, on request, the documents or passages that informed it.
Support Exploration: Offer interactive tools to refine queries, dive deeper into source documents, or pivot based on user curiosity, approximating the faceted-search UI/UX.
Key Takeaway:
A successful chatbot is a user-centric one.
To build a user-centric chatbot or conversational AI, the parameters of the RAG application - in particular the reranker parameters - need to be dynamic at query time, and they need to be exposed in the UI/UX so that users can at least change them and regain some control.
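Transparency works the same way: the response the backend hands to the UI can carry the grounding passages alongside the generated answer, so a "show sources" control costs nothing extra. A minimal sketch, where `generate_answer` is a hypothetical stand-in for the actual LLM call:

```python
# Sketch: return the answer together with the passages that grounded it,
# so the UI can show sources on request. generate_answer is a
# hypothetical stand-in for the real LLM call.

def answer_with_sources(question, passages, generate_answer):
    context = "\n\n".join(p["text"] for p in passages)
    return {
        "answer": generate_answer(question, context),
        "sources": [
            {"id": p["id"], "source": p["source"], "score": p["score"]}
            for p in passages
        ],
    }
```

The UI can render only `answer` by default and reveal `sources` when the user asks, restoring the compare-and-choose step that a plain chat bubble hides.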
Research from Microsoft has already shown that critical thinking in students is in decline. If humans are not stochastic parrots, they may yet become them.
Sometimes a simple UX, enabled by the underlying technology, can have a great impact on human behavior. Please expose all of your RAG settings to your UI and build thoughtful UX around them.


