
LLM Agent Architecture for a Product Documentation Chatbot

An interesting variation on the typical LLM application architecture

The LLM is used twice:

· First, to summarize the conversation so far into a standalone question

· Then again, with this standalone question and the relevant docs, to generate the answer (see the sketch after this list)
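A minimal sketch of the question-condensation step, assuming the OpenAI Python client as the LLM backend; the prompt wording, model name, and the condense_question helper are illustrative assumptions, not the product's actual implementation.

```python
# Sketch: condense the chat history plus a follow-up into a standalone question.
# Assumes the OpenAI Python client; any chat-completion API works the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONDENSE_PROMPT = (
    "Given the conversation below and a follow-up question, rewrite the "
    "follow-up as a single standalone question that can be understood "
    "without the conversation.\n\n"
    "Conversation:\n{history}\n\nFollow-up question: {question}\n\n"
    "Standalone question:"
)

def condense_question(history: list[tuple[str, str]], question: str) -> str:
    """history is a list of (role, text) turns, e.g. ("user", "How do I ...")."""
    history_text = "\n".join(f"{role}: {text}" for role, text in history)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; swap for whatever you deploy
        messages=[{"role": "user",
                   "content": CONDENSE_PROMPT.format(history=history_text,
                                                     question=question)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```

The standalone question is then used both as the retrieval query against the vector DB and as the question in the answer-generation prompt.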

Otherwise, the overall architecture follows the typical workflow (sketched in code after the list):

· Mine data/documents, create embeddings, and save them in a vector DB

· Construct a prompt from the user's query along with matching docs retrieved from the vector DB

· Execute the prompt as an inference call to a pre-trained LLM
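A minimal sketch of the three workflow steps, assuming Chroma as the vector DB and the OpenAI client for the final inference; the collection name, sample doc snippets, model name, and helper functions are illustrative assumptions rather than the documented system's actual components.

```python
# Sketch of the typical workflow: embed docs into a vector DB, retrieve
# matching docs for a question, and run the prompt against a pre-trained LLM.
import chromadb
from openai import OpenAI

llm = OpenAI()
vector_db = chromadb.Client()  # in-memory; use a persistent client in practice

# 1. Mine data/documents, create embeddings, and save them in the vector DB.
#    Chroma embeds each document with its default embedding function on add().
docs = [
    "To reset your API key, open Settings > Keys and click Regenerate.",
    "Webhooks can be configured per project under Settings > Integrations.",
    "Usage quotas reset at the start of each billing cycle.",
]
collection = vector_db.create_collection(name="product_docs")
collection.add(documents=docs, ids=[f"doc-{i}" for i in range(len(docs))])

# 2. Construct a prompt from the (standalone) question plus matching docs
#    retrieved from the vector DB.
def build_prompt(question: str, n_results: int = 2) -> str:
    hits = collection.query(query_texts=[question], n_results=n_results)
    context = "\n\n".join(hits["documents"][0])
    return ("Answer the question using only the documentation below.\n\n"
            f"Documentation:\n{context}\n\nQuestion: {question}\nAnswer:")

# 3. Execute the prompt as an inference call to the pre-trained LLM.
def answer(question: str) -> str:
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": build_prompt(question)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(answer("How do I regenerate my API key?"))
```

In the two-stage variant described above, the question passed to build_prompt would be the condensed standalone question rather than the raw follow-up from the user.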