
Step 1: Install the required dependencies

```bash
pip install pandas             # Used to load CSV datasets
pip install langchain          # Base library for prompts, chains, message templates, etc.
pip install langchain-ollama   # Connects FastAPI → LangChain → Ollama → your local LLM
pip install langchain-chroma   # Vector database for RAG
```
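If you prefer a single install step, the same four packages can go in a requirements.txt (names taken from the list above; pin versions as needed):

```
pandas
langchain
langchain-ollama
langchain-chroma
```

Then install everything at once with `pip install -r requirements.txt`.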

Step 2: Pull the Ollama models. The models I use are listed below:

💡 Run this in a terminal (Ctrl + Alt + T):

```bash
ollama pull qwen2.5:1.5b         # required in main.py
ollama pull mxbai-embed-large    # required in vector.py
```

Step 3: Prepare your CSV file and load it.
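As a minimal sketch of the loading step (the file name and columns here are made up; substitute your own dataset):

```python
import pandas as pd

# Write a tiny sample CSV so this sketch is self-contained;
# in practice, point read_csv at your own dataset file.
with open("reviews.csv", "w") as f:
    f.write("Title,Review\nGreat pizza,Loved the crust\nSlow service,Waited an hour\n")

df = pd.read_csv("reviews.csv")   # load the CSV into a DataFrame
print(df.shape)                   # (2, 2)
print(list(df.columns))           # ['Title', 'Review']
```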


Step 4: Run vector.py to create vector embeddings for your CSV document.

💡 If you already have a db, or are creating a new db with a new model, delete the previous one with this command:

```bash
rm -rf YOURDATABASENAME
# e.g. rm -rf chrome_langchain_db
```
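The same cleanup can also be done from Python with only the standard library, which is handy on systems without `rm` (the directory name below is the hypothetical one from the example above):

```python
import os
import shutil

DB_DIR = "chrome_langchain_db"  # hypothetical database directory name

# Remove the stale vector store, if present, so vector.py rebuilds it fresh.
if os.path.isdir(DB_DIR):
    shutil.rmtree(DB_DIR)

print(os.path.isdir(DB_DIR))  # False
```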

Step 5: Run main.py to start your chatbot.
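To give a feel for what main.py does, here is a minimal sketch of the prompt-assembly step; the template wording and the helper name are illustrative assumptions, not the actual code. The formatted prompt would then be sent to qwen2.5:1.5b through langchain-ollama.

```python
# Illustrative sketch only: the template text and helper name are assumptions,
# not the actual contents of main.py.
TEMPLATE = (
    "You are an expert at answering questions about the dataset.\n"
    "Relevant rows:\n{rows}\n\n"
    "Question: {question}"
)

def build_prompt(rows: str, question: str) -> str:
    """Fill the template with retrieved rows and the user's question."""
    return TEMPLATE.format(rows=rows, question=question)

prompt = build_prompt("Great pizza | Loved the crust", "What do people like?")
print(prompt.splitlines()[-1])  # Question: What do people like?
```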