
Building Agents with LangGraph and Needle
Learn how to build a powerful RAG agent that combines Needle's document processing with LangGraph's workflow orchestration
10 min read
In this tutorial, we'll build a RAG agent that performs four key tasks:
- Uploads documents to a Needle collection.
- Waits for the documents to finish indexing.
- Searches those documents based on a user's query.
- Responds with an answer using a Large Language Model (LLM).
We'll build this agent using two tools:
- Needle, a “RAG API and Knowledge Threading” platform for document ingestion, chunking, embedding, and semantic search.
- LangGraph, a library for creating stateful, multi-step LLM workflows (or “agents”) that maintain state between steps.
Why Needle and LangGraph?
- Needle handles the heavy lifting of storing and indexing your documents. Its built-in chunking, embedding, and semantic search features make it easy to build RAG applications without managing complex infrastructure.
- LangGraph specializes in creating stateful workflows for LLM-powered agents. It helps you chain together steps, like uploading files, waiting for indexing, searching, and more, while keeping track of important context along the way.
Together, these tools provide a clean, modular approach to building powerful RAG workflows.
Prerequisites
Before you begin, make sure you have:
- Python 3.9+ installed.
- A Needle account and API key.
- An OpenAI account and API key.
- Dependencies installed:
pip install needle langgraph langchain_community openai
- A Needle collection already created.
How It Works (In a Nutshell)
- Initialize: You specify your API keys, collection ID, and create a loader/retriever.
- Add a File: The agent calls add_file_to_collection.
- Indexing Delay: We wait 30 seconds to let Needle index the file.
- Search: The agent queries the newly added file for the user's question.
- Multi-Step: LangGraph tracks these steps so each one executes at the right time.
Key Takeaways
- Minimal Dependencies:
- Needle: For document ingestion and retrieval.
- LangGraph: For orchestrating a stateful workflow.
- OpenAI: For LLM responses.
- Stateful Workflows: LangGraph helps you split your process into small, manageable steps.
- Beginner-Friendly: The code waits for indexing automatically, so you don't have to remember to do it manually.
- Easily Extensible: You can add more steps without altering the rest of the workflow.
Conclusion
Combining Needle and LangGraph allows you to build RAG agents that gracefully handle multi-step processes, such as uploading documents, waiting for indexing, and searching. This example is just a starting point: customize the wait time, build more elaborate logic, or add new tools to your workflow.
Check out the Needle documentation for more details on advanced indexing options and LangGraph on GitHub for building more sophisticated multi-step LLM workflows.