LangGraph, a powerful framework built on LangChain, allows developers to build robust and stateful AI applications. If you’re new to the LangChain ecosystem, including LangGraph, LangFlow, and LangSmith, we highly recommend reviewing this introductory post to grasp the fundamentals:
Understanding LangChain, LangGraph, LangFlow, and LangSmith: Building Robust AI Applications
Now, let’s dive into a practical example of integrating LangGraph using JavaScript. For this walkthrough, we’ll be following the official LangGraph.js tutorials, specifically the workflows section, which you can find here:
LangGraph.js Tutorials: Workflows
Setting Up Your JavaScript Project
Let’s begin by setting up our project environment. Create a new folder named langchain-node-js and navigate into it using your terminal.
mkdir langchain-node-js
cd langchain-node-js
Next, initialize a new Node.js project. You can accept the default values by pressing Enter for each prompt, or customize them as needed.
npm init
You’ll see output similar to this:
C:\vscode-node-workspace\langchain-node-js>npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help init` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: (langchain-node)
version: (1.0.0)
description:
entry point: (index.js)
test command:
git repository:
keywords:
author:
license: (ISC)
About to write to C:\vscode-node-js-workspace\langchain-node-js\package.json:

{
  "name": "langchain-node",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "description": ""
}

Is this OK? (yes)
Important: To avoid the MODULE_TYPELESS_PACKAGE_JSON warning when running your code, add "type": "module" to your package.json file. This specifies that your project uses ES modules. Your package.json should look like this:
{
  "name": "langchain-node",
  "version": "1.0.0",
  "main": "index.js",
  "type": "module", // Add this line
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "description": ""
}
Now, let’s install the necessary packages for our LangGraph application:
npm install @langchain/langgraph @langchain/core zod @langchain/google-genai dotenv
You’ll see output indicating the packages being added and audited, similar to this:
C:\vscode-node-workspace\langchain-node-js>npm install @langchain/core
added 29 packages, and audited 30 packages in 10s
...
C:\vscode-node-workspace\langchain-node-js>npm install zod
up to date, audited 30 packages in 2s
...
C:\vscode-node-workspace\langchain-node-js>npm install @langchain/langgraph
added 5 packages, and audited 35 packages in 5s
...
C:\vscode-node-workspace\langchain-node-js>npm install @langchain/google-genai
added 3 packages, and audited 38 packages in 3s
...
C:\vscode-node-workspace\langchain-node-js>npm install dotenv
added 1 package, and audited 39 packages in 2s
...
(Note: You might see messages about funding or new npm versions, which you can typically ignore for this tutorial.)
Building Your LangGraph Agent
Create a file named index.js in your langchain-node-js folder and add the following code. It defines a simple LangGraph agent that can perform arithmetic operations (add, multiply, divide) using tools.
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { config } from "dotenv";

config();

// Define tools for arithmetic operations
const multiply = tool(
  async ({ a, b }) => {
    return (a * b).toString();
  },
  {
    name: "multiply",
    description: "Multiply two numbers together",
    schema: z.object({
      a: z.coerce.number().describe("first number"),
      b: z.coerce.number().describe("second number"),
    }),
  }
);

const add = tool(
  async ({ a, b }) => {
    return (a + b).toString();
  },
  {
    name: "add",
    description: "Add two numbers together",
    schema: z.object({
      a: z.coerce.number().describe("first number"),
      b: z.coerce.number().describe("second number"),
    }),
  }
);

const divide = tool(
  async ({ a, b }) => {
    if (b === 0) {
      return "Error: Cannot divide by zero.";
    }
    return (a / b).toString();
  },
  {
    name: "divide",
    description: "Divide two numbers",
    schema: z.object({
      a: z.coerce.number().describe("first number"),
      b: z.coerce.number().describe("second number"),
    }),
  }
);

// Augment the LLM with the defined tools
const tools = [add, multiply, divide];
const toolsByName = Object.fromEntries(tools.map((tool) => [tool.name, tool])); // Not directly used in this example, but useful for mapping tools by name

const gemini_api_key = process.env.GEMINI_API_KEY; // Ensure you have GEMINI_API_KEY set in your .env file
const model = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-flash-latest",
  apiKey: gemini_api_key,
  maxOutputTokens: 2048,
});

const llmWithTools = model.bindTools(tools);

// Define the LLM call node
async function llmCall(state) {
  // LLM decides whether to call a tool or not based on the input
  const result = await llmWithTools.invoke([
    {
      role: "system",
      content:
        "You are a helpful assistant tasked with performing arithmetic on a set of inputs.",
    },
    ...state.messages,
  ]);

  return {
    messages: [result],
  };
}

// Define the Tool Node
const toolNode = new ToolNode(tools);

// Conditional edge function to route to the tool node or end the workflow
function shouldContinue(state) {
  const messages = state.messages;
  const lastMessage = messages.at(-1);

  // If the LLM suggests a tool call, route to the 'tools' node
  if (lastMessage?.tool_calls?.length) {
    return "Action";
  }
  // Otherwise, the LLM has provided a final answer, so end the workflow
  return "__end__";
}

// Build the LangGraph workflow
const agentBuilder = new StateGraph(MessagesAnnotation)
  .addNode("llmCall", llmCall) // Add the LLM call node
  .addNode("tools", toolNode) // Add the tools node
  .addEdge("__start__", "llmCall") // Start the workflow with the LLM call
  .addConditionalEdges(
    "llmCall", // From the LLM call node
    shouldContinue, // Use the shouldContinue function to determine the next step
    {
      "Action": "tools", // If 'Action', go to the 'tools' node
      "__end__": "__end__", // If '__end__', terminate the workflow
    }
  )
  .addEdge("tools", "llmCall") // After tools execute, return to the LLM for further processing or a final answer
  .compile();

// Invoke the agent with a sample message
const messages = [
  {
    role: "user",
    content: "Add 3 and 4 then multiply that by 10 and divide by 2.",
  },
];
const result = await agentBuilder.invoke({ messages });
console.log(result.messages);
Before running the code, ensure you have a .env file in your langchain-node-js directory with your GEMINI_API_KEY:
GEMINI_API_KEY="YOUR_GEMINI_API_KEY_HERE"
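If the key is missing, the model simply receives undefined and fails later with a less obvious error, so it can help to fail fast. Here is a minimal, optional sketch; the guard and its error message are an illustrative addition, not part of the tutorial code:

// Optional guard: place near the top of index.js, after config(),
// so .env has already been loaded. (Illustrative addition.)
if (!process.env.GEMINI_API_KEY) {
  throw new Error("GEMINI_API_KEY is not set; add it to your .env file.");
}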
Now, execute the index.js file:
node index.js
You will observe output similar to the following, showcasing the agent’s step-by-step execution:
C:\vscode-node-workspace\langchain-node-js>node index.js
[
  HumanMessage {
    "id": "4bafe76b-e7cf-4634-8c22-2602d36a5b62",
    "content": "Add 3 and 4",
    "additional_kwargs": {},
    "response_metadata": {}
  },
  AIMessage {
    "id": "aedbd68a-d3d8-4774-a30e-1840f68fb91c",
    "content": [
      {
        "functionCall": {
          "name": "add",
          "args": "[Object]"
        }
      }
    ],
    "additional_kwargs": {
      "finishReason": "STOP",
      "avgLogprobs": -0.00762864276766777
    },
    "response_metadata": {
      "tokenUsage": {
        "promptTokens": 68,
        "completionTokens": 5,
        "totalTokens": 73
      },
      "finishReason": "STOP",
      "avgLogprobs": -0.00762864276766777
    },
    "tool_calls": [
      {
        "name": "add",
        "args": {
          "b": 4,
          "a": 3
        },
        "type": "tool_call",
        "id": "64cbc615-325b-4211-bff6-7d29c4291b35"
      }
    ],
    "invalid_tool_calls": [],
    "usage_metadata": {
      "input_tokens": 68,
      "output_tokens": 5,
      "total_tokens": 73
    }
  },
  ToolMessage {
    "id": "d6ef6ed4-c7e8-4106-89f8-3f67f80da7ab",
    "content": "7",
    "name": "add",
    "additional_kwargs": {},
    "response_metadata": {},
    "tool_call_id": "64cbc615-325b-4211-bff6-7d29c4291b35"
  },
  AIMessage {
    "id": "56b262a0-0cbf-42cb-b66c-78a3a9477558",
    "content": "7\n",
    "additional_kwargs": {
      "finishReason": "STOP",
      "avgLogprobs": -0.0387737900018692
    },
    "response_metadata": {
      "tokenUsage": {
        "promptTokens": 76,
        "completionTokens": 2,
        "totalTokens": 78
      },
      "finishReason": "STOP",
      "avgLogprobs": -0.0387737900018692
    },
    "tool_calls": [],
    "invalid_tool_calls": [],
    "usage_metadata": {
      "input_tokens": 76,
      "output_tokens": 2,
      "total_tokens": 78
    }
  }
]
The output clearly shows the agent’s thought process. Initially, it identifies the need to “add 3 and 4,” then executes the add tool, resulting in 7. For different inputs, the agent leverages the appropriate tools (a sketch for printing just the final answer follows these examples):
- For “Multiply 3 and 4”:

AIMessage {
  "id": "4a5aef53-b844-4623-b99e-024eeaccebe1",
  "content": "The result is 12.\n",
  // ...
}

- For “Divide 21 and 7”:

AIMessage {
  "id": "76ba91e9-c734-4b30-9a0c-119d683939c5",
  "content": "The result is 3.\n",
  // ...
}
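If you only care about the agent’s final reply rather than the full message dump, here is a minimal sketch, assuming (as in the outputs above) that the last message in result.messages is the AIMessage carrying the final text:

// Print only the agent's final answer instead of the whole message array.
const finalMessage = result.messages.at(-1);
console.log(finalMessage.content); // e.g. "7\n" or "The result is 12.\n"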
When we provide a multi-step instruction like “Add 3 and 4 then multiply the result by 10,” the agent correctly processes it:
AIMessage {
  "id": "450bb6e5-56c0-4fed-b32c-885d50cf86de",
  "content": "The result of adding 3 and 4 is 7. Multiplying 7 by 10 gives 70. Therefore, the final answer is 70.\n",
  // ...
}
However, if the prompt is ambiguous or too complex for the LLM to map directly to a sequence of tool calls, you might encounter issues. For example, with “Add 3 and 4 then divide the result by 7”:
C:\vscode-node-workspace\langchain-node-js>node index.js
[
  HumanMessage {
    "id": "f130cb1e-ddae-47eb-a6bf-e806a4c4bd7b",
    "content": "Add 3 and 4 then divide the result by 7",
    "additional_kwargs": {},
    "response_metadata": {}
  },
  AIMessage {
    "id": "c7a9c1b9-d16c-499a-b99e-1bd396ab9399",
    "content": [],
    "additional_kwargs": {
      "finishReason": "MALFORMED_FUNCTION_CALL"
    },
    // ...
  }
]
As you can see, the agent returns a MALFORMED_FUNCTION_CALL error. Debugging such issues solely from console logs can be challenging. This is where LangSmith becomes invaluable!
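Before reaching for tracing, you can at least detect this failure mode programmatically. Here is a minimal sketch based on the fields visible in the logged AIMessage above; the finishReason value is Gemini-specific, and this check is an illustrative pattern rather than an official error-handling API:

// Check the last message for Gemini's MALFORMED_FUNCTION_CALL finish reason.
const last = result.messages.at(-1);
if (last?.additional_kwargs?.finishReason === "MALFORMED_FUNCTION_CALL") {
  console.warn(
    "Gemini returned a malformed function call; consider rephrasing the prompt or retrying."
  );
}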
Enhancing Debugging with LangSmith
LangSmith is an observability platform for LLM applications that provides detailed tracing, logging, and evaluation capabilities. It’s a game-changer for understanding and debugging complex LangGraph workflows.
Here’s how to integrate LangSmith:
- Sign up for a Free Account: Visit the LangSmith website and sign up for a free account.
- Generate an API Key: Once logged in, generate a new API key from your settings or dashboard.
- Install LangChain (for LangSmith Tracing):
npm install -S langchain
- Configure Environment Variables: Add the following environment variables to your .env file, replacing the placeholders with your actual keys and a suitable project name (note that the sample code above reads GEMINI_API_KEY, so keep that entry in your .env as well):

LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="<your-langsmith-api-key>"
LANGSMITH_PROJECT="your-langgraph-project-name" # e.g., "arithmetic-agent"
GOOGLE_API_KEY="<your-gemini-api-key>"
- Run Your Application: Now, when you execute your Node.js application, all the telemetry data (traces, LLM calls, tool invocations) will be sent to your LangSmith project.
node index.js
You’ll be able to visualize the entire flow of your agent within the LangSmith UI, identifying exactly where issues occur and what the LLM’s reasoning was at each step. This visual debugging is incredibly powerful.
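To make individual runs easier to find in the LangSmith UI, you can also attach a run name, tags, and metadata through the standard LangChain runnable config when invoking the graph. A small sketch, replacing the invoke call in index.js; the name, tags, and metadata values here are purely illustrative:

// Label the run so it is easy to locate and filter in LangSmith.
// This replaces the original agentBuilder.invoke({ messages }) call.
const result = await agentBuilder.invoke(
  { messages },
  {
    runName: "arithmetic-agent-demo", // illustrative run name
    tags: ["tutorial", "langgraph-js"], // illustrative tags
    metadata: { example: "multi-step-arithmetic" },
  }
);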
Let’s re-run the slightly more complex query, “Add 3 and 4 then multiply it by 10 and then divide the result by 7,” after setting up LangSmith:
node index.js
With LangSmith enabled, even if errors occur, you’ll see a detailed trace of the agent’s attempt to execute the multi-step operation. For instance, you might still see initial Error: Received tool input did not match expected schema messages, but LangSmith will show the subsequent attempts and the eventual successful chain of operations.
The final output for the correctly processed “Add 3 and 4 then multiply it by 10 and then divide the result by 7” query will show the complete trace of successful tool calls, leading to the final answer of 10:
C:\vscode-node-workspace\langchain-node-js>node index.js
[
  HumanMessage { /* ... */ },
  AIMessage {
    "id": "run-dade0200-185c-467c-8842-630668b5fde3",
    "content": [
      { "functionCall": { "name": "divide", "args": "[Object]" } }
    ],
    // ...
  },
  ToolMessage { /* Error initially, but LangSmith helps debug why */ },
  AIMessage {
    "id": "run-f36c1b8a-27b7-4e19-8f0d-3c83d8ee1aaf",
    "content": [
      { "functionCall": { "name": "add", "args": "[Object]" } },
      { "functionCall": { "name": "multiply", "args": "[Object]" } },
      { "functionCall": { "name": "divide", "args": "[Object]" } }
    ],
    "tool_calls": [
      { "name": "add", "args": { "a": 3, "b": 4 } /* ... */ },
      { "name": "multiply", "args": { "b": 10, "a": "[Object]" } /* ... */ },
      { "name": "divide", "args": { "b": 7, "a": "[Object]" } /* ... */ }
    ],
    // ...
  },
  ToolMessage { "content": "7", "name": "add" /* ... */ },
  ToolMessage { "content": "70", "name": "multiply" /* ... */ },
  ToolMessage { "content": "10", "name": "divide" /* ... */ },
  AIMessage {
    "id": "run-e668350f-7622-42bd-9f3b-ce722f6f9b10",
    "content": "The answer is 10.\n",
    // ...
  }
]
This output, coupled with the visual traces in LangSmith, clearly demonstrates how the agent first adds 3 and 4 to get 7, then multiplies 7 by 10 to get 70, and finally divides 70 by 7 to arrive at the correct answer of 10. LangSmith effectively allows you to “see” the internal reasoning and tool orchestration of your LangGraph agents, making development and debugging much more efficient.
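If you want a similar (if far less detailed) summary directly in your console, here is a minimal sketch that walks result.messages and prints one line per step, using only fields visible in the logs above; _getType() is the standard LangChain message-type accessor:

// Print a compact, human-readable trace of the run.
for (const msg of result.messages) {
  const type = msg._getType(); // "human", "ai", or "tool"
  if (type === "tool") {
    console.log(`tool ${msg.name} -> ${msg.content}`);
  } else if (type === "ai" && msg.tool_calls?.length) {
    console.log(`ai requested: ${msg.tool_calls.map((t) => t.name).join(", ")}`);
  } else {
    console.log(`${type}: ${JSON.stringify(msg.content)}`);
  }
}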
Code: https://github.com/shdhumale/langgraph-node-js.git