LangGraph, a powerful library for building stateful, multi-actor applications with LLMs, becomes even more versatile when integrated with external services. This blog post explores `langgraph-restapi`, a GitHub repository that provides a seamless way to expose your LangGraph agents as REST APIs, specifically tailored for use with Google Gemini.
What is langgraph-restapi?
The `langgraph-restapi` repository offers a robust REST API wrapper for LangGraph agents, leveraging the power of BentoML and FastAPI. This integration allows your LangGraph-powered agents to interact with other applications and systems through standard RESTful endpoints, supporting both synchronous and streaming interaction modes.
Key Features
- RESTful Endpoints: Easily expose your LangGraph agents as web services accessible via HTTP requests (a sketch of such a call follows this list).
- Synchronous and Streaming Modes: Supports both immediate responses and real-time streaming of agent outputs, providing flexibility for different application needs.
- Stateful Agents with Threaded Memory: Maintain conversation context and agent state across multiple interactions, enabling more natural and coherent dialogues.
- Flexible Tool Integration: Seamlessly integrate Large Language Models (LLMs) such as Google Gemini together with custom-built tools to extend the agent’s capabilities.
- Deployment Readiness: Built with BentoML and Docker, the solution is designed for easy deployment to various cloud platforms, making it production-ready.
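Once the service is running (see Getting Started below), calling an agent is an ordinary HTTP request. The snippet below is a rough illustration only: the route name, payload shape, and port are placeholders, since the actual BentoML service definition in the repository determines them.

```python
import requests

# Hypothetical client call to the deployed agent service.
# "/invoke" and the "message" field are placeholders; 3000 is simply
# BentoML's default serving port.
resp = requests.post(
    "http://localhost:3000/invoke",
    json={"message": "Get all objects"},
)
print(resp.json())
```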
Getting Started
To begin working with `langgraph-restapi`, you’ll need Python 3.10 or newer, and optionally Docker for containerized deployment. Access to an LLM API (such as Google Gemini) is also required.
Here are the basic steps to get started:
- Clone the Repository:

```bash
git clone https://github.com/shdhumale/langgraph-restapi.git
cd langgraph-restapi
```
- Install Requirements:

```bash
pip install -r requirements.txt
```
- Configure Environment Variables:

Copy the example environment file and update it with your LLM API keys and other configurations:

```bash
cp .env.example .env
```

Edit the `.env` file to provide your specific environment variables.
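At minimum, the scripts in this post read the Gemini key via `python-dotenv`, so a minimal `.env` could look like this (any additional variables in `.env.example` are repository-specific):

```bash
# .env — minimal contents assumed by the scripts below
GOOGLE_API_KEY=your-google-api-key
```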
Then run the main script:

```bash
python langgraph-restapi.py
```
This Python script demonstrates how to create an LLM-powered agent using LangGraph and Google’s Gemini model to interact with a REST API (https://api.restful-api.dev/objects) in a conversational way.
Let’s break it down section by section:
1. Import Dependencies
```python
import requests, os, json, re
from langgraph.prebuilt import create_react_agent
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from typing import List, Dict, Any
```
- `requests`: For HTTP calls to the REST API.
- `dotenv`: To load the `.env` file containing the Google API key.
- `ChatGoogleGenerativeAI`: LLM interface to Google’s Gemini via LangChain.
- `create_react_agent`: A prebuilt agent from LangGraph using the ReAct framework (Reasoning + Acting).
- `typing`: For better type hinting.
2. Load Google API Key
```python
load_dotenv()
api_key = os.getenv("GOOGLE_API_KEY")
```

Loads the API key from a `.env` file.
3. Initialize LLM (Gemini 2.5 Flash)
```python
llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash-preview-05-20",
    google_api_key=api_key
)
```
Uses Gemini 2.5 Flash (fast inference) to act as the language model for the agent.
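Before wiring the model into an agent, you can sanity-check the key and model name with a direct call. This is just a quick smoke test, not part of the original script:

```python
# One-off prompt straight to Gemini; prints the model's reply text.
reply = llm.invoke("Say hello in one short sentence.")
print(reply.content)
```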
4. Define Tool Functions (REST API Wrappers)
Each function below maps to a tool the agent can use (a sketch of one wrapper follows the table):

| Function | Purpose |
|---|---|
| `get_all_objects()` | Fetches all items |
| `get_objects_by_ids(ids)` | Fetches specific objects |
| `get_single_object(object_id)` | Fetches one object |
| `add_object(object_data_json_str)` | Creates a new object |
| `update_object(object_id, data)` | Full update (PUT) |
| `patch_object(object_id, data)` | Partial update (PATCH) |
| `delete_object(object_id)` | Deletes object by ID |
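These are plain Python functions built on `requests`. As an illustration, a minimal sketch of one wrapper might look like the following; the base URL comes from the post, but the exact bodies in `langgraph-restapi.py` may differ:

```python
import requests

BASE_URL = "https://api.restful-api.dev/objects"

def get_single_object(object_id: str) -> str:
    """Fetch one object by ID and return its JSON as text the agent can read."""
    print(f"Fetching object with ID: {object_id}")
    response = requests.get(f"{BASE_URL}/{object_id}")
    if response.status_code == 200:
        return response.text
    return f"Error: request failed with status {response.status_code}"
```

The function name and docstring matter: LangGraph turns each callable into a tool schema, and the model relies on those descriptions when deciding which function to call.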
5. Create LangGraph Agent
```python
agent = create_react_agent(
    model=llm,
    tools=[...],
    prompt="You are an assistant that manages objects via a REST API."
)
```
- Creates a ReAct-based agent.
- The agent can reason through text and call tools (functions) as needed based on user messages.
6. Example Agent Usage
a. Get Multiple Objects
```python
agent.invoke({
    "messages": [{"role": "user", "content": "Get objects for IDs: {1,2,3}"}]
})
```
b. Get Single Object
```python
agent.invoke({
    "messages": [{"role": "user", "content": "Get single objects for ID: {1}"}]
})
```
c. Add a New Object
```python
add_object_input = '{"name": "MyTestObject", ...}'
agent.invoke({
    "messages": [{"role": "user", "content": f"Add a new object with data: {add_object_input}"}]
})
```
- Extracts the newly created object’s ID from the agent response.
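The extraction itself is not shown above. Since the script imports `re`, one plausible sketch is to regex the agent’s final message for the 32-hex-character IDs this API returns; `add_result` is a hypothetical name for the value returned by `agent.invoke(...)`, and the real pattern may differ:

```python
# Hypothetical: pull the created object's ID out of the agent's last message.
final_text = add_result["messages"][-1].content
match = re.search(r"\b[0-9a-f]{32}\b", final_text)
new_object_id = match.group(0) if match else None
```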
d. Update the Object (if created)
```python
update_prompt = f"Update the object with ID: ..."
agent.invoke({"messages": [{"role": "user", "content": update_prompt}]})
```
e. Delete the Object
```python
agent.invoke({
    "messages": [{"role": "user", "content": f"delete the details of object with ID: ..."}]
})
```
f. Get All Objects
```python
agent.invoke({
    "messages": [{"role": "user", "content": "Get all objects"}]
})
```
How It Works
- You can talk to the REST API via natural language.
- LangGraph’s `create_react_agent` uses the LLM plus tool selection to determine the best function to invoke based on the prompt.
- Gemini (via `langchain_google_genai`) is the brain of the system.
Example Prompt to Agent
User: "Add a new object with data: {"name": "Phone", "data": {"year": 2024}}"

Agent: Calls `add_object()` and returns the new object ID.
Final Notes
- This is a powerful example of combining LLMs with APIs, especially useful in building AI-powered back office agents, API testers, or data managers.
- You can replace the Gemini model or API endpoint to suit your needs.
Output:

```text
(env) C:\vscode-python-workspace\langgraph-restapi>python langgraph-restapi.py
Fetching object with ID: 1
Fetching object with ID: 2
Fetching object with ID: 3
--- Result of getting mutiple ID objects: {'messages': [HumanMessage(content='Get objects for IDs: {1,2,3}', additional_kwargs={}, response_metadata={}, id='29277f97-4f44-47db-906a-5f34d327ff7b'), AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_objects_by_ids', 'arguments': '{"ids": ["1", "2", "3"]}'}}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--caece8a3-093f-42eb-bfbc-b525c79465d2-0', tool_calls=[{'name': 'get_objects_by_ids', 'args': {'ids': ['1', '2', '3']}, 'id': 'c053881a-56f1-4c96-a969-ea5b3109f5f4', 'type': 'tool_call'}], usage_metadata={'input_tokens': 608, 'output_tokens': 25, 'total_tokens': 767, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 134}}), ToolMessage(content='[{"id": "1", "name": "Google Pixel 6 Pro", "data": {"color": "Cloudy White", "capacity": "128 GB"}}, {"id": "2", "name": "Apple iPhone 12 Mini, 256GB, Blue", "data": null}, {"id": "3", "name": "Apple iPhone 12 Pro Max", "data": {"color": "Cloudy White", "capacity GB": 512}}]', name='get_objects_by_ids', id='e31ca3ad-25f3-47b4-9500-beb487ba1b53', tool_call_id='c053881a-56f1-4c96-a969-ea5b3109f5f4'), AIMessage(content='I have retrieved the following objects:\n* Object with ID 1: "Google Pixel 6 Pro" in "Cloudy White" with "128 GB" capacity.\n* Object with ID 2: "Apple iPhone 12 Mini, 256GB, Blue".\n* Object with ID 3: "Apple iPhone 12 Pro Max" in "Cloudy White" with "512 GB" capacity.', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--5c390692-b7ba-44fb-8cf0-854088e5ee06-0', usage_metadata={'input_tokens': 755, 'output_tokens': 93, 'total_tokens': 942, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 94}})]}
--- Result of getting single objects: {'messages': [HumanMessage(content='Get single objects for ID: {1}', additional_kwargs={}, response_metadata={}, id='0e2e4532-c15b-441f-be07-e6b7dabd1d6d'), AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_single_object', 'arguments': '{"object_id": "1"}'}}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--b6b79445-3e60-4ba7-b489-32d86f11d73d-0', tool_calls=[{'name': 'get_single_object', 'args': {'object_id': '1'}, 'id': '7a0b0ba4-da37-4eb2-b890-6a55027e6050', 'type': 'tool_call'}], usage_metadata={'input_tokens': 605, 'output_tokens': 19, 'total_tokens': 684, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 60}}), ToolMessage(content='{"id": "1", "name": "Google Pixel 6 Pro", "data": {"color": "Cloudy White", "capacity": "128 GB"}}', name='get_single_object', id='1c0956bc-d014-4b32-bae6-1a90c14a59ae', tool_call_id='7a0b0ba4-da37-4eb2-b890-6a55027e6050'), AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_single_object', 'arguments': '{"object_id": "1"}'}}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--0945b6ad-54ba-41bd-9486-99353e906bc1-0', tool_calls=[{'name': 'get_single_object', 'args': {'object_id': '1'}, 'id': '15faacb2-73d5-4b0a-b444-f94912605fc9', 'type': 'tool_call'}], usage_metadata={'input_tokens': 671, 'output_tokens': 19, 'total_tokens': 758, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 68}}), ToolMessage(content='{"id": "1", "name": "Google Pixel 6 Pro", "data": {"color": "Cloudy White", "capacity": "128 GB"}}', name='get_single_object', id='78afd91d-8f2c-491c-ae97-53a49302e9b0', tool_call_id='15faacb2-73d5-4b0a-b444-f94912605fc9'), AIMessage(content='The object with ID 1 is: {"data": {"capacity": "128 GB", "color": "Cloudy White"}, "id": "1", "name": "Google Pixel 6 Pro"}', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--6798fcf3-59b1-4266-a4d9-1d476758799b-0', usage_metadata={'input_tokens': 737, 'output_tokens': 44, 'total_tokens': 862, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 81}})]}
response add_object============= ff8081819782e69e0197a2461eda5668
--- Result of adding object (ID): ff8081819782e69e0197a2461eda5668
--- Result of updating object: {'messages': [HumanMessage(content='Update the object with ID: \'ff8081819782e69e0197a2461eda5668\' with the following data: {"name": "MyTestObject updated", "data": {"year": 2024 , "price": 99.99, "color": "blue updated"}}', additional_kwargs={}, response_metadata={}, id='8cc3083f-3537-475d-b9b6-42dbeef998fa'), AIMessage(content='', additional_kwargs={'function_call': {'name': 'update_object', 'arguments': '{"data": {"name": "MyTestObject updated", "data": {"price": 99.99, "year": 2024.0, "color": "blue updated"}}, "object_id": "ff8081819782e69e0197a2461eda5668"}'}}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--1e8f4644-36c6-483f-aef5-1b0b7966dde1-0', tool_calls=[{'name': 'update_object', 'args': {'data': {'name': 'MyTestObject updated', 'data': {'price': 99.99, 'year': 2024.0, 'color': 'blue updated'}}, 'object_id': 'ff8081819782e69e0197a2461eda5668'}, 'id': '4c6fb529-f43e-4a91-b91b-2bb0fac718e3', 'type': 'tool_call'}], usage_metadata={'input_tokens': 676, 'output_tokens': 85, 'total_tokens': 851, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 90}}), ToolMessage(content='{"id": "ff8081819782e69e0197a2461eda5668", "name": "MyTestObject updated", "updatedAt": "2025-06-24T14:09:57.859+00:00", "data": {"price": 99.99, "year": 2024.0, "color": "blue updated"}}', name='update_object', id='4aa1c421-f999-4b0e-b338-e0b4c7e28bf7', tool_call_id='4c6fb529-f43e-4a91-b91b-2bb0fac718e3'), AIMessage(content='The object with ID \'ff8081819782e69e0197a2461eda5668\' has been successfully updated. The updated details are:\n- ID: ff8081819782e69e0197a2461eda5668\n- Name: MyTestObject updated\n- Data: {"year": 2024, "price": 99.99, "color": "blue updated"}\n- Last updated at: 2025-06-24T14:09:57.859+00:00', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--ace57c6a-325e-48e7-8826-4cc1d3f55337-0', usage_metadata={'input_tokens': 877, 'output_tokens': 152, 'total_tokens': 1176, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 147}})]}
--- Result of after deleting objects: {'messages': [HumanMessage(content='delete the details of object with ID: ff8081819782e69e0197a2461eda5668', additional_kwargs={}, response_metadata={}, id='0f5f89af-e34c-4750-b16c-bf7b0bffac2b'), AIMessage(content='', additional_kwargs={'function_call': {'name': 'delete_object', 'arguments': '{"object_id": "ff8081819782e69e0197a2461eda5668"}'}}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--dbf38999-7d36-4e40-a8d5-f42689adf8c3-0', tool_calls=[{'name': 'delete_object', 'args': {'object_id': 'ff8081819782e69e0197a2461eda5668'}, 'id': '192bef0f-8fba-4727-8176-cc96e1e2f34c', 'type': 'tool_call'}], usage_metadata={'input_tokens': 633, 'output_tokens': 45, 'total_tokens': 779, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 101}}), ToolMessage(content='200', name='delete_object', id='3655c260-0738-4ee3-8034-d18ed47e3602', tool_call_id='192bef0f-8fba-4727-8176-cc96e1e2f34c'), AIMessage(content="The object with ID 'ff8081819782e69e0197a2461eda5668' has been successfully deleted.", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--ff641837-d12c-4dde-8541-ad0062c6201d-0', usage_metadata={'input_tokens': 695, 'output_tokens': 40, 'total_tokens': 784, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 49}})]}
--- Result of getting all objects: {'messages': [HumanMessage(content='Get all objects', additional_kwargs={}, response_metadata={}, id='40ca6531-7560-48bf-846c-8478444839a5'), AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_all_objects', 'arguments': '{}'}}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--4774bf0f-fce9-44b6-8d93-36027fb56b50-0', tool_calls=[{'name': 'get_all_objects', 'args': {}, 'id': '2705043f-519f-41cd-a940-ecbd3018b9c7', 'type': 'tool_call'}], usage_metadata={'input_tokens': 599, 'output_tokens': 12, 'total_tokens': 660, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 49}}), ToolMessage(content='[{"id": "1", "name": "Google Pixel 6 Pro", "data": {"color": "Cloudy White", "capacity": "128 GB"}}, {"id": "2", "name": "Apple iPhone 12 Mini, 256GB, Blue", "data": null}, {"id": "3", "name": "Apple iPhone 12 Pro Max", "data": {"color": "Cloudy White", "capacity GB": 512}}, {"id": "4", "name": "Apple iPhone 11, 64GB", "data": {"price": 389.99, "color": "Purple"}}, {"id": "5", "name": "Samsung Galaxy Z Fold2", "data": {"price": 689.99, "color": "Brown"}}, {"id": "6", "name": "Apple AirPods", "data": {"generation": "3rd", "price": 120}}, {"id": "7", "name": "Apple MacBook Pro 16", "data": {"year": 2019, "price": 1849.99, "CPU model": "Intel Core i9", "Hard disk size": "1 TB"}}, {"id": "8", "name": "Apple Watch Series 8", "data": {"Strap Colour": "Elderberry", "Case Size": "41mm"}}, {"id": "9", "name": "Beats Studio3 Wireless", "data": {"Color": "Red", "Description": "High-performance wireless noise cancelling headphones"}}, {"id": "10", "name": "Apple iPad Mini 5th Gen", "data": {"Capacity": "64 GB", "Screen size": 7.9}}, {"id": "11", "name": "Apple iPad Mini 5th Gen", "data": {"Capacity": "254 GB", "Screen size": 7.9}}, {"id": "12", "name": "Apple iPad Air", "data": {"Generation": "4th", "Price": "419.99", "Capacity": "64 GB"}}, {"id": "13", "name": "Apple iPad Air", "data": {"Generation": "4th", "Price": "519.99", "Capacity": "256 GB"}}]', name='get_all_objects', id='2a876e6b-6d9e-4722-ba6b-607a91525ade', tool_call_id='2705043f-519f-41cd-a940-ecbd3018b9c7'), AIMessage(content='Here are all the objects:\n\n* **Google Pixel 6 Pro** (ID: 1) - Data: {"capacity": "128 GB", "color": "Cloudy White"}\n* **Apple iPhone 12 Mini, 256GB, Blue** (ID: 2) - No additional data.\n* **Apple iPhone 12 Pro Max** (ID: 3) - Data: {"capacity GB": 512, "color": "Cloudy White"}\n* **Apple iPhone 11, 64GB** (ID: 4) - Data: {"color": "Purple", "price": 389.99}\n* **Samsung Galaxy Z Fold2** (ID: 5) - Data: {"color": "Brown", "price": 689.99}\n* **Apple AirPods** (ID: 6) - Data: {"generation": "3rd", "price": 120}\n* **Apple MacBook Pro 16** (ID: 7) - Data: {"CPU model": "Intel Core i9", "Hard disk size": "1 TB", "price": 1849.99, "year": 2019}\n* **Apple Watch Series 8** (ID: 8) - Data: {"Case Size": "41mm", "Strap Colour": "Elderberry"}\n* **Beats Studio3 Wireless** (ID: 9) - Data: {"Color": "Red", "Description": "High-performance wireless noise cancelling headphones"}\n* **Apple iPad Mini 5th Gen** (ID: 10) - Data: {"Capacity": "64 GB", "Screen size": 7.9}\n* **Apple iPad Mini 5th Gen** (ID: 11) - Data: {"Capacity": "254 GB", "Screen size": 7.9}\n* **Apple iPad Air** (ID: 12) - Data: {"Capacity": "64 GB", "Generation": "4th", "Price": "419.99"}\n* **Apple iPad Air** (ID: 13) - Data: {"Capacity": "256 GB", "Generation": "4th", "Price": "519.99"}', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'models/gemini-2.5-flash-preview-05-20', 'safety_ratings': []}, id='run--7b3af20c-eb87-4f62-9eea-e0349e865184-0', usage_metadata={'input_tokens': 1141, 'output_tokens': 520, 'total_tokens': 1788, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 127}})]}

(env) C:\vscode-python-workspace\langgraph-restapi>
```

Next, let’s run `siddhulangraphsingleapi.py`.
This Python code defines and runs a simple data flow graph using LangGraph, which integrates with LangChain. The purpose of this code is to call a REST API, store its result, and print out part of the response. Here’s a breakdown of what each part does:
Imports
```python
import requests
from langgraph.graph import StateGraph, END
from langchain_core.runnables import RunnableLambda
from typing import TypedDict, Any
```
- `requests`: Used to make the HTTP API call.
- `StateGraph`, `END`: LangGraph tools to define a graph and its endpoint.
- `RunnableLambda`: Converts a normal function into a runnable step in LangGraph.
- `TypedDict`: Defines structured types for dictionary-like objects.
Defining Shared State
```python
class GraphState(TypedDict):
    api_response: dict[str, Any]
```
- This class defines the shape of the graph’s shared state.
- It has a single key: `api_response`, which is a dictionary (expected to hold the JSON response from the API).
API Call Function
```python
def call_rest_api(state: GraphState) -> GraphState:
    response = requests.get("https://jsonplaceholder.typicode.com/posts/1")
    state = GraphState(api_response={})  # Re-initialize state
    if response.status_code == 200:
        state["api_response"] = response.json()
    else:
        state["api_response"] = {"error": "API call failed"}
    return state
```
- Makes a `GET` request to a fake API (`jsonplaceholder.typicode.com`).
- If successful, stores the response in `state["api_response"]`; if it fails, stores an error message.
- Note that the function re-initializes the state first, so anything passed in is discarded.
Wrapping in a LangGraph Node
```python
call_api_node = RunnableLambda(call_rest_api)
```
- Wraps the function so it can be added to a LangGraph as a runnable node.
Building the Graph
```python
builder = StateGraph(GraphState)
builder.add_node("call_api", call_api_node)
builder.set_entry_point("call_api")
builder.add_edge("call_api", END)
```
- `StateGraph(GraphState)`: Initializes a graph with the defined state structure.
- `add_node`: Adds the API call function to the graph.
- `set_entry_point`: Sets `"call_api"` as the first step.
- `add_edge`: Connects the `"call_api"` node to `END`, meaning it’s the only step.
Compiling and Running the Graph
```python
graph = builder.compile()

output = graph.invoke(GraphState(api_response={}))
print("Title from call_api:", output.get("api_response", {}).get("title"))
print("API Response:", output)
```
- `compile()`: Finalizes the graph.
- `invoke(...)`: Runs the graph with an empty `api_response` dictionary.
- `print(...)`: Displays the `title` field and the full API response.
Example Output
If the API call is successful, you might see something like:
```text
Title from call_api: sunt aut facere repellat provident occaecati excepturi optio reprehenderit
API Response: {'api_response': {'userId': 1, 'id': 1, 'title': '...', 'body': '...'}}
```
Summary
| Part | Purpose |
|---|---|
| `GraphState` | Defines the shape of the graph’s shared data. |
| `call_rest_api()` | Function to fetch data from a REST API. |
| `RunnableLambda` | Wraps the function for use in LangGraph. |
| `StateGraph` | Builds a flowchart-like structure for processing data. |
| `invoke()` | Runs the graph and collects the final state. |
This is a minimal example of how to integrate an external API into a LangGraph workflow.
Output of `python siddhulangraphsingleapi.py`:

```text
(env) C:\vscode-python-workspace\langgraph-restapi>python siddhulangraphsingleapi.py
Title from call_api: sunt aut facere repellat provident occaecati excepturi optio reprehenderit
API Response: {'api_response': {'userId': 1, 'id': 1, 'title': 'sunt aut facere repellat provident occaecati excepturi optio reprehenderit', 'body': 'quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto'}}
```

Finally, let’s run `siddhulangraphmultpileapi.py`.
This Python code demonstrates how to build a simple two-step workflow using LangGraph that consumes data from two REST APIs in sequence. Let’s go step by step.
Imports
```python
import requests
from langgraph.graph import StateGraph, END
from langchain_core.runnables import RunnableLambda
from typing import TypedDict, Any
from typing import NotRequired
```
- `requests`: Used to make HTTP GET calls to REST APIs.
- `StateGraph`: Defines the flow of computation in a LangGraph.
- `END`: Represents the endpoint of a graph.
- `RunnableLambda`: Wraps a function to be used as a LangGraph step.
- `TypedDict`, `Any`, `NotRequired`: Used to define and type the state passed between steps. (Note: `typing.NotRequired` was added in Python 3.11; on Python 3.10, import it from `typing_extensions` instead.)
Shared Graph State
```python
class GraphState(TypedDict):
    api_response: dict[str, Any]                # Required
    api2_response: NotRequired[dict[str, Any]]  # Optional
```
This defines the shape of the state passed between graph steps:
- `api_response`: Stores the response from the first API call.
- `api2_response`: Optionally stores the response from the second API if a condition is met.
Node 1: Call First API
```python
def call_first_api(state: GraphState) -> GraphState:
    response = requests.get("https://jsonplaceholder.typicode.com/posts/1")
    new_state = state.copy()
    if response.status_code == 200:
        new_state["api_response"] = response.json()
    else:
        new_state["api_response"] = {"error": "API call failed"}
    return new_state
```
- Calls a mock API and fetches post #1.
- Adds the result to `api_response` in the state.
Node 2: Call Second API (Conditional)
```python
def call_second_api(state: GraphState) -> GraphState:
    title = state.get('api_response', {}).get('title')
    print(f"The fetched title is: {title}")
    new_state = state.copy()
    if title and "sunt aut facere repellat provident occaecati excepturi optio reprehenderit".lower() in title.lower():
        response = requests.get("https://jsonplaceholder.typicode.com/posts/2")
        if response.status_code == 200:
            new_state["api2_response"] = response.json()
        else:
            new_state["api2_response"] = {"error": "API 2 failed"}
    return new_state
```
- Extracts the title from the first API response.
- If the title contains a specific phrase, it calls a second API (post #2).
- Adds the result to `api2_response` in the state.
Build the LangGraph
```python
step_1 = RunnableLambda(call_first_api)
step_2 = RunnableLambda(call_second_api)

builder = StateGraph(GraphState)
builder.add_node("call_api_1", step_1)
builder.add_node("call_api_2", step_2)

builder.set_entry_point("call_api_1")
builder.add_edge("call_api_1", "call_api_2")
builder.add_edge("call_api_2", END)

graph = builder.compile()
```
This defines a graph with two steps:
- Start at `call_api_1`
- Then run `call_api_2`
- Then end
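Here the branching logic lives inside `call_second_api` itself. LangGraph can also express the same decision at the graph level with conditional edges; the sketch below is an alternative design, not what the script does, and it would replace the unconditional `add_edge("call_api_1", "call_api_2")`:

```python
def should_call_second(state: GraphState) -> str:
    # Route on the first API's title: "yes" -> call the second API, "no" -> stop.
    title = state.get("api_response", {}).get("title") or ""
    return "yes" if "sunt aut facere" in title.lower() else "no"

builder.add_conditional_edges(
    "call_api_1",
    should_call_second,
    {"yes": "call_api_2", "no": END},
)
```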
Run the Graph
```python
output = graph.invoke(GraphState(api_response={}))
print("Title from API 1:", output.get("api_response", {}).get("title"))
print("Response from API 2:", output.get("api2_response"))
```
- Executes the LangGraph from an empty initial state.
- Prints the title fetched from API 1.
- Prints the response from API 2 (if fetched).
Summary
| Step | What It Does |
|---|---|
| 1 | Calls `https://jsonplaceholder.typicode.com/posts/1` and stores the response in `api_response`. |
| 2 | If the title from API 1 matches a specific string, it calls API 2 (`/posts/2`) and stores the result in `api2_response`. |
| 3 | LangGraph manages the flow between steps using nodes and edges. |
Output of `python siddhulangraphmultpileapi.py`:

```text
(env) C:\vscode-python-workspace\langgraph-restapi>python siddhulangraphmultpileapi.py
The fetched title is: sunt aut facere repellat provident occaecati excepturi optio reprehenderit
Title from API 1: sunt aut facere repellat provident occaecati excepturi optio reprehenderit
Response from API 2: {'userId': 1, 'id': 2, 'title': 'qui est esse', 'body': 'est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla'}
```

This project is licensed under the MIT License, making it open for use and contribution.
By using `langgraph-restapi`, developers can unlock new possibilities for integrating intelligent agents into their existing systems, building more dynamic and responsive applications with Google Gemini and LangGraph.
For more details and to explore the code, visit the langgraph-restapi GitHub repository.