Friday, July 11, 2025

Getting Started with Google’s Agent Development Kit (ADK) Python

The Google Agent Development Kit (ADK) provides a robust framework for building and deploying AI agents. This blog post will guide you through the process of setting up your development environment, creating your first agent, and exploring its functionalities, including integrating with external APIs.

Prerequisites: Python Installation

Before we begin, ensure you have Python installed on your system. If not, you can follow the detailed installation steps provided in the official ADK documentation: https://google.github.io/adk-docs/get-started/installation/
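To quickly confirm your Python setup before continuing, you can check the interpreter and pip versions from a terminal (the exact version numbers will of course differ on your machine):

python --version
python -m pip --version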

Step 1: Setting up a Virtual Environment

It’s a best practice to use a virtual environment for your Python projects to manage dependencies effectively and avoid conflicts.

Open your command prompt or terminal and run the following command to create a virtual environment named .venv:

python -m venv .venv

Next, activate the virtual environment. The command varies slightly depending on your operating system:

For Windows (Command Prompt):

.venv\Scripts\activate.bat
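If you are not using the Windows Command Prompt, the usual venv activation commands are:

For Windows (PowerShell):

.venv\Scripts\Activate.ps1

For macOS/Linux (bash/zsh):

source .venv/bin/activate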

Step 2: Installing Google ADK

With your virtual environment activated, install the Google ADK package using pip:

pip install google-adk

To confirm that the installation was successful and to check the installed version and its dependencies, execute the following command:

pip show google-adk

You should see output similar to this, indicating a successful installation:

(.venv) C:\vscode-python-workspace\adkagent>pip show google-adk
Name: google-adk
Version: 1.6.1
Summary: Agent Development Kit
Author:
Author-email: Google LLC <googleapis-packages@google.com>
License:
Location: C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages
Requires: anyio, authlib, click, fastapi, google-api-python-client, google-cloud-aiplatform, google-cloud-secret-manager, google-clo
ud-speech, google-cloud-storage, google-genai, graphviz, mcp, opentelemetry-api, opentelemetry-exporter-gcp-trace, opentelemetry-sdk
, pydantic, python-dateutil, python-dotenv, PyYAML, requests, sqlalchemy, starlette, tenacity, typing-extensions, tzlocal, uvicorn,
watchdog, websockets
Required-by:

Step 3: Exploring ADK CLI Commands

Now that ADK is installed, let’s explore the functionalities it provides via its command-line interface (CLI). Run the adk command:

(.venv) C:\vscode-python-workspace\adkagent>adk

This will display the available commands and options:

Usage: adk [OPTIONS] COMMAND [ARGS]...
 
  Agent Development Kit CLI tools.
 
Options:
  --version  Show the version and exit.
  --help     Show this message and exit.
 
Commands:
  api_server  Starts a FastAPI server for agents.
  create      Creates a new app in the current folder with prepopulated agent template.
  deploy      Deploys agent to hosted environments.
  eval        Evaluates an agent given the eval sets.
  run         Runs an interactive CLI for a certain agent.
  web         Starts a FastAPI server with Web UI for agents.

Step 4: Creating Your First ADK Project

Let’s create a new agent project using the adk create command:

(.venv) C:\vscode-python-workspace\adkagent>adk create app

The CLI will prompt you to choose a model and a backend for your agent:

Choose a model for the root agent:
1. gemini-2.0-flash-001
2. Other models (fill later)
Choose model (1, 2): 1
1. Google AI
2. Vertex AI
Choose a backend (1, 2): 1

You’ll then be asked to provide your Google API Key. If you don’t have one, you can create it in AI Studio: https://aistudio.google.com/apikey

Don't have API Key? Create one in AI Studio: https://aistudio.google.com/apikey
 
Enter Google API key: <Your_google_api_key>

Upon successful creation, you’ll see a confirmation message indicating the new files generated:

Agent created in C:\vscode-python-workspace\adkagent\app:
- .env
- __init__.py
- agent.py

This command creates a new folder named app (or whatever name you provided). Inside this folder, you’ll find:

  • .env: Contains your Google API key and model backend configuration.
  • agent.py: The main file where your agent’s logic resides.
  • __init__.py: Used for package initialization and agent imports.
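For reference, the generated files typically look something like the sketch below; treat it as illustrative, since the exact template contents depend on your ADK version and the choices you made during adk create:

# app/.env
GOOGLE_GENAI_USE_VERTEXAI=FALSE
GOOGLE_API_KEY=<Your_google_api_key>

# app/__init__.py
from . import agent

# app/agent.py
from google.adk.agents import Agent

root_agent = Agent(
    model='gemini-2.0-flash-001',
    name='root_agent',
    description='A helpful assistant for user questions.',
    instruction='Answer user questions to the best of your knowledge',
)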

Step 5: Running Your Agent in CLI Mode

Navigate out of the app folder (if you are inside it) and run the default agent using the adk run command:

(.venv) C:\vscode-python-workspace\adkagent>adk run

You’ll see log messages indicating the agent’s startup. You can then interact with your agent directly in the command line:

Log setup complete: C:\Users\ADMINI~1\AppData\Local\Temp\agents_log\agent.20250711_182030.log
To access latest log: tail -F C:\Users\ADMINI~1\AppData\Local\Temp\agents_log\agent.latest.log
C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\cli\cli.py:134: UserWarning: [EXPERIMENTAL] InMemoryCredentia
lService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking cha
nges at any time.
  credential_service = InMemoryCredentialService()
C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\auth\credential_service\in_memory_credential_service.py:33: U
serWarning: [EXPERIMENTAL] BaseCredentialService: This feature is experimental and may change or be removed in future versions witho
ut notice. It may introduce breaking changes at any time.
  super().__init__()
Running agent root_agent, type exit to exit.
[user]: what is the capital of india
[root_agent]: The capital of India is New Delhi.
 
[user]: exit
 
(.venv) C:\vscode-python-workspace\adkagent>

Step 6: Running Your Agent with a Web UI

ADK also allows you to run your agent with a web interface. To do this, use the adk web command, specifying your agent’s directory (app in this case):

(.venv) C:\vscode-python-workspace\adkagent>adk web app

The server will start, and you’ll get a URL to access the web UI, typically http://localhost:8000:

C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\cli\fast_api.py:354: UserWarning: [EXPERIMENTAL] InMemoryCred
entialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breakin
g changes at any time.
  credential_service = InMemoryCredentialService()
C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\auth\credential_service\in_memory_credential_service.py:33: U
serWarning: [EXPERIMENTAL] BaseCredentialService: This feature is experimental and may change or be removed in future versions witho
ut notice. It may introduce breaking changes at any time.
  super().__init__()
INFO:      Started server process [18488]
INFO:      Waiting for application startup.
 
+-----------------------------------------------------------------------------+
| ADK Web Server started                                                      |
|                                                                             |
+-----------------------------------------------------------------------------+
 
INFO:      Application startup complete.
INFO:      Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)

Open your browser and navigate to http://localhost:8000. You will be prompted to select the app you want to run. Choose “app” to proceed.

Once the web UI is loaded, you can explore various options. For instance, clicking on “Call LLM” will present you with an interface to interact directly with the underlying Large Language Model.

Step 7: Consuming a REST API with Google ADK

A powerful feature of ADK is its ability to integrate with external tools and APIs. Let’s demonstrate this by having our agent consume a REST API.

First, you’ll need the requests Python package to make HTTP calls. Install it within your virtual environment:

(.venv) C:\vscode-python-workspace\adkagent\app>pip install requests

You should see output indicating that the requirements are already satisfied or that the package is being installed:

Requirement already satisfied: requests in c:\vscode-python-workspace\adkagent\.venv\lib\site-packages (2.32.4)
 
Requirement already satisfied: charset_normalizer<4,>=2 in c:\vscode-python-workspace\adkagent\.venv\lib\site-p
ackages (from requests) (3.4.2)
Requirement already satisfied: idna<4,>=2.5 in c:\vscode-python-workspace\adkagent\.venv\lib\site-packages (fro
m requests) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\vscode-python-workspace\adkagent\.venv\lib\site-package
s (from requests) (2.5.0)
Requirement already satisfied: certifi>=2017.4.17 in c:\vscode-python-workspace\adkagent\.venv\lib\site-package
s (from requests) (2025.7.9)

Now, let’s modify your agent.py file (located in the app folder) to include a tool that fetches data from a REST API. Replace the existing content of agent.py with the following code:

from google.adk.agents import Agent
import requests
import json
from typing import Any, Dict, List, Optional

# Base URL for the public demo REST API used by all the tool functions below
BASE_URL = "https://api.restful-api.dev/objects"

#1. Basic Agent
base_agent = Agent(
    model='gemini-2.0-flash-001',
    name='root_agent',
    description='A helpful assistant for user questions.',
    instruction='Answer user questions to the best of your knowledge',
)
 
 
# 2. Basic Agent with Tool
 
def get_all_objects() -> Optional[List[Dict[str, Any]]]:
    """
    Consumes GET List of all objects: https://api.restful-api.dev/objects
    """
    print("\n--- GET All Objects ---")
    try:
        response = requests.get(BASE_URL)
        response.raise_for_status()  # Raise an exception for HTTP errors (4xx or 5xx)
        data = response.json()
        print(json.dumps(data, indent=2))
        return data
    except requests.exceptions.RequestException as e:
        print(f"Error fetching all objects: {e}")
        return None
 
def get_objects_by_ids(ids: List[int]) -> Optional[List[Dict[str, Any]]]:
    """
    Consumes GET List of objects by ids: https://api.restful-api.dev/objects?id=3&id=5&id=10
    Args:
        ids (list): A list of integer IDs.
    """
    print(f"\n--- GET Objects by IDs: {ids} ---")
    params = {'id': ids} # Requests handles list parameters correctly for multiple 'id'
    try:
        response = requests.get(BASE_URL, params=params)
        response.raise_for_status()
        data = response.json()
        print(json.dumps(data, indent=2))
        return data
    except requests.exceptions.RequestException as e:
        print(f"Error fetching objects by IDs: {e}")
        return None
 
def get_single_object(object_id: str) -> Optional[Dict[str, Any]]:
    """
    Consumes GET Single object: https://api.restful-api.dev/objects/7
    Args:
        object_id (str): The ID of the object to retrieve.
    """
    print(f"\n--- GET Single Object: {object_id} ---")
    url = f"{BASE_URL}/{object_id}"
    try:
        response = requests.get(url)
        response.raise_for_status()
        data = response.json()
        print(json.dumps(data, indent=2))
        return data
    except requests.exceptions.RequestException as e:
        print(f"Error fetching single object {object_id}: {e}")
        return None
 
def add_object(name: str, data_payload: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """
    Consumes POST Add object: https://api.restful-api.dev/objects
    Args:
        name (str): The name of the new object.
        data_payload (dict): A dictionary representing the 'data' field of the object.
    """
    print(f"\n--- POST Add Object: {name} ---")
    payload = {
        "name": name,
        "data": data_payload
    }
    headers = {"Content-Type": "application/json"}
    try:
        response = requests.post(BASE_URL, json=payload, headers=headers)
        response.raise_for_status()
        new_object = response.json()
        print(json.dumps(new_object, indent=2))
        return new_object
    except requests.exceptions.RequestException as e:
        print(f"Error adding object: {e}")
        return None
 
def update_object(object_id: str, name: str, data_payload: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """
    Consumes PUT Update object: https://api.restful-api.dev/objects/7
    Args:
        object_id (str): The ID of the object to update.
        name (str): The new name for the object.
        data_payload (dict): The new 'data' field for the object.
    """
    print(f"\n--- PUT Update Object: {object_id} ---")
    url = f"{BASE_URL}/{object_id}"
    payload = {
        "name": name,
        "data": data_payload
    }
    headers = {"Content-Type": "application/json"}
    try:
        response = requests.put(url, json=payload, headers=headers)
        response.raise_for_status()
        updated_object = response.json()
        print(json.dumps(updated_object, indent=2))
        return updated_object
    except requests.exceptions.RequestException as e:
        print(f"Error updating object {object_id}: {e}")
        return None
 
def partially_update_object(object_id: str, data_to_update: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """
    Consumes PATCH Partially update object: https://api.restful-api.dev/objects/7
    Args:
        object_id (str): The ID of the object to partially update.
        data_to_update (dict): A dictionary containing the fields to update (e.g., {"name": "New Name"}).
    """
    print(f"\n--- PATCH Partially Update Object: {object_id} ---")
    url = f"{BASE_URL}/{object_id}"
    headers = {"Content-Type": "application/json"}
    try:
        response = requests.patch(url, json=data_to_update, headers=headers)
        response.raise_for_status()
        patched_object = response.json()
        print(json.dumps(patched_object, indent=2))
        return patched_object
    except requests.exceptions.RequestException as e:
        print(f"Error partially updating object {object_id}: {e}")
        return None
 
def delete_object(object_id: str) -> Optional[Dict[str, Any]]:
    """
    Consumes DELETE object: https://api.restful-api.dev/objects/6
    Args:
        object_id (str): The ID of the object to delete.
    """
    print(f"\n--- DELETE Object: {object_id} ---")
    url = f"{BASE_URL}/{object_id}"
    try:
        response = requests.delete(url)
        response.raise_for_status()
        # A successful DELETE often returns 200 OK with a message, or 204 No Content
        # The restful-api.dev returns 200 OK with a success message for DELETE
        data = response.json()
        print(json.dumps(data, indent=2))
        return data
    except requests.exceptions.RequestException as e:
        print(f"Error deleting object {object_id}: {e}")
        return None
 
 
tool_agent = Agent(
    name="tool_agent",
    model="gemini-2.0-flash-001",
    description="An agent that can interact with a RESTful API for objects",
    instruction="""
    You are a RestAPI service caller assistant. Call the given tools as per the instruction given by the user and print the data in json format.
    """,
    tools=[
        get_all_objects,
        get_objects_by_ids,
        get_single_object,
        add_object,
        update_object,
        partially_update_object,
        delete_object,
    ],
)
 
 
root_agent=tool_agent

In this updated agent.py:

  • We define a BASE_URL constant and a set of helper functions (get_all_objects, get_objects_by_ids, get_single_object, add_object, update_object, partially_update_object, delete_object) that use the requests library to call https://api.restful-api.dev/objects.
  • We then create a tool_agent and register these functions as its tools.
  • Finally, we set root_agent to tool_agent so our primary agent uses this new configuration.
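Before wiring these functions into the agent, you can sanity-check the REST endpoints on their own with a tiny standalone script (a sketch that assumes the public https://api.restful-api.dev service is reachable from your machine):

import requests

BASE_URL = "https://api.restful-api.dev/objects"

# Fetch one object directly and print a couple of its fields
response = requests.get(f"{BASE_URL}/7", timeout=10)
response.raise_for_status()
obj = response.json()
print(obj["id"], obj["name"])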

Now, run the agent again in the CLI:

(.venv) C:\vscode-python-workspace\adkagent>adk run

Interact with the agent and observe how it can now use the REST API tools (for example, get_all_objects). You can also re-run adk web app and check the functionality in the web interface.

Let's store the state of the conversation between the agent and the user. To do this, we implement the following stateful agent:

# 3. Agent with State
from google.adk.agents import Agent
from google.adk.tools.tool_context import ToolContext
# (requests, json, the typing imports and BASE_URL defined earlier in agent.py are reused here)


def get_single_object(object_id: str , tool_context: ToolContext) -> Optional[Dict[str, Any]]:
    """
    Consumes GET Single object: https://api.restful-api.dev/objects/7
    Args:
        object_id (str): The ID of the object to retrieve.
    """
    print(f"\n--- GET Single Object: {object_id} ---")
    url = f"{BASE_URL}/{object_id}"
    try:
        response = requests.get(url)
        response.raise_for_status()
        data = response.json()
        print(json.dumps(data, indent=2))
        # Initialize recent_searches if it doesn't exist
        if "recent_searches" not in tool_context.state:
            tool_context.state["recent_searches"] = []
 
        recent_searches = tool_context.state["recent_searches"]
        if object_id not in recent_searches:
            recent_searches.append(object_id)
            tool_context.state["recent_searches"] = recent_searches  
            #print  tool_context.state["recent_searches"]
            print('tool_context.state["recent_searches"] ------------------------->',tool_context.state["recent_searches"])
 
        return data
    except requests.exceptions.RequestException as e:
        print(f"Error fetching single object {object_id}: {e}")
        return None
 
stateful_agent = Agent(
    name="stateful_agent",
    model="gemini-2.0-flash-001",
    description="An agent that can interact with a RESTful API for objects",
    instruction="""
    You are a RestAPI service caller assistant. Call the given tools as per the instruction given by the user and print the data in json format.
    """,
    tools=[
        get_single_object,       
    ],
)
 
 
 
root_agent=stateful_agent

Output :-

LLM Response:
-----------------------------------------------------------
Text:
{
"mobileType": "Google Pixel 7",
"manufactureCompanyName": "Google"
}
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"content-type":"application/json; charset=UTF-8","vary":"Origin, X-Origin, Referer","content-encoding":"gzip","date":"Mon, 14 Jul 2025 09:06:59 GMT","server":"scaffolding on HTTPServer2","x-xss-protection":
"0","x-frame-options":"SAMEORIGIN","x-content-type-options":"nosniff","server-timing":"gfet4t7; dur=908","alt-svc":"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","transfer-encoding":"chunked"}},"candidates":[{"content":{"parts":[{"te
xt":"{\n\"mobileType\": \"Google Pixel 7\",\n\"manufactureCompanyName\": \"Google\"\n}"}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.028583814268526825}],"model_version":"gemini-2.0-flash-001","usage_metadata":{"candidates_to
ken_count":23,"candidates_tokens_details":[{"modality":"TEXT","token_count":23}],"prompt_token_count":189,"prompt_tokens_details":[{"modality":"TEXT","token_count":189}],"total_token_count":212},"automatic_function_calling_history":[],"par
sed":{"mobileType":"Google Pixel 7","manufactureCompanyName":"Google"}}
-----------------------------------------------------------
 
2025-07-14 14:39:55,245 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text":"{\n\"mobileType\": \"Google Pixel 7\",\n\"manufactureCompanyName\": \"Google\"\n}"}],"role":"model"},"usageMetadata":{
"candidatesTokenCount":23,"candidatesTokensDetails":[{"modality":"TEXT","tokenCount":23}],"promptTokenCount":189,"promptTokensDetails":[{"modality":"TEXT","tokenCount":189}],"totalTokenCount":212},"invocationId":"e-fbebb6d0-2991-4504-a4b4-
3313744f1437","author":"structured_agent","actions":{"stateDelta":{"mobile_company_name":{"mobileType":"Google Pixel 7","manufactureCompanyName":"Google"}},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"2e5cdf68-d3d8-429a-a04e-c47eb41
35beb","timestamp":1752484193.868741}
INFO:     127.0.0.1:51561 - "GET /apps/app/users/user/sessions/1fea9254-9ab8-47bb-bab9-c06d1668bb4b HTTP/1.1" 200 OK
INFO:     127.0.0.1:51583 - "GET /debug/trace/session/1fea9254-9ab8-47bb-bab9-c06d1668bb4b HTTP/1.1" 200 OK
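Because recent_searches is kept in the session state, other tools or agents in the same session can read it back later. A small hypothetical companion tool (not part of the code above, shown only as a sketch) could look like this:

def get_recent_searches(tool_context: ToolContext) -> Dict[str, Any]:
    """Returns the object IDs the user has looked up in this session."""
    # Read back the list written by get_single_object; default to an empty list
    return {"recent_searches": tool_context.state.get("recent_searches", [])}

You could register it alongside get_single_object in the stateful_agent's tools list.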

Let's now check the structured output example.

4. Structured Output Agent

from google.adk.agents import LlmAgent
from pydantic import BaseModel, Field
# (ToolContext, requests, json, the typing imports and BASE_URL defined earlier are reused here)


class MobileType(BaseModel):
    mobileType: str = Field(description="Name of the mobile")
    manufactureCompanyName: str = Field(description="Name of the company that manufactures that mobile")

# Define a function to get the value of the object id entered by the user
def get_single_object(object_id: str , tool_context: ToolContext) -> Optional[Dict[str, Any]]:
    """
    Consumes GET Single object: https://api.restful-api.dev/objects/{object_id}
    Args:
        object_id (str): The ID of the object to retrieve.
    """
    print(f"\n--- GET Single Object: {object_id} ---")
    url = f"{BASE_URL}/{object_id}"
    try:
        response = requests.get(url)
        response.raise_for_status()
        data = response.json()
        print(json.dumps(data, indent=2))       
        return data
    except requests.exceptions.RequestException as e:
        print(f"Error fetching single object {object_id}: {e}")
        return None
 
structured_agent = LlmAgent(
    name="structured_agent",
    model="gemini-2.0-flash-001",
    description="An agent that can interact with a RESTful API for given objects id and can provide structured output",
    instruction="""
    You are a RestAPI service caller assistant. Call the given tools as per the instruction given by the user and print the data in json format.
    Return Name of the Company that manufacture that mobile in JSON format.
 
    For each object_id, look at the name to make a decision.
    If name contains Google return Name of the Company that manufacture that mobile as Google else if name contains Apple return Name of the Company that manufacture that mobile return Apple
    Otherwise: name
    """,
    output_schema=MobileType,
    output_key="mobile_company_name"
)
 
 
root_agent=structured_agent
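Here, output_schema asks the model to return JSON that matches the MobileType model, and Pydantic validates the result (the logs below show the parsed object). Outside of ADK you can see that validation step with a quick sketch; the sample JSON string is made up for illustration:

sample = '{"mobileType": "Google Pixel 7", "manufactureCompanyName": "Google"}'
parsed = MobileType.model_validate_json(sample)  # raises a ValidationError if the JSON does not match the schema
print(parsed.manufactureCompanyName)  # -> Google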

Output :-

LLM Request:
-----------------------------------------------------------
System Instruction:
 
    You are a RestAPI service caller assistant. Call the given tools as per the instruction given by the user and print the data in json format.
    Return Name of the Company that manufacture that mobile in JSON format.
 
    For each object_id, look at the name to make a decision.
    If name contains Google return Name of the Company that manufacture that mobile as Google else if name contains Apple return Name of the Company that manufacture that mobile
 return Apple
    Otherwise: name
 
 
You are an agent. Your internal name is "structured_agent".
 
 The description about you is "An agent that can interact with a RESTful API for given objects id and can provide structured output"
-----------------------------------------------------------
Contents:
{"parts":[{"text":"get value of object 4"}],"role":"user"}
-----------------------------------------------------------
Functions:
 
-----------------------------------------------------------
 
2025-07-14 15:09:05,883 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-14 15:09:09,208 - INFO - _client.py:1740 - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-001:generateContent "HTTP/1.1 200
OK"
2025-07-14 15:09:09,210 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
{
  "mobileType": "Apple",
  "manufactureCompanyName": "Apple"
}
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"content-type":"application/json; charset=UTF-8","vary":"Origin, X-Origin, Referer","content-encoding":"gzip","date":"Mon, 14 Jul 2025 09:36:13
GMT","server":"scaffolding on HTTPServer2","x-xss-protection":"0","x-frame-options":"SAMEORIGIN","x-content-type-options":"nosniff","server-timing":"gfet4t7; dur=3278","alt-svc"
:"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","transfer-encoding":"chunked"}},"candidates":[{"content":{"parts":[{"text":"{\n  \"mobileType\": \"Apple\",\n  \"manufactur
eCompanyName\": \"Apple\"\n}"}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.08084371956911954}],"model_version":"gemini-2.0-flash-001","usage_metadata":{"candidates
_token_count":22,"candidates_tokens_details":[{"modality":"TEXT","token_count":22}],"prompt_token_count":189,"prompt_tokens_details":[{"modality":"TEXT","token_count":189}],"tot
al_token_count":211},"automatic_function_calling_history":[],"parsed":{"mobileType":"Apple","manufactureCompanyName":"Apple"}}
-----------------------------------------------------------
 
2025-07-14 15:09:09,212 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text":"{\n  \"mobileType\": \"Apple\",\n  \"manufactureCompanyN
ame\": \"Apple\"\n}"}],"role":"model"},"usageMetadata":{"candidatesTokenCount":22,"candidatesTokensDetails":[{"modality":"TEXT","tokenCount":22}],"promptTokenCount":189,"promptT
okensDetails":[{"modality":"TEXT","tokenCount":189}],"totalTokenCount":211},"invocationId":"e-34f06447-068e-4de9-90cf-43efa809d840","author":"structured_agent","actions":{"state
Delta":{"mobile_company_name":{"mobileType":"Apple","manufactureCompanyName":"Apple"}},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"9a7b24d8-f6f6-4116-90d3-68c17b63ba32",
"timestamp":1752485945.423737}
INFO:     127.0.0.1:57512 - "GET /apps/app/users/user/sessions/0285aa7d-3671-40b5-a494-4e68f9811fa6 HTTP/1.1" 200 OK
INFO:     127.0.0.1:57512 - "GET /debug/trace/session/0285aa7d-3671-40b5-a494-4e68f9811fa6 HTTP/1.1" 200 OK
INFO:     127.0.0.1:57933 - "POST /run_sse HTTP/1.1" 200 OK
2025-07-14 15:10:52,904 - INFO - envs.py:47 - Loaded .env file for app at C:\vscode-python-workspace\adkagent\app\.env
2025-07-14 15:10:52,905 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_API_KEY.
2025-07-14 15:10:53,293 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash-001, backend: GoogleLLMVariant.GEMINI_API, stream: False
2025-07-14 15:10:53,293 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
 
    You are a RestAPI service caller assistant. Call the given tools as per the instruction given by the user and print the data in json format.
    Return Name of the Company that manufacture that mobile in JSON format.
 
    For each object_id, look at the name to make a decision.
    If name contains Google return Name of the Company that manufacture that mobile as Google else if name contains Apple return Name of the Company that manufacture that mobile
 return Apple
    Otherwise: name
 
 
You are an agent. Your internal name is "structured_agent".
 
 The description about you is "An agent that can interact with a RESTful API for given objects id and can provide structured output"
-----------------------------------------------------------
Contents:
{"parts":[{"text":"get value of object 4"}],"role":"user"}
{"parts":[{"text":"{\n  \"mobileType\": \"Apple\",\n  \"manufactureCompanyName\": \"Apple\"\n}"}],"role":"model"}
{"parts":[{"text":"get object value of 6"}],"role":"user"}
-----------------------------------------------------------
Functions:
 
-----------------------------------------------------------
 
2025-07-14 15:10:53,295 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-14 15:10:54,374 - INFO - _client.py:1740 - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-001:generateContent "HTTP/1.1 200
OK"
2025-07-14 15:10:54,376 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
{
  "mobileType": "Google",
  "manufactureCompanyName": "Google"
}
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"content-type":"application/json; charset=UTF-8","vary":"Origin, X-Origin, Referer","content-encoding":"gzip","date":"Mon, 14 Jul 2025 09:37:58
GMT","server":"scaffolding on HTTPServer2","x-xss-protection":"0","x-frame-options":"SAMEORIGIN","x-content-type-options":"nosniff","server-timing":"gfet4t7; dur=1035","alt-svc"
:"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","transfer-encoding":"chunked"}},"candidates":[{"content":{"parts":[{"text":"{\n  \"mobileType\": \"Google\",\n  \"manufactu
reCompanyName\": \"Google\"\n}"}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.03552465276284651}],"model_version":"gemini-2.0-flash-001","usage_metadata":{"candidat
es_token_count":22,"candidates_tokens_details":[{"modality":"TEXT","token_count":22}],"prompt_token_count":217,"prompt_tokens_details":[{"modality":"TEXT","token_count":217}],"t
otal_token_count":239},"automatic_function_calling_history":[],"parsed":{"mobileType":"Google","manufactureCompanyName":"Google"}}
-----------------------------------------------------------
 
2025-07-14 15:10:54,378 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text":"{\n  \"mobileType\": \"Google\",\n  \"manufactureCompany
Name\": \"Google\"\n}"}],"role":"model"},"usageMetadata":{"candidatesTokenCount":22,"candidatesTokensDetails":[{"modality":"TEXT","tokenCount":22}],"promptTokenCount":217,"promp
tTokensDetails":[{"modality":"TEXT","tokenCount":217}],"totalTokenCount":239},"invocationId":"e-510fee63-e11c-49d3-87ec-0752f5143403","author":"structured_agent","actions":{"sta
teDelta":{"mobile_company_name":{"mobileType":"Google","manufactureCompanyName":"Google"}},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"ada0afd6-423c-42e0-ac0a-3797c621f3
91","timestamp":1752486052.905774}
INFO:     127.0.0.1:57933 - "GET /apps/app/users/user/sessions/0285aa7d-3671-40b5-a494-4e68f9811fa6 HTTP/1.1" 200 OK
INFO:     127.0.0.1:57935 - "GET /debug/trace/session/0285aa7d-3671-40b5-a494-4e68f9811fa6 HTTP/1.1" 200 OK
INFO:     127.0.0.1:58099 - "GET /debug/trace/9a7b24d8-f6f6-4116-90d3-68c17b63ba32 HTTP/1.1" 200 OK
build graph
INFO:     127.0.0.1:58100 - "GET /apps/app/users/user/sessions/0285aa7d-3671-40b5-a494-4e68f9811fa6/events/9a7b24d8-f6f6-4116-90d3-68c17b63ba32/graph HTTP/1.1" 200 OK

Note:- Structured output does not currently support tools in ADK, which is why structured_agent above does not register get_single_object as a tool.
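If you need both tool calls and a structured result, one possible workaround (a sketch, not an official pattern) is to split the work across a SequentialAgent pipeline: a first agent that may call the REST tools, followed by a structured agent that only reformats the previous answer. This reuses get_single_object and MobileType from the code above:

from google.adk.agents import Agent, LlmAgent, SequentialAgent

fetch_agent = Agent(
    name="fetch_agent",
    model="gemini-2.0-flash-001",
    instruction="Call get_single_object for the object id the user asks about and report the result.",
    tools=[get_single_object],
)

format_agent = LlmAgent(
    name="format_agent",
    model="gemini-2.0-flash-001",
    instruction="Convert the previous agent's answer into the requested JSON structure.",
    output_schema=MobileType,
    output_key="mobile_company_name",
)

root_agent = SequentialAgent(
    name="fetch_then_format",
    sub_agents=[fetch_agent, format_agent],
)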

Now let's talk about the callback agent, which gives us the ability to run a function before and after each tool call.

5. Callback Agent

from google.adk.agents import Agent
from google.adk.tools.tool_context import ToolContext
from google.adk.tools.base_tool import BaseTool
from typing import Dict, Any, Optional
# (requests, json and BASE_URL defined earlier in agent.py are reused here)

# Define a function to get the value of the object id entered by the user
def get_single_object(object_id: str , tool_context: ToolContext) -> Optional[Dict[str, Any]]:
    """
    Consumes GET Single object: https://api.restful-api.dev/objects/{object_id}
    Args:
        object_id (str): The ID of the object to retrieve.
    """
    print(f"\n--- GET Single Object: {object_id} ---")
    url = f"{BASE_URL}/{object_id}"
    try:
        response = requests.get(url)
        response.raise_for_status()
        data = response.json()
        print(json.dumps(data, indent=2))       
        return data
    except requests.exceptions.RequestException as e:
        print(f"Error fetching single object {object_id}: {e}")
        return None
 
def before_tool_callback(tool: BaseTool, args: Dict[str, Any], tool_context: ToolContext) -> Optional[Dict]:
    # Initialize tool_usage if it doesn't exist
    if "tool_usage" not in tool_context.state:
        tool_context.state["tool_usage"] = {}
 
    # Track tool usage count
    tool_usage = tool_context.state["tool_usage"]
    tool_name = tool.name
    tool_usage[tool_name] = tool_usage.get(tool_name, 0) + 1
    tool_context.state["tool_usage"] = tool_usage
 
    print(f"[LOG] Running tool: {tool_name}")
    return None
 
def after_tool_callback(tool: BaseTool, args: Dict[str, Any], tool_context: ToolContext, tool_response: Dict) -> Optional[Dict]:
    print(f"[LOG] Tool {tool.name} completed")
    return None
 
# Initialize state before creating the agent
initial_state = {"tool_usage": {}}
 
callback_agent = Agent(
    name="callback_agent",
    model="gemini-2.0-flash-001",
    description="An agent that can interact with a RESTful API for objects",
    instruction="""
    You are a RestAPI service caller assistant. Call the given tools as per the instruction given by the user and print the data in json format.
    """,
    tools=[get_single_object],
    before_tool_callback=before_tool_callback,
    after_tool_callback=after_tool_callback,
)
 
# Choose which agent to run
root_agent = callback_agent

Output :-

LLM Request:
-----------------------------------------------------------
System Instruction:
 
    You are a RestAPI service caller assistant. Call the given tools as per the instruction given by the uses and print the data in json format.
 
 
You are an agent. Your internal name is "callback_agent".
 
 The description about you is "An agent that can interact with a RESTful API for objects"
-----------------------------------------------------------
Contents:
{"parts":[{"text":"get values of object id 5"}],"role":"user"}
-----------------------------------------------------------
Functions:
get_single_object: {'object_id': {'type': <Type.STRING: 'STRING'>}}
-----------------------------------------------------------
 
2025-07-14 15:27:24,444 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-14 15:27:25,280 - INFO - _client.py:1740 - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-001:generateContent "HTTP/1.1 200
OK"
2025-07-14 15:27:25,281 - WARNING - types.py:5169 - Warning: there are non-text parts in the response: ['function_call'], returning concatenated text result from text parts. Che
ck the full candidates.content.parts accessor to get the full model response.
2025-07-14 15:27:25,282 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
None
-----------------------------------------------------------
Function calls:
name: get_single_object, args: {'object_id': '5'}
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"content-type":"application/json; charset=UTF-8","vary":"Origin, X-Origin, Referer","content-encoding":"gzip","date":"Mon, 14 Jul 2025 09:54:29
GMT","server":"scaffolding on HTTPServer2","x-xss-protection":"0","x-frame-options":"SAMEORIGIN","x-content-type-options":"nosniff","server-timing":"gfet4t7; dur=784","alt-svc":
"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","transfer-encoding":"chunked"}},"candidates":[{"content":{"parts":[{"function_call":{"args":{"object_id":"5"},"name":"get_si
ngle_object"}}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.00006975871898854773}],"model_version":"gemini-2.0-flash-001","usage_metadata":{"candidates_token_count"
:9,"candidates_tokens_details":[{"modality":"TEXT","token_count":9}],"prompt_token_count":131,"prompt_tokens_details":[{"modality":"TEXT","token_count":131}],"total_token_count"
:140},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-14 15:27:25,283 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"functionCall":{"id":"adk-83db2615-b3b2-47b2-a25f-5abb4e7a7dc0",
"args":{"object_id":"5"},"name":"get_single_object"}}],"role":"model"},"usageMetadata":{"candidatesTokenCount":9,"candidatesTokensDetails":[{"modality":"TEXT","tokenCount":9}],"
promptTokenCount":131,"promptTokensDetails":[{"modality":"TEXT","tokenCount":131}],"totalTokenCount":140},"invocationId":"e-c1c79e55-ae6e-4b65-9891-b00691926c4c","author":"callb
ack_agent","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{}},"longRunningToolIds":[],"id":"9d25beca-a8b8-4235-824f-91a935fe296b","timestamp":1752487044.02
1549}
[LOG] Running tool: get_single_object
 
--- GET Single Object: 5 ---
{
  "id": "5",
  "name": "Samsung Galaxy Z Fold2",
  "data": {
    "price": 689.99,
    "color": "Brown"
  }
}
[LOG] Tool get_single_object completed
2025-07-14 15:27:26,121 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"functionResponse":{"id":"adk-83db2615-b3b2-47b2-a25f-5abb4e7a7d
c0","name":"get_single_object","response":{"id":"5","name":"Samsung Galaxy Z Fold2","data":{"price":689.99,"color":"Brown"}}}}],"role":"user"},"invocationId":"e-c1c79e55-ae6e-4b
65-9891-b00691926c4c","author":"callback_agent","actions":{"stateDelta":{"tool_usage":{"get_single_object":1}},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"542f3ba8-0876-
439b-80a3-a2949cc052cf","timestamp":1752487046.121109}
2025-07-14 15:27:26,123 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_API_KEY.
2025-07-14 15:27:26,527 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash-001, backend: GoogleLLMVariant.GEMINI_API, stream: False
2025-07-14 15:27:26,528 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
 
    You are a RestAPI service caller assistant. Call the given tools as per the instruction given by the uses and print the data in json format.
 
 
You are an agent. Your internal name is "callback_agent".
 
 The description about you is "An agent that can interact with a RESTful API for objects"
-----------------------------------------------------------
Contents:
{"parts":[{"text":"get values of object id 5"}],"role":"user"}
{"parts":[{"function_call":{"args":{"object_id":"5"},"name":"get_single_object"}}],"role":"model"}
{"parts":[{"function_response":{"name":"get_single_object","response":{"id":"5","name":"Samsung Galaxy Z Fold2","data":{"price":689.99,"color":"Brown"}}}}],"role":"user"}
-----------------------------------------------------------
Functions:
get_single_object: {'object_id': {'type': <Type.STRING: 'STRING'>}}
-----------------------------------------------------------
 
2025-07-14 15:27:26,529 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-14 15:27:27,706 - INFO - _client.py:1740 - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-001:generateContent "HTTP/1.1 200
OK"
2025-07-14 15:27:27,709 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:

json
{
  "get_single_object_response": {
    "data": {
      "color": "Brown",
      "price": 689.99
    },
    "id": "5",
    "name": "Samsung Galaxy Z Fold2"
  }
}

-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"content-type":"application/json; charset=UTF-8","vary":"Origin, X-Origin, Referer","content-encoding":"gzip","date":"Mon, 14 Jul 2025 09:54:31
GMT","server":"scaffolding on HTTPServer2","x-xss-protection":"0","x-frame-options":"SAMEORIGIN","x-content-type-options":"nosniff","server-timing":"gfet4t7; dur=1144","alt-svc"
:"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","transfer-encoding":"chunked"}},"candidates":[{"content":{"parts":[{"text":"```json\n{\n  \"get_single_object_response\": {
\n    \"data\": {\n      \"color\": \"Brown\",\n      \"price\": 689.99\n    },\n    \"id\": \"5\",\n    \"name\": \"Samsung Galaxy Z Fold2\"\n  }\n}\n```"}],"role":"model"},"fi
nish_reason":"STOP","avg_logprobs":-0.009752706521087222}],"model_version":"gemini-2.0-flash-001","usage_metadata":{"candidates_token_count":72,"candidates_tokens_details":[{"mo
dality":"TEXT","token_count":72}],"prompt_token_count":158,"prompt_tokens_details":[{"modality":"TEXT","token_count":158}],"total_token_count":230},"automatic_function_calling_h
istory":[]}
-----------------------------------------------------------
 
2025-07-14 15:27:27,711 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text":"```json\n{\n  \"get_single_object_response\": {\n    \"d
ata\": {\n      \"color\": \"Brown\",\n      \"price\": 689.99\n    },\n    \"id\": \"5\",\n    \"name\": \"Samsung Galaxy Z Fold2\"\n  }\n}\n```"}],"role":"model"},"usageMetada
ta":{"candidatesTokenCount":72,"candidatesTokensDetails":[{"modality":"TEXT","tokenCount":72}],"promptTokenCount":158,"promptTokensDetails":[{"modality":"TEXT","tokenCount":158}
],"totalTokenCount":230},"invocationId":"e-c1c79e55-ae6e-4b65-9891-b00691926c4c","author":"callback_agent","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{
}},"id":"ccb4a8c9-ef13-4c94-818d-81b4ceff22e1","timestamp":1752487046.123729}
INFO:     127.0.0.1:61362 - "GET /apps/app/users/user/sessions/68239ce7-f698-4253-8a4c-783c51a75e72 HTTP/1.1" 200 OK
INFO:     127.0.0.1:61363 - "GET /debug/trace/session/68239ce7-f698-4253-8a4c-783c51a75e72 HTTP/1.1" 200 OK

From the above output you can see that before_tool_callback runs before get_single_object and after_tool_callback runs after the tool completes, which is why we get this output:

[LOG] Running tool: get_single_object

--- GET Single Object: 5 ---
{
  "id": "5",
  "name": "Samsung Galaxy Z Fold2",
  "data": {
    "price": 689.99,
    "color": "Brown"
  }
}
[LOG] Tool get_single_object completed
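Callbacks can do more than logging. In ADK, if before_tool_callback returns a dictionary instead of None, that dictionary is used as the tool result and the tool itself is skipped, which is handy for simple guardrails or caching. A sketch of that idea (the blocked-id rule here is made up purely for illustration):

def before_tool_callback(tool: BaseTool, args: Dict[str, Any], tool_context: ToolContext) -> Optional[Dict]:
    # Illustrative guardrail: refuse lookups for a hypothetical blocked id
    if tool.name == "get_single_object" and args.get("object_id") == "999":
        return {"error": "Lookups for object 999 are not allowed."}
    print(f"[LOG] Running tool: {tool.name}")
    return None  # returning None lets the real tool run as usual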

LoopAgent Example

import asyncio
import os
from google.adk.agents import LoopAgent, LlmAgent, BaseAgent, SequentialAgent
from google.genai import types
from google.adk.runners import InMemoryRunner
from google.adk.agents.invocation_context import InvocationContext
from google.adk.tools.tool_context import ToolContext
from typing import AsyncGenerator, Optional
from google.adk.events import Event, EventActions
 
# --- Constants ---
APP_NAME = "doc_writing_app_v3" # New App Name
USER_ID = "dev_user_01"
SESSION_ID_BASE = "loop_exit_tool_session" # New Base Session ID
GEMINI_MODEL = "gemini-2.0-flash"
 
# --- State Keys ---
STATE_CURRENT_DOC = "current_document"
STATE_CRITICISM = "criticism"
# Define the exact phrase the Critic should use to signal completion
COMPLETION_PHRASE = "No major issues found."
 
# --- Tool Definition ---
def exit_loop(tool_context: ToolContext):
  """Call this function ONLY when the critique indicates no further changes are needed, signaling the iterative process should end."""
  print(f"  [Tool Call] exit_loop triggered by {tool_context.agent_name}")
  tool_context.actions.escalate = True
  # Return empty dict as tools should typically return JSON-serializable output
  return {}
 
# --- Agent Definitions ---
 
# STEP 1: Initial Writer Agent (Runs ONCE at the beginning)
initial_writer_agent = LlmAgent(
    name="InitialWriterAgent",
    model=GEMINI_MODEL,
    include_contents='none',
    # MODIFIED Instruction: Ask for a slightly more developed start
    instruction=f"""You are a Creative Writing Assistant tasked with starting a story.
    Write the *first draft* of a short story (aim for 2-4 sentences).   
""",
    description="Writes the initial document draft based on the topic, aiming for some initial substance.",
    output_key=STATE_CURRENT_DOC
)
 
# STEP 2a: Critic Agent (Inside the Refinement Loop)
critic_agent_in_loop = LlmAgent(
    name="CriticAgent",
    model=GEMINI_MODEL,
    include_contents='none',
    # MODIFIED Instruction: More nuanced completion criteria, look for clear improvement paths.
    instruction=f"""You are a Constructive Critic AI reviewing a short document draft (typically 2-6 sentences). Your goal is balanced feedback.
    Review the document for clarity and improve to better capture the topic or enhance reader engagement (e.g., "Needs a stronger opening sentence", "Clarify the character's goal"):
   if the document is coherent, addresses the topic adequately for its length, and has no glaring errors or obvious omissions:
    Respond *exactly* with the phrase "{COMPLETION_PHRASE}" and nothing else.
""",
    description="Reviews the current draft, providing critique if clear improvements are needed, otherwise signals completion.",
    output_key=STATE_CRITICISM
)
 
 
# STEP 2b: Refiner/Exiter Agent (Inside the Refinement Loop)
refiner_agent_in_loop = LlmAgent(
    name="RefinerAgent",
    model=GEMINI_MODEL,
    # Relies solely on state via placeholders
    include_contents='none',
    instruction=f"""You are a Creative Writing Assistant refining a document based on feedback OR exiting the process.
    Analyze the 'Critique/Suggestions'.
    IF the critique is *exactly* "{COMPLETION_PHRASE}":
    You MUST call the 'exit_loop' function. Do not output any text.
    ELSE (the critique contains actionable feedback):
    Carefully apply the suggestions to improve the 'Current Document'. Output *only* the refined document text.
 
    Do not add explanations. Either output the refined document OR call the exit_loop function.
""",
    description="Refines the document based on critique, or calls exit_loop if critique indicates completion.",
    tools=[exit_loop], # Provide the exit_loop tool
    output_key=STATE_CURRENT_DOC # Overwrites state['current_document'] with the refined version
)
 
 
# STEP 2: Refinement Loop Agent
refinement_loop = LoopAgent(
    name="RefinementLoop",
    # Agent order is crucial: Critique first, then Refine/Exit
    sub_agents=[
        critic_agent_in_loop,
        refiner_agent_in_loop,
    ],
    max_iterations=5 # Limit loops
)
 
# STEP 3: Overall Sequential Pipeline
# For ADK tools compatibility, the root agent must be named `root_agent`
root_agent = SequentialAgent(
    name="IterativeWritingPipeline",
    sub_agents=[
        initial_writer_agent, # Run first to create initial doc
        refinement_loop       # Then run the critique/refine loop
    ],
    description="Writes an initial document and then iteratively refines it with critique using an exit tool."
)

Output :-

(.venv) C:\vscode-python-workspace\adkagent>adk web
C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\cli\fast_api.py:354: UserWarning: [EXPERIMENTAL
] InMemoryCredentialService: This feature is experimental and may change or be removed in future versions without noti
ce. It may introduce breaking changes at any time.
  credential_service = InMemoryCredentialService()
C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\auth\credential_service\in_memory_credential_se
rvice.py:33: UserWarning: [EXPERIMENTAL] BaseCredentialService: This feature is experimental and may change or be remo
ved in future versions without notice. It may introduce breaking changes at any time.
  super().__init__()
INFO:     Started server process [6672]
INFO:     Waiting for application startup.
 
+-----------------------------------------------------------------------------+
| ADK Web Server started                                                      |
|                                                                             |
+-----------------------------------------------------------------------------+
 
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     127.0.0.1:64107 - "GET /dev-ui/?app=app&session=1b4be865-c870-48eb-8131-cd123c66f656 HTTP/1.1" 304 Not Modified
INFO:     127.0.0.1:64115 - "GET /list-apps?relative_path=./ HTTP/1.1" 200 OK
INFO:     127.0.0.1:64115 - "GET /apps/app/users/user/sessions/1b4be865-c870-48eb-8131-cd123c66f656 HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:64115 - "GET /apps/app/eval_sets HTTP/1.1" 200 OK
INFO:     127.0.0.1:64115 - "GET /apps/app/eval_results HTTP/1.1" 200 OK
2025-07-21 13:55:24,974 - INFO - fast_api.py:475 - New session created
INFO:     127.0.0.1:64115 - "POST /apps/app/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:64115 - "GET /apps/app/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:64115 - "GET /apps/app/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:64129 - "POST /run_sse HTTP/1.1" 200 OK
2025-07-21 13:55:31,931 - INFO - envs.py:47 - Loaded .env file for app at C:\vscode-python-workspace\adkagent\app\.env
 
2025-07-21 13:55:31,932 - INFO - envs.py:47 - Loaded .env file for app at C:\vscode-python-workspace\adkagent\app\.env
 
2025-07-21 13:55:31,947 - WARNING - llm_agent.py:477 - Invalid config for agent structured_agent: output_schema cannot
 co-exist with agent transfer configurations. Setting disallow_transfer_to_parent=True, disallow_transfer_to_peers=Tru
e
2025-07-21 13:55:31,949 - INFO - agent_loader.py:95 - Found root_agent in app.agent
2025-07-21 13:55:31,952 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_
API_KEY.
2025-07-21 13:55:33,446 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash, backend: GoogleLLMVa
riant.GEMINI_API, stream: False
2025-07-21 13:55:33,447 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
You are a Creative Writing Assistant tasked with starting a story.
    Write the *first draft* of a short story (aim for 2-4 sentences).
 
 
You are an agent. Your internal name is "InitialWriterAgent".
 
 The description about you is "Writes the initial document draft based on the topic, aiming for some initial substance
."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"man in a jungle"}],"role":"user"}
-----------------------------------------------------------
Functions:
 
-----------------------------------------------------------
 
2025-07-21 13:55:33,448 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 13:55:34,661 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
The jungle breathed, a humid lung expanding and contracting around him. He hacked at a thick vine with his machete, sw
eat plastering his tattered shirt to his back. Each step was a negotiation with the tangled undergrowth, a silent plea
 to be allowed passage.
 
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 08:22:27 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=1142","Alt-Svc"
:"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts
":[{"text":"The jungle breathed, a humid lung expanding and contracting around him. He hacked at a thick vine with his
 machete, sweat plastering his tattered shirt to his back. Each step was a negotiation with the tangled undergrowth, a
 silent plea to be allowed passage.\n"}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.24684382384678102}],
"model_version":"gemini-2.0-flash","usage_metadata":{"candidates_token_count":53,"candidates_tokens_details":[{"modali
ty":"TEXT","token_count":53}],"prompt_token_count":76,"prompt_tokens_details":[{"modality":"TEXT","token_count":76}],"
total_token_count":129},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 13:55:34,664 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text
":"The jungle breathed, a humid lung expanding and contracting around him. He hacked at a thick vine with his machete,
 sweat plastering his tattered shirt to his back. Each step was a negotiation with the tangled undergrowth, a silent p
lea to be allowed passage.\n"}],"role":"model"},"usageMetadata":{"candidatesTokenCount":53,"candidatesTokensDetails":[
{"modality":"TEXT","tokenCount":53}],"promptTokenCount":76,"promptTokensDetails":[{"modality":"TEXT","tokenCount":76}]
,"totalTokenCount":129},"invocationId":"e-395cca8d-8c7c-497c-ac95-0b95bf2672f5","author":"InitialWriterAgent","actions
":{"stateDelta":{"current_document":"The jungle breathed, a humid lung expanding and contracting around him. He hacked
 at a thick vine with his machete, sweat plastering his tattered shirt to his back. Each step was a negotiation with t
he tangled undergrowth, a silent plea to be allowed passage.\n"},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"e
d006c08-ea66-4d53-b632-5c9fb690f935","timestamp":1753086331.952605}
2025-07-21 13:55:34,670 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_
API_KEY.
2025-07-21 13:55:36,005 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash, backend: GoogleLLMVa
riant.GEMINI_API, stream: False
2025-07-21 13:55:36,006 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
You are a Constructive Critic AI reviewing a short document draft (typically 2-6 sentences). Your goal is balanced fee
dback.
    Review the document for clarity and improve to better capture the topic or enhance reader engagement (e.g., "Needs
 a stronger opening sentence", "Clarify the character's goal"):
   if the document is coherent, addresses the topic adequately for its length, and has no glaring errors or obvious om
issions:
    Respond *exactly* with the phrase "No major issues found." and nothing else.
 
 
You are an agent. Your internal name is "CriticAgent".
 
 The description about you is "Reviews the current draft, providing critique if clear improvements are needed, otherwi
se signals completion."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"For context:"},{"text":"[InitialWriterAgent] said: The jungle breathed, a humid lung expanding and
contracting around him. He hacked at a thick vine with his machete, sweat plastering his tattered shirt to his back. E
ach step was a negotiation with the tangled undergrowth, a silent plea to be allowed passage.\n"}],"role":"user"}
-----------------------------------------------------------
Functions:
 
-----------------------------------------------------------
 
2025-07-21 13:55:36,008 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 13:55:36,980 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
No major issues found.
 
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 08:22:29 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=926","Alt-Svc":
"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts"
:[{"text":"No major issues found.\n"}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.25941423575083417}],"m
odel_version":"gemini-2.0-flash","usage_metadata":{"candidates_token_count":6,"candidates_tokens_details":[{"modality"
:"TEXT","token_count":6}],"prompt_token_count":211,"prompt_tokens_details":[{"modality":"TEXT","token_count":211}],"to
tal_token_count":217},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 13:55:36,982 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text
":"No major issues found.\n"}],"role":"model"},"usageMetadata":{"candidatesTokenCount":6,"candidatesTokensDetails":[{"
modality":"TEXT","tokenCount":6}],"promptTokenCount":211,"promptTokensDetails":[{"modality":"TEXT","tokenCount":211}],
"totalTokenCount":217},"invocationId":"e-395cca8d-8c7c-497c-ac95-0b95bf2672f5","author":"CriticAgent","actions":{"stat
eDelta":{"criticism":"No major issues found.\n"},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"aa30d223-6840-48f
4-a697-4f69765877b4","timestamp":1753086334.670207}
2025-07-21 13:55:36,987 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_
API_KEY.
2025-07-21 13:55:38,757 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash, backend: GoogleLLMVa
riant.GEMINI_API, stream: False
2025-07-21 13:55:38,758 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
You are a Creative Writing Assistant refining a document based on feedback OR exiting the process.
    Analyze the 'Critique/Suggestions'.
    IF the critique is *exactly* "No major issues found.":
    You MUST call the 'exit_loop' function. Do not output any text.
    ELSE (the critique contains actionable feedback):
    Carefully apply the suggestions to improve the 'Current Document'. Output *only* the refined document text.
 
    Do not add explanations. Either output the refined document OR call the exit_loop function.
 
 
You are an agent. Your internal name is "RefinerAgent".
 
 The description about you is "Refines the document based on critique, or calls exit_loop if critique indicates comple
tion."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"For context:"},{"text":"[CriticAgent] said: No major issues found.\n"}],"role":"user"}
-----------------------------------------------------------
Functions:
exit_loop: {}
-----------------------------------------------------------
 
2025-07-21 13:55:38,760 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 13:55:39,647 - WARNING - types.py:5169 - Warning: there are non-text parts in the response: ['function_call
'], returning concatenated text result from text parts. Check the full candidates.content.parts accessor to get the fu
ll model response.
2025-07-21 13:55:39,647 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
None
-----------------------------------------------------------
Function calls:
name: exit_loop, args: {}
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 08:22:32 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=833","Alt-Svc":
"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts"
:[{"function_call":{"args":{},"name":"exit_loop"}}],"role":"model"},"finish_reason":"STOP","avg_logprobs":6.0172751545
90607e-6}],"model_version":"gemini-2.0-flash","usage_metadata":{"candidates_token_count":3,"candidates_tokens_details"
:[{"modality":"TEXT","token_count":3}],"prompt_token_count":193,"prompt_tokens_details":[{"modality":"TEXT","token_cou
nt":193}],"total_token_count":196},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 13:55:39,650 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"func
tionCall":{"id":"adk-ff44ef6e-e72e-4142-983e-bf034c1a7d63","args":{},"name":"exit_loop"}}],"role":"model"},"usageMetad
ata":{"candidatesTokenCount":3,"candidatesTokensDetails":[{"modality":"TEXT","tokenCount":3}],"promptTokenCount":193,"
promptTokensDetails":[{"modality":"TEXT","tokenCount":193}],"totalTokenCount":196},"invocationId":"e-395cca8d-8c7c-497
c-ac95-0b95bf2672f5","author":"RefinerAgent","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{}},
"longRunningToolIds":[],"id":"2232515c-5689-4b2f-9bc5-b44905d82c94","timestamp":1753086336.987549}
  [Tool Call] exit_loop triggered by RefinerAgent
2025-07-21 13:55:39,653 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"func
tionResponse":{"id":"adk-ff44ef6e-e72e-4142-983e-bf034c1a7d63","name":"exit_loop","response":{}}}],"role":"user"},"inv
ocationId":"e-395cca8d-8c7c-497c-ac95-0b95bf2672f5","author":"RefinerAgent","actions":{"stateDelta":{},"artifactDelta"
:{},"escalate":true,"requestedAuthConfigs":{}},"id":"339de481-b2df-496c-b2f6-947934e086d7","timestamp":1753086339.6530
36}
2025-07-21 13:55:39,656 - ERROR - __init__.py:157 - Failed to detach context
Traceback (most recent call last):
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\opentelemetry\trace\__init__.py", line 589, in use
_span
    yield span
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\opentelemetry\sdk\trace\__init__.py", line 1105, i
n start_as_current_span
    yield span
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\opentelemetry\trace\__init__.py", line 454, in sta
rt_as_current_span
    yield span
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\agents\base_agent.py", line 202, in run
_async
    yield event
GeneratorExit
 
During handling of the above exception, another exception occurred:
 
Traceback (most recent call last):
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\opentelemetry\context\__init__.py", line 155, in d
etach
    _RUNTIME_CONTEXT.detach(token)
    ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\opentelemetry\context\contextvars_context.py", lin
e 53, in detach
    self._current_context.reset(token)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
ValueError: <Token var=<ContextVar name='current_context' default={} at 0x0000023590703BF0> at 0x000002359DFEF300> was
 created in a different Context
2025-07-21 13:55:39,665 - ERROR - __init__.py:157 - Failed to detach context
Traceback (most recent call last):
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\opentelemetry\trace\__init__.py", line 589, in use
_span
    yield span
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\opentelemetry\sdk\trace\__init__.py", line 1105, i
n start_as_current_span
    yield span
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\opentelemetry\trace\__init__.py", line 454, in sta
rt_as_current_span
    yield span
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\flows\llm_flows\base_llm_flow.py", line
 558, in _call_llm_async
    yield llm_response
GeneratorExit
 
During handling of the above exception, another exception occurred:
 
Traceback (most recent call last):
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\opentelemetry\context\__init__.py", line 155, in d
etach
    _RUNTIME_CONTEXT.detach(token)
    ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
  File "C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\opentelemetry\context\contextvars_context.py", lin
e 53, in detach
    self._current_context.reset(token)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
ValueError: <Token var=<ContextVar name='current_context' default={} at 0x0000023590703BF0> at 0x000002359E0506C0> was
 created in a different Context
INFO:     127.0.0.1:64129 - "GET /apps/app/users/user/sessions/96b90f60-8448-4599-b079-87a386596579 HTTP/1.1" 200 OK
INFO:     127.0.0.1:64129 - "GET /debug/trace/session/96b90f60-8448-4599-b079-87a386596579 HTTP/1.1" 200 OK

Sequential Agent:-

# Part of agent.py --> Follow https://google.github.io/adk-docs/get-started/quickstart/ to learn the setup
from google.adk.agents import LlmAgent, SequentialAgent

# Code Writer Agent
# Writes the initial Python code based on the user's request.
code_writer_agent = LlmAgent(
    name="CodeWriterAgent",
    model="gemini-2.0-flash-001",
    # Change 3: Improved instruction
    instruction="""You are a Python Code Generator.
Based *only* on the user's request, write Python code that fulfills the requirement.
Output *only* the complete Python code block, enclosed in triple backticks (```python ... ```).
Do not add any other text before or after the code block.
""",
    description="Writes initial Python code based on a specification.",
    output_key="generated_code" # Stores output in state['generated_code']
)
 
# Code Reviewer Agent
# Takes the code generated by the previous agent (read from state) and provides feedback.
code_reviewer_agent = LlmAgent(
    name="CodeReviewerAgent",
    model="gemini-2.0-flash-001",
    # Change 3: Improved instruction, correctly using state key injection
    instruction="""You are an expert Python Code Reviewer.
    Your task is to provide constructive feedback on the provided code.
 
    **Code to Review:**
    ```python
    {generated_code}
    ```
 
**Review Criteria:**
1.  **Correctness:** Does the code work as intended? Are there logic errors?
2.  **Readability:** Is the code clear and easy to understand? Follows PEP 8 style guidelines?
3.  **Efficiency:** Is the code reasonably efficient? Any obvious performance bottlenecks?
4.  **Edge Cases:** Does the code handle potential edge cases or invalid inputs gracefully?
5.  **Best Practices:** Does the code follow common Python best practices?
 
**Output:**
Provide your feedback as a concise, bulleted list. Focus on the most important points for improvement.
If the code is excellent and requires no changes, simply state: "No major issues found."
Output *only* the review comments or the "No major issues" statement.
""",
    description="Reviews code and provides feedback.",
    output_key="review_comments", # Stores output in state['review_comments']
)
 
 
# Code Refactorer Agent
# Takes the original code and the review comments (read from state) and refactors the code.
code_refactorer_agent = LlmAgent(
    name="CodeRefactorerAgent",
    model="gemini-2.0-flash-001",
    # Change 3: Improved instruction, correctly using state key injection
    instruction="""You are a Python Code Refactoring AI.
Your goal is to improve the given Python code based on the provided review comments.
 
  **Original Code:**
  ```python
  {generated_code}
  ```
 
  **Review Comments:**
  {review_comments}
 
**Task:**
Carefully apply the suggestions from the review comments to refactor the original code.
If the review comments state "No major issues found," return the original code unchanged.
Ensure the final code is complete, functional, and includes necessary imports and docstrings.
 
**Output:**
Output *only* the final, refactored Python code block, enclosed in triple backticks (```python ... ```).
Do not add any other text before or after the code block.
""",
    description="Refactors code based on review comments.",
    output_key="refactored_code", # Stores output in state['refactored_code']
)
 
 
# --- 2. Create the SequentialAgent ---
# This agent orchestrates the pipeline by running the sub_agents in order.
code_pipeline_agent = SequentialAgent(
    name="CodePipelineAgent",
    sub_agents=[code_writer_agent, code_reviewer_agent, code_refactorer_agent],
    description="Executes a sequence of code writing, reviewing, and refactoring.",
    # The agents will run in the order provided: Writer -> Reviewer -> Refactorer
)
 
 
# For ADK tools compatibility, the root agent must be named `root_agent`
root_agent = code_pipeline_agent
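
Before looking at the run output, it helps to see what the output_key / placeholder mechanism is doing. The following is a minimal, framework-free sketch of the idea only (plain Python, not the ADK API; the dictionary, step function, and canned replies are purely illustrative): each step stores its reply in shared state under its output_key, and the next step's instruction placeholders such as {generated_code} are filled from that state.

# Conceptual sketch of the state hand-off in the pipeline above (not ADK internals).
session_state = {"topic": "a function that adds two numbers"}

def run_step(instruction_template: str, output_key: str, canned_llm_reply: str) -> None:
    # Fill {placeholders} in the instruction from the current session state.
    instruction = instruction_template.format(**session_state)
    print(f"--- instruction for this step ---\n{instruction}\n")
    # In ADK the model's reply would be stored under output_key; here it is canned.
    session_state[output_key] = canned_llm_reply

# Writer -> Reviewer -> Refactorer, mirroring the SequentialAgent order.
run_step("Write Python code for: {topic}", "generated_code",
         "def add(a, b):\n    return a + b")
run_step("Review this code:\n{generated_code}", "review_comments",
         "No major issues found.")
run_step("Refactor:\n{generated_code}\nComments:\n{review_comments}", "refactored_code",
         "def add(a, b):\n    return a + b")

print(session_state["refactored_code"])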

Output:-

(.venv) C:\vscode-python-workspace\adkagent>adk web
C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\cli\fast_api.py:354: UserWarning: [EXPERIMENTAL
] InMemoryCredentialService: This feature is experimental and may change or be removed in future versions without noti
ce. It may introduce breaking changes at any time.
  credential_service = InMemoryCredentialService()
C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\auth\credential_service\in_memory_credential_se
rvice.py:33: UserWarning: [EXPERIMENTAL] BaseCredentialService: This feature is experimental and may change or be remo
ved in future versions without notice. It may introduce breaking changes at any time.
  super().__init__()
INFO:     Started server process [23480]
INFO:     Waiting for application startup.
 
+-----------------------------------------------------------------------------+
| ADK Web Server started                                                      |
|                                                                             |
+-----------------------------------------------------------------------------+
 
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     127.0.0.1:52716 - "GET /dev-ui/?app=app&session=96b90f60-8448-4599-b079-87a386596579 HTTP/1.1" 200 OK
INFO:     127.0.0.1:52757 - "GET /list-apps?relative_path=./ HTTP/1.1" 200 OK
INFO:     127.0.0.1:52757 - "GET /apps/app/users/user/sessions/96b90f60-8448-4599-b079-87a386596579 HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:52757 - "GET /apps/app/eval_sets HTTP/1.1" 200 OK
INFO:     127.0.0.1:52757 - "GET /apps/app/eval_results HTTP/1.1" 200 OK
2025-07-21 14:22:05,156 - INFO - fast_api.py:475 - New session created
INFO:     127.0.0.1:52757 - "POST /apps/app/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:52757 - "GET /apps/app/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:52757 - "GET /apps/app/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:52759 - "POST /run_sse HTTP/1.1" 200 OK
2025-07-21 14:22:58,489 - INFO - envs.py:47 - Loaded .env file for app at C:\vscode-python-workspace\adkagent\app\.env
 
2025-07-21 14:22:58,490 - INFO - envs.py:47 - Loaded .env file for app at C:\vscode-python-workspace\adkagent\app\.env
 
2025-07-21 14:22:58,508 - WARNING - llm_agent.py:477 - Invalid config for agent structured_agent: output_schema cannot
 co-exist with agent transfer configurations. Setting disallow_transfer_to_parent=True, disallow_transfer_to_peers=Tru
e
2025-07-21 14:22:58,511 - INFO - agent_loader.py:95 - Found root_agent in app.agent
2025-07-21 14:22:58,515 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_
API_KEY.
2025-07-21 14:23:00,106 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash-001, backend: GoogleL
LMVariant.GEMINI_API, stream: False
2025-07-21 14:23:00,106 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
You are a Python Code Generator.
Based *only* on the user's request, write Python code that fulfills the requirement.
Output *only* the complete Python code block, enclosed in triple backticks (```python ... ```).
Do not add any other text before or after the code block.
 
 
You are an agent. Your internal name is "CodeWriterAgent".
 
 The description about you is "Writes initial Python code based on a specification."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"write  python code that will count captil 'T' and small 't' in the given statment"}],"role":"user"}
 
-----------------------------------------------------------
Functions:
 
-----------------------------------------------------------
 
2025-07-21 14:23:00,108 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 14:23:01,916 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
```python
def count_ts(statement):
  """Counts the number of capital 'T' and small 't' in a string.
 
  Args:
    statement: The string to search.
 
  Returns:
    A tuple containing the number of capital 'T's and small 't's.
  """
 
  capital_t_count = statement.count('T')
  small_t_count = statement.count('t')
  return capital_t_count, small_t_count
 
# Example usage:
statement = "This is a Test statement to count t's and T's."
capital_t, small_t = count_ts(statement)
 
print(f"Number of capital T's: {capital_t}")
print(f"Number of small t's: {small_t}")
```
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 08:49:54 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=1722","Alt-Svc"
:"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts
":[{"text":"```python\ndef count_ts(statement):\n  \"\"\"Counts the number of capital 'T' and small 't' in a string.\n
\n  Args:\n    statement: The string to search.\n\n  Returns:\n    A tuple containing the number of capital 'T's and s
mall 't's.\n  \"\"\"\n\n  capital_t_count = statement.count('T')\n  small_t_count = statement.count('t')\n  return cap
ital_t_count, small_t_count\n\n# Example usage:\nstatement = \"This is a Test statement to count t's and T's.\"\ncapit
al_t, small_t = count_ts(statement)\n\nprint(f\"Number of capital T's: {capital_t}\")\nprint(f\"Number of small t's: {
small_t}\")\n```"}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.07504982361819017}],"model_version":"gemi
ni-2.0-flash-001","usage_metadata":{"candidates_token_count":187,"candidates_tokens_details":[{"modality":"TEXT","toke
n_count":187}],"prompt_token_count":117,"prompt_tokens_details":[{"modality":"TEXT","token_count":117}],"total_token_c
ount":304},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 14:23:01,920 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text
":"```python\ndef count_ts(statement):\n  \"\"\"Counts the number of capital 'T' and small 't' in a string.\n\n  Args:
\n    statement: The string to search.\n\n  Returns:\n    A tuple containing the number of capital 'T's and small 't's
.\n  \"\"\"\n\n  capital_t_count = statement.count('T')\n  small_t_count = statement.count('t')\n  return capital_t_co
unt, small_t_count\n\n# Example usage:\nstatement = \"This is a Test statement to count t's and T's.\"\ncapital_t, sma
ll_t = count_ts(statement)\n\nprint(f\"Number of capital T's: {capital_t}\")\nprint(f\"Number of small t's: {small_t}\
")\n```"}],"role":"model"},"usageMetadata":{"candidatesTokenCount":187,"candidatesTokensDetails":[{"modality":"TEXT","
tokenCount":187}],"promptTokenCount":117,"promptTokensDetails":[{"modality":"TEXT","tokenCount":117}],"totalTokenCount
":304},"invocationId":"e-b509d6f7-618e-4654-b85a-2e75fa54ed5f","author":"CodeWriterAgent","actions":{"stateDelta":{"ge
nerated_code":"```python\ndef count_ts(statement):\n  \"\"\"Counts the number of capital 'T' and small 't' in a string
.\n\n  Args:\n    statement: The string to search.\n\n  Returns:\n    A tuple containing the number of capital 'T's an
d small 't's.\n  \"\"\"\n\n  capital_t_count = statement.count('T')\n  small_t_count = statement.count('t')\n  return
capital_t_count, small_t_count\n\n# Example usage:\nstatement = \"This is a Test statement to count t's and T's.\"\nca
pital_t, small_t = count_ts(statement)\n\nprint(f\"Number of capital T's: {capital_t}\")\nprint(f\"Number of small t's
: {small_t}\")\n```"},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"f47a3fef-eb6e-4105-82db-67025aab47c5","times
tamp":1753087978.514625}
2025-07-21 14:23:01,930 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_
API_KEY.
2025-07-21 14:23:03,437 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash-001, backend: GoogleL
LMVariant.GEMINI_API, stream: False
2025-07-21 14:23:03,438 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
You are an expert Python Code Reviewer.
    Your task is to provide constructive feedback on the provided code.
 
    **Code to Review:**
    ```python
    ```python
def count_ts(statement):
  """Counts the number of capital 'T' and small 't' in a string.
 
  Args:
    statement: The string to search.
 
  Returns:
    A tuple containing the number of capital 'T's and small 't's.
  """
 
  capital_t_count = statement.count('T')
  small_t_count = statement.count('t')
  return capital_t_count, small_t_count
 
# Example usage:
statement = "This is a Test statement to count t's and T's."
capital_t, small_t = count_ts(statement)
 
print(f"Number of capital T's: {capital_t}")
print(f"Number of small t's: {small_t}")
```
    ```
 
**Review Criteria:**
1.  **Correctness:** Does the code work as intended? Are there logic errors?
2.  **Readability:** Is the code clear and easy to understand? Follows PEP 8 style guidelines?
3.  **Efficiency:** Is the code reasonably efficient? Any obvious performance bottlenecks?
4.  **Edge Cases:** Does the code handle potential edge cases or invalid inputs gracefully?
5.  **Best Practices:** Does the code follow common Python best practices?
 
**Output:**
Provide your feedback as a concise, bulleted list. Focus on the most important points for improvement.
If the code is excellent and requires no changes, simply state: "No major issues found."
Output *only* the review comments or the "No major issues" statement.
 
 
You are an agent. Your internal name is "CodeReviewerAgent".
 
 The description about you is "Reviews code and provides feedback."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"write  python code that will count captil 'T' and small 't' in the given statment"}],"role":"user"}
 
{"parts":[{"text":"For context:"},{"text":"[CodeWriterAgent] said: ```python\ndef count_ts(statement):\n  \"\"\"Counts
 the number of capital 'T' and small 't' in a string.\n\n  Args:\n    statement: The string to search.\n\n  Returns:\n
    A tuple containing the number of capital 'T's and small 't's.\n  \"\"\"\n\n  capital_t_count = statement.count('T'
)\n  small_t_count = statement.count('t')\n  return capital_t_count, small_t_count\n\n# Example usage:\nstatement = \"
This is a Test statement to count t's and T's.\"\ncapital_t, small_t = count_ts(statement)\n\nprint(f\"Number of capit
al T's: {capital_t}\")\nprint(f\"Number of small t's: {small_t}\")\n```"}],"role":"user"}
-----------------------------------------------------------
Functions:
 
-----------------------------------------------------------
 
2025-07-21 14:23:03,442 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 14:23:04,301 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
*   No major issues found.
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 08:49:56 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=808","Alt-Svc":
"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts"
:[{"text":"*   No major issues found."}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.07669220651899065}],
"model_version":"gemini-2.0-flash-001","usage_metadata":{"candidates_token_count":7,"candidates_tokens_details":[{"mod
ality":"TEXT","token_count":7}],"prompt_token_count":639,"prompt_tokens_details":[{"modality":"TEXT","token_count":639
}],"total_token_count":646},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 14:23:04,303 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text
":"*   No major issues found."}],"role":"model"},"usageMetadata":{"candidatesTokenCount":7,"candidatesTokensDetails":[
{"modality":"TEXT","tokenCount":7}],"promptTokenCount":639,"promptTokensDetails":[{"modality":"TEXT","tokenCount":639}
],"totalTokenCount":646},"invocationId":"e-b509d6f7-618e-4654-b85a-2e75fa54ed5f","author":"CodeReviewerAgent","actions
":{"stateDelta":{"review_comments":"*   No major issues found."},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"3
7c907bd-5358-48b2-aede-55043294ce14","timestamp":1753087981.930124}
2025-07-21 14:23:04,308 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_
API_KEY.
2025-07-21 14:23:05,830 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash-001, backend: GoogleL
LMVariant.GEMINI_API, stream: False
2025-07-21 14:23:05,831 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
You are a Python Code Refactoring AI.
Your goal is to improve the given Python code based on the provided review comments.
 
  **Original Code:**
  ```python
  ```python
def count_ts(statement):
  """Counts the number of capital 'T' and small 't' in a string.
 
  Args:
    statement: The string to search.
 
  Returns:
    A tuple containing the number of capital 'T's and small 't's.
  """
 
  capital_t_count = statement.count('T')
  small_t_count = statement.count('t')
  return capital_t_count, small_t_count
 
# Example usage:
statement = "This is a Test statement to count t's and T's."
capital_t, small_t = count_ts(statement)
 
print(f"Number of capital T's: {capital_t}")
print(f"Number of small t's: {small_t}")
```
  ```
 
  **Review Comments:**
  *   No major issues found.
 
**Task:**
Carefully apply the suggestions from the review comments to refactor the original code.
If the review comments state "No major issues found," return the original code unchanged.
Ensure the final code is complete, functional, and includes necessary imports and docstrings.
 
**Output:**
Output *only* the final, refactored Python code block, enclosed in triple backticks (```python ... ```).
Do not add any other text before or after the code block.
 
 
You are an agent. Your internal name is "CodeRefactorerAgent".
 
 The description about you is "Refactors code based on review comments."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"write  python code that will count captil 'T' and small 't' in the given statment"}],"role":"user"}
 
{"parts":[{"text":"For context:"},{"text":"[CodeWriterAgent] said: ```python\ndef count_ts(statement):\n  \"\"\"Counts
 the number of capital 'T' and small 't' in a string.\n\n  Args:\n    statement: The string to search.\n\n  Returns:\n
    A tuple containing the number of capital 'T's and small 't's.\n  \"\"\"\n\n  capital_t_count = statement.count('T'
)\n  small_t_count = statement.count('t')\n  return capital_t_count, small_t_count\n\n# Example usage:\nstatement = \"
This is a Test statement to count t's and T's.\"\ncapital_t, small_t = count_ts(statement)\n\nprint(f\"Number of capit
al T's: {capital_t}\")\nprint(f\"Number of small t's: {small_t}\")\n```"}],"role":"user"}
{"parts":[{"text":"For context:"},{"text":"[CodeReviewerAgent] said: *   No major issues found."}],"role":"user"}
-----------------------------------------------------------
Functions:
 
-----------------------------------------------------------
 
2025-07-21 14:23:05,835 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 14:23:07,543 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
```python
def count_ts(statement):
  """Counts the number of capital 'T' and small 't' in a string.
 
  Args:
    statement: The string to search.
 
  Returns:
    A tuple containing the number of capital 'T's and small 't's.
  """
 
  capital_t_count = statement.count('T')
  small_t_count = statement.count('t')
  return capital_t_count, small_t_count
 
# Example usage:
statement = "This is a Test statement to count t's and T's."
capital_t, small_t = count_ts(statement)
 
print(f"Number of capital T's: {capital_t}")
print(f"Number of small t's: {small_t}")
```
 
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 08:50:00 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=1652","Alt-Svc"
:"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts
":[{"text":"```python\ndef count_ts(statement):\n  \"\"\"Counts the number of capital 'T' and small 't' in a string.\n
\n  Args:\n    statement: The string to search.\n\n  Returns:\n    A tuple containing the number of capital 'T's and s
mall 't's.\n  \"\"\"\n\n  capital_t_count = statement.count('T')\n  small_t_count = statement.count('t')\n  return cap
ital_t_count, small_t_count\n\n# Example usage:\nstatement = \"This is a Test statement to count t's and T's.\"\ncapit
al_t, small_t = count_ts(statement)\n\nprint(f\"Number of capital T's: {capital_t}\")\nprint(f\"Number of small t's: {
small_t}\")\n```\n"}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.008770455705358627}],"model_version":"g
emini-2.0-flash-001","usage_metadata":{"candidates_token_count":188,"candidates_tokens_details":[{"modality":"TEXT","t
oken_count":188}],"prompt_token_count":612,"prompt_tokens_details":[{"modality":"TEXT","token_count":612}],"total_toke
n_count":800},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 14:23:07,560 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text
":"```python\ndef count_ts(statement):\n  \"\"\"Counts the number of capital 'T' and small 't' in a string.\n\n  Args:
\n    statement: The string to search.\n\n  Returns:\n    A tuple containing the number of capital 'T's and small 't's
.\n  \"\"\"\n\n  capital_t_count = statement.count('T')\n  small_t_count = statement.count('t')\n  return capital_t_co
unt, small_t_count\n\n# Example usage:\nstatement = \"This is a Test statement to count t's and T's.\"\ncapital_t, sma
ll_t = count_ts(statement)\n\nprint(f\"Number of capital T's: {capital_t}\")\nprint(f\"Number of small t's: {small_t}\
")\n```\n"}],"role":"model"},"usageMetadata":{"candidatesTokenCount":188,"candidatesTokensDetails":[{"modality":"TEXT"
,"tokenCount":188}],"promptTokenCount":612,"promptTokensDetails":[{"modality":"TEXT","tokenCount":612}],"totalTokenCou
nt":800},"invocationId":"e-b509d6f7-618e-4654-b85a-2e75fa54ed5f","author":"CodeRefactorerAgent","actions":{"stateDelta
":{"refactored_code":"```python\ndef count_ts(statement):\n  \"\"\"Counts the number of capital 'T' and small 't' in a
 string.\n\n  Args:\n    statement: The string to search.\n\n  Returns:\n    A tuple containing the number of capital
'T's and small 't's.\n  \"\"\"\n\n  capital_t_count = statement.count('T')\n  small_t_count = statement.count('t')\n
return capital_t_count, small_t_count\n\n# Example usage:\nstatement = \"This is a Test statement to count t's and T's
.\"\ncapital_t, small_t = count_ts(statement)\n\nprint(f\"Number of capital T's: {capital_t}\")\nprint(f\"Number of sm
all t's: {small_t}\")\n```\n"},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"cecd6da5-030f-41f8-a5d0-7565cafcbda
f","timestamp":1753087984.307956}
INFO:     127.0.0.1:52759 - "GET /apps/app/users/user/sessions/053d9ddf-0603-4a77-98c8-2fa3a99a37d8 HTTP/1.1" 200 OK
INFO:     127.0.0.1:52759 - "GET /debug/trace/session/053d9ddf-0603-4a77-98c8-2fa3a99a37d8 HTTP/1.1" 200 OK

Parallel Agent:-

# Parallel Agent example

# Part of agent.py --> Follow https://google.github.io/adk-docs/get-started/quickstart/ to learn the setup
from google.adk.agents import LlmAgent, ParallelAgent, SequentialAgent

GEMINI_MODEL = "gemini-2.0-flash"  # model used in the run below

# --- 1. Define Researcher Sub-Agents (to run in parallel) ---

def add(object_id1: str, object_id2: str):
    """
    Args:
        object_id1 (str): First number
        object_id2 (str): Second number
    """
    print(f"\n--- {object_id1} --- {object_id2}")
    try:
        return int(object_id1) + int(object_id2)
    except ValueError as e:
        print(f"Error converting {object_id1} and {object_id2} to integers: {e}")
        return None


def Subtract(object_id1: str, object_id2: str):
    """
    Args:
        object_id1 (str): First number
        object_id2 (str): Second number
    """
    print(f"\n--- {object_id1} --- {object_id2}")
    try:
        return int(object_id1) - int(object_id2)
    except ValueError as e:
        print(f"Error converting {object_id1} and {object_id2} to integers: {e}")
        return None


def Multiply(object_id1: str, object_id2: str):
    """
    Args:
        object_id1 (str): First number
        object_id2 (str): Second number
    """
    print(f"\n--- {object_id1} --- {object_id2}")
    try:
        return int(object_id1) * int(object_id2)
    except ValueError as e:
        print(f"Error converting {object_id1} and {object_id2} to integers: {e}")
        return None


# Researcher 1: Addition of two values
researcher_agent_1 = LlmAgent(
    name="AddtionOfTwovalues",
    model=GEMINI_MODEL,
    instruction="""You are an AI Research Assistant specializing in addition of two values provided by the user.
    """,
    description="Agent that add two values.",
    tools=[add],
    # Store result in state for the merger agent
    output_key="researcher_agent_1_output_key"
)

# Researcher 2: Subtraction of two values
researcher_agent_2 = LlmAgent(
    name="SubstrationOfTwovalues",
    model=GEMINI_MODEL,
    instruction="""You are an AI Research Assistant specializing in substration of two values provided by the user.
    """,
    description="Agent that substract two values.",
    tools=[Subtract],
    # Store result in state for the merger agent
    output_key="researcher_agent_2_output_key"
)

# Researcher 3: Multiplication of two values
researcher_agent_3 = LlmAgent(
    name="MultiplicaitonOfTwovalues",
    model=GEMINI_MODEL,
    instruction="""You are an AI Research Assistant specializing in multiple two values provided by the user.
    """,
    description="Agent that multiply two values.",
    tools=[Multiply],
    # Store result in state for the merger agent
    output_key="researcher_agent_3_output_key"
)

# --- 2. Create the ParallelAgent (Runs researchers concurrently) ---
# This agent orchestrates the concurrent execution of the researchers.
# It finishes once all researchers have completed and stored their results in state.
parallel_research_agent = ParallelAgent(
    name="ParallelWebResearchAgent",
    sub_agents=[researcher_agent_1, researcher_agent_2, researcher_agent_3],
    description="Runs multiple research agents in parallel to gather information."
)

# --- 3. Define the Merger Agent (Runs *after* the parallel agents) ---
# This agent takes the results stored in the session state by the parallel agents
# and synthesizes them into a single, structured response with attributions.
merger_agent = LlmAgent(
    name="ParallelFormatAgent",
    model=GEMINI_MODEL,  # Or potentially a more powerful model if needed for synthesis
    instruction="""You are an AI Assistant responsible for combining research findings into a structured report.

**Input Summaries:**

*   **Addition:** {researcher_agent_1_output_key}
*   **Substract:** {researcher_agent_2_output_key}
*   **Multiply:** {researcher_agent_3_output_key}
""",
    description="Combines research findings from parallel agents into a structured, cited report, strictly grounded on provided inputs.",
    # No tools needed for merging
    # No output_key needed here, as its direct response is the final output of the sequence
)

# --- 4. Create the SequentialAgent (Orchestrates the overall flow) ---
# This is the main agent that will be run. It first executes the ParallelAgent
# to populate the state, and then executes the MergerAgent to produce the final output.
parallel_pipeline_agent = SequentialAgent(
    name="ParallelPipeline",
    # Run parallel research first, then merge
    sub_agents=[parallel_research_agent, merger_agent],
    description="Coordinates the parallel researchers and merges their results."
)

root_agent = parallel_pipeline_agent
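
Because the three math tools above are plain Python functions, you can sanity-check them directly before handing them to the agents. This quick snippet is not part of the original agent.py; it simply calls the tools with the same values used in the run below.

# Standalone check of the tool functions (illustrative only).
if __name__ == "__main__":
    print(add("8", "3"))       # expected: 11
    print(Subtract("8", "3"))  # expected: 5
    print(Multiply("8", "3"))  # expected: 24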

Output:-

(.venv) C:\vscode-python-workspace\adkagent>adk web
C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\cli\fast_api.py:354: UserWarning: [EXPERIMENTAL
] InMemoryCredentialService: This feature is experimental and may change or be removed in future versions without noti
ce. It may introduce breaking changes at any time.
  credential_service = InMemoryCredentialService()
C:\vscode-python-workspace\adkagent\.venv\Lib\site-packages\google\adk\auth\credential_service\in_memory_credential_se
rvice.py:33: UserWarning: [EXPERIMENTAL] BaseCredentialService: This feature is experimental and may change or be remo
ved in future versions without notice. It may introduce breaking changes at any time.
  super().__init__()
INFO:     Started server process [26104]
INFO:     Waiting for application startup.
 
+-----------------------------------------------------------------------------+
| ADK Web Server started                                                      |
|                                                                             |
+-----------------------------------------------------------------------------+
 
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     127.0.0.1:57378 - "GET /dev-ui/?app=app&session=4977fcb8-9c00-4a82-b572-acde5e9650ce HTTP/1.1" 304 Not Modified
INFO:     127.0.0.1:57379 - "GET /list-apps?relative_path=./ HTTP/1.1" 200 OK
INFO:     127.0.0.1:57379 - "GET /apps/app/users/user/sessions/4977fcb8-9c00-4a82-b572-acde5e9650ce HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:57379 - "GET /apps/app/eval_sets HTTP/1.1" 200 OK
INFO:     127.0.0.1:57379 - "GET /apps/app/eval_results HTTP/1.1" 200 OK
2025-07-21 14:45:22,546 - INFO - fast_api.py:475 - New session created
INFO:     127.0.0.1:57379 - "POST /apps/app/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:57379 - "GET /apps/app/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:57379 - "GET /apps/app/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:57379 - "POST /run_sse HTTP/1.1" 200 OK
2025-07-21 14:45:26,255 - INFO - envs.py:47 - Loaded .env file for app at C:\vscode-python-workspace\adkagent\app\.env
 
2025-07-21 14:45:26,257 - INFO - envs.py:47 - Loaded .env file for app at C:\vscode-python-workspace\adkagent\app\.env
 
2025-07-21 14:45:26,272 - WARNING - llm_agent.py:477 - Invalid config for agent structured_agent: output_schema cannot
 co-exist with agent transfer configurations. Setting disallow_transfer_to_parent=True, disallow_transfer_to_peers=Tru
e
2025-07-21 14:45:26,274 - INFO - agent_loader.py:95 - Found root_agent in app.agent
2025-07-21 14:45:26,277 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_
API_KEY.
2025-07-21 14:45:27,729 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash, backend: GoogleLLMVa
riant.GEMINI_API, stream: False
2025-07-21 14:45:27,732 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
You are an AI Research Assistant specializing in addition of two values provided by the user.
 
 
You are an agent. Your internal name is "AddtionOfTwovalues".
 
 The description about you is "Agent that add two values."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"object_id1 is 8 and object_id2 is 3"}],"role":"user"}
-----------------------------------------------------------
Functions:
add: {'object_id1': {'type': <Type.STRING: 'STRING'>}, 'object_id2': {'type': <Type.STRING: 'STRING'>}}
-----------------------------------------------------------
 
2025-07-21 14:45:27,734 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 14:45:27,742 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_
API_KEY.
2025-07-21 14:45:29,241 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash, backend: GoogleLLMVa
riant.GEMINI_API, stream: False
2025-07-21 14:45:29,242 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
You are an AI Research Assistant specializing in substration of two values provided by the user.
 
 
You are an agent. Your internal name is "SubstrationOfTwovalues".
 
 The description about you is "Agent that substract two values."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"object_id1 is 8 and object_id2 is 3"}],"role":"user"}
-----------------------------------------------------------
Functions:
Subtract: {'object_id1': {'type': <Type.STRING: 'STRING'>}, 'object_id2': {'type': <Type.STRING: 'STRING'>}}
-----------------------------------------------------------
 
2025-07-21 14:45:29,243 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 14:45:29,248 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_
API_KEY.
2025-07-21 14:45:30,751 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash, backend: GoogleLLMVa
riant.GEMINI_API, stream: False
2025-07-21 14:45:30,752 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
You are an AI Research Assistant specializing in multiple two values provided by the user.
 
 
You are an agent. Your internal name is "MultiplicaitonOfTwovalues".
 
 The description about you is "Agent that multiply two values."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"object_id1 is 8 and object_id2 is 3"}],"role":"user"}
-----------------------------------------------------------
Functions:
Multiply: {'object_id1': {'type': <Type.STRING: 'STRING'>}, 'object_id2': {'type': <Type.STRING: 'STRING'>}}
-----------------------------------------------------------
 
2025-07-21 14:45:30,754 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 14:45:31,651 - WARNING - types.py:5169 - Warning: there are non-text parts in the response: ['function_call
'], returning concatenated text result from text parts. Check the full candidates.content.parts accessor to get the fu
ll model response.
2025-07-21 14:45:31,652 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
None
-----------------------------------------------------------
Function calls:
name: add, args: {'object_id1': '8', 'object_id2': '3'}
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 09:12:24 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=829","Alt-Svc":
"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts"
:[{"function_call":{"args":{"object_id1":"8","object_id2":"3"},"name":"add"}}],"role":"model"},"finish_reason":"STOP",
"avg_logprobs":-0.000013339092468165539}],"model_version":"gemini-2.0-flash","usage_metadata":{"candidates_token_count
":11,"candidates_tokens_details":[{"modality":"TEXT","token_count":11}],"prompt_token_count":130,"prompt_tokens_detail
s":[{"modality":"TEXT","token_count":130}],"total_token_count":141},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 14:45:31,655 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"func
tionCall":{"id":"adk-f4fcdc41-0190-4688-8430-cff3fbd76557","args":{"object_id1":"8","object_id2":"3"},"name":"add"}}],
"role":"model"},"usageMetadata":{"candidatesTokenCount":11,"candidatesTokensDetails":[{"modality":"TEXT","tokenCount":
11}],"promptTokenCount":130,"promptTokensDetails":[{"modality":"TEXT","tokenCount":130}],"totalTokenCount":141},"invoc
ationId":"e-818bf53f-fd24-4e80-86e6-999ff8a66e6a","author":"AddtionOfTwovalues","actions":{"stateDelta":{},"artifactDe
lta":{},"requestedAuthConfigs":{}},"longRunningToolIds":[],"branch":"ParallelWebResearchAgent.AddtionOfTwovalues","id"
:"9e115405-59f3-4b96-9a7e-9abd85ec0c17","timestamp":1753089326.277404}
2025-07-21 14:45:31,658 - WARNING - types.py:5169 - Warning: there are non-text parts in the response: ['function_call
'], returning concatenated text result from text parts. Check the full candidates.content.parts accessor to get the fu
ll model response.
2025-07-21 14:45:31,660 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
None
-----------------------------------------------------------
Function calls:
name: Multiply, args: {'object_id1': '8', 'object_id2': '3'}
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 09:12:24 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=838","Alt-Svc":
"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts"
:[{"function_call":{"args":{"object_id1":"8","object_id2":"3"},"name":"Multiply"}}],"role":"model"},"finish_reason":"S
TOP","avg_logprobs":-0.00038685789331793785}],"model_version":"gemini-2.0-flash","usage_metadata":{"candidates_token_c
ount":11,"candidates_tokens_details":[{"modality":"TEXT","token_count":11}],"prompt_token_count":130,"prompt_tokens_de
tails":[{"modality":"TEXT","token_count":130}],"total_token_count":141},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
 
--- 8 --- 3
2025-07-21 14:45:31,664 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"func
tionCall":{"id":"adk-2fd44779-078a-4e86-9a7e-45514d45f017","args":{"object_id1":"8","object_id2":"3"},"name":"Multiply
"}}],"role":"model"},"usageMetadata":{"candidatesTokenCount":11,"candidatesTokensDetails":[{"modality":"TEXT","tokenCo
unt":11}],"promptTokenCount":130,"promptTokensDetails":[{"modality":"TEXT","tokenCount":130}],"totalTokenCount":141},"
invocationId":"e-818bf53f-fd24-4e80-86e6-999ff8a66e6a","author":"MultiplicaitonOfTwovalues","actions":{"stateDelta":{}
,"artifactDelta":{},"requestedAuthConfigs":{}},"longRunningToolIds":[],"branch":"ParallelWebResearchAgent.Multiplicait
onOfTwovalues","id":"a766c56d-fa0d-4851-8bbf-b4e2bf71fc5a","timestamp":1753089329.248395}
2025-07-21 14:45:31,665 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"func
tionResponse":{"id":"adk-f4fcdc41-0190-4688-8430-cff3fbd76557","name":"add","response":{"result":11}}}],"role":"user"}
,"invocationId":"e-818bf53f-fd24-4e80-86e6-999ff8a66e6a","author":"AddtionOfTwovalues","actions":{"stateDelta":{},"art
ifactDelta":{},"requestedAuthConfigs":{}},"branch":"ParallelWebResearchAgent.AddtionOfTwovalues","id":"610fb294-2068-4
be4-8b39-e3f8469d67f5","timestamp":1753089331.663679}
 
--- 8 --- 3
2025-07-21 14:45:31,682 - WARNING - _api_client.py:105 - Both GOOGLE_API_KEY and GEMINI_API_KEY are set. Using GOOGLE_
API_KEY.
2025-07-21 14:45:33,201 - INFO - google_llm.py:90 - Sending out request, model: gemini-2.0-flash, backend: GoogleLLMVa
riant.GEMINI_API, stream: False
2025-07-21 14:45:33,202 - INFO - google_llm.py:96 -
LLM Request:
-----------------------------------------------------------
System Instruction:
You are an AI Research Assistant specializing in addition of two values provided by the user.
 
 
You are an agent. Your internal name is "AddtionOfTwovalues".
 
 The description about you is "Agent that add two values."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"object_id1 is 8 and object_id2 is 3"}],"role":"user"}
{"parts":[{"function_call":{"args":{"object_id1":"8","object_id2":"3"},"name":"add"}}],"role":"model"}
{"parts":[{"function_response":{"name":"add","response":{"result":11}}}],"role":"user"}
-----------------------------------------------------------
Functions:
add: {'object_id1': {'type': <Type.STRING: 'STRING'>}, 'object_id2': {'type': <Type.STRING: 'STRING'>}}
-----------------------------------------------------------
 
2025-07-21 14:45:33,205 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 14:45:33,211 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"func
tionResponse":{"id":"adk-2fd44779-078a-4e86-9a7e-45514d45f017","name":"Multiply","response":{"result":24}}}],"role":"u
ser"},"invocationId":"e-818bf53f-fd24-4e80-86e6-999ff8a66e6a","author":"MultiplicaitonOfTwovalues","actions":{"stateDe
lta":{},"artifactDelta":{},"requestedAuthConfigs":{}},"branch":"ParallelWebResearchAgent.MultiplicaitonOfTwovalues","i
d":"094a22d9-2941-4d2b-9e36-a00cdae8e2ab","timestamp":1753089331.667663}
LLM Request:
-----------------------------------------------------------
System Instruction:
You are an AI Research Assistant specializing in multiple two values provided by the user.
 
 
You are an agent. Your internal name is "MultiplicaitonOfTwovalues".
 
 The description about you is "Agent that multiply two values."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"object_id1 is 8 and object_id2 is 3"}],"role":"user"}
{"parts":[{"function_call":{"args":{"object_id1":"8","object_id2":"3"},"name":"Multiply"}}],"role":"model"}
{"parts":[{"function_response":{"name":"Multiply","response":{"result":24}}}],"role":"user"}
-----------------------------------------------------------
Functions:
Multiply: {'object_id1': {'type': <Type.STRING: 'STRING'>}, 'object_id2': {'type': <Type.STRING: 'STRING'>}}
-----------------------------------------------------------
 
2025-07-21 14:45:34,709 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 14:45:34,717 - WARNING - types.py:5169 - Warning: there are non-text parts in the response: ['function_call
'], returning concatenated text result from text parts. Check the full candidates.content.parts accessor to get the fu
ll model response.
2025-07-21 14:45:34,718 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
None
-----------------------------------------------------------
Function calls:
name: Subtract, args: {'object_id2': '3', 'object_id1': '8'}
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 09:12:24 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=929","Alt-Svc":
"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts"
:[{"function_call":{"args":{"object_id2":"3","object_id1":"8"},"name":"Subtract"}}],"role":"model"},"finish_reason":"S
TOP","avg_logprobs":-0.0010413500395688143}],"model_version":"gemini-2.0-flash","usage_metadata":{"candidates_token_co
unt":11,"candidates_tokens_details":[{"modality":"TEXT","token_count":11}],"prompt_token_count":132,"prompt_tokens_det
ails":[{"modality":"TEXT","token_count":132}],"total_token_count":143},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 14:45:34,723 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"func
tionCall":{"id":"adk-6af6721e-2757-4d4f-9fb9-36c6e3799f6a","args":{"object_id2":"3","object_id1":"8"},"name":"Subtract
"}}],"role":"model"},"usageMetadata":{"candidatesTokenCount":11,"candidatesTokensDetails":[{"modality":"TEXT","tokenCo
unt":11}],"promptTokenCount":132,"promptTokensDetails":[{"modality":"TEXT","tokenCount":132}],"totalTokenCount":143},"
invocationId":"e-818bf53f-fd24-4e80-86e6-999ff8a66e6a","author":"SubstrationOfTwovalues","actions":{"stateDelta":{},"a
rtifactDelta":{},"requestedAuthConfigs":{}},"longRunningToolIds":[],"branch":"ParallelWebResearchAgent.SubstrationOfTw
ovalues","id":"16d448dd-bd5a-4e66-bf02-f5577e0f8812","timestamp":1753089327.741807}
 
--- 8 --- 3
2025-07-21 14:45:34,728 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"func
tionResponse":{"id":"adk-6af6721e-2757-4d4f-9fb9-36c6e3799f6a","name":"Subtract","response":{"result":5}}}],"role":"us
er"},"invocationId":"e-818bf53f-fd24-4e80-86e6-999ff8a66e6a","author":"SubstrationOfTwovalues","actions":{"stateDelta"
:{},"artifactDelta":{},"requestedAuthConfigs":{}},"branch":"ParallelWebResearchAgent.SubstrationOfTwovalues","id":"ea2
9d913-acaa-4827-9405-3ab22fff379c","timestamp":1753089334.726702}
LLM Request:
-----------------------------------------------------------
System Instruction:
You are an AI Research Assistant specializing in substration of two values provided by the user.
 
 
You are an agent. Your internal name is "SubstrationOfTwovalues".
 
 The description about you is "Agent that substract two values."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"object_id1 is 8 and object_id2 is 3"}],"role":"user"}
{"parts":[{"function_call":{"args":{"object_id2":"3","object_id1":"8"},"name":"Subtract"}}],"role":"model"}
{"parts":[{"function_response":{"name":"Subtract","response":{"result":5}}}],"role":"user"}
-----------------------------------------------------------
Functions:
Subtract: {'object_id1': {'type': <Type.STRING: 'STRING'>}, 'object_id2': {'type': <Type.STRING: 'STRING'>}}
-----------------------------------------------------------
 
2025-07-21 14:45:36,271 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 14:45:37,077 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
The result is 5.
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 09:12:29 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=707","Alt-Svc":
"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts"
:[{"text":"The result is 5."}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.009490013122558594}],"model_ve
rsion":"gemini-2.0-flash","usage_metadata":{"candidates_token_count":6,"candidates_tokens_details":[{"modality":"TEXT"
,"token_count":6}],"prompt_token_count":146,"prompt_tokens_details":[{"modality":"TEXT","token_count":146}],"total_tok
en_count":152},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 14:45:37,081 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
The sum of 8 and 3 is 11.
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 09:12:29 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=766","Alt-Svc":
"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts"
:[{"text":"The sum of 8 and 3 is 11."}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.0006124901656921094}]
,"model_version":"gemini-2.0-flash","usage_metadata":{"candidates_token_count":13,"candidates_tokens_details":[{"modal
ity":"TEXT","token_count":13}],"prompt_token_count":144,"prompt_tokens_details":[{"modality":"TEXT","token_count":144}
],"total_token_count":157},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 14:45:37,083 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text
":"The sum of 8 and 3 is 11."}],"role":"model"},"usageMetadata":{"candidatesTokenCount":13,"candidatesTokensDetails":[
{"modality":"TEXT","tokenCount":13}],"promptTokenCount":144,"promptTokensDetails":[{"modality":"TEXT","tokenCount":144
}],"totalTokenCount":157},"invocationId":"e-818bf53f-fd24-4e80-86e6-999ff8a66e6a","author":"AddtionOfTwovalues","actio
ns":{"stateDelta":{"researcher_agent_1_output_key":"The sum of 8 and 3 is 11."},"artifactDelta":{},"requestedAuthConfi
gs":{}},"branch":"ParallelWebResearchAgent.AddtionOfTwovalues","id":"95759712-8c2b-403f-954b-cef0b1cbb8bd","timestamp"
:1753089331.681961}
2025-07-21 14:45:37,085 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text
":"The result is 5."}],"role":"model"},"usageMetadata":{"candidatesTokenCount":6,"candidatesTokensDetails":[{"modality
":"TEXT","tokenCount":6}],"promptTokenCount":146,"promptTokensDetails":[{"modality":"TEXT","tokenCount":146}],"totalTo
kenCount":152},"invocationId":"e-818bf53f-fd24-4e80-86e6-999ff8a66e6a","author":"SubstrationOfTwovalues","actions":{"s
tateDelta":{"researcher_agent_2_output_key":"The result is 5."},"artifactDelta":{},"requestedAuthConfigs":{}},"branch"
:"ParallelWebResearchAgent.SubstrationOfTwovalues","id":"90755b8c-4733-496c-9497-1dc260bdb0f6","timestamp":1753089334.
737358}
 
LLM Response:
-----------------------------------------------------------
Text:
The multiplication of 8 and 3 is 24.
-----------------------------------------------------------
Function calls:
 
-----------------------------------------------------------
Raw response:
{"sdk_http_response":{"headers":{"Content-Type":"application/json; charset=UTF-8","Vary":"Origin, X-Origin, Referer","
Content-Encoding":"gzip","Date":"Mon, 21 Jul 2025 09:12:29 GMT","Server":"scaffolding on HTTPServer2","X-XSS-Protectio
n":"0","X-Frame-Options":"SAMEORIGIN","X-Content-Type-Options":"nosniff","Server-Timing":"gfet4t7; dur=791","Alt-Svc":
"h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000","Transfer-Encoding":"chunked"}},"candidates":[{"content":{"parts"
:[{"text":"The multiplication of 8 and 3 is 24."}],"role":"model"},"finish_reason":"STOP","avg_logprobs":-0.0028035981
72389544}],"model_version":"gemini-2.0-flash","usage_metadata":{"candidates_token_count":13,"candidates_tokens_details
":[{"modality":"TEXT","token_count":13}],"prompt_token_count":144,"prompt_tokens_details":[{"modality":"TEXT","token_c
ount":144}],"total_token_count":157},"automatic_function_calling_history":[]}
-----------------------------------------------------------
 
2025-07-21 14:45:37,110 - INFO - fast_api.py:847 - Generated event in agent run streaming: {"content":{"parts":[{"text
":"The multiplication of 8 and 3 is 24."}],"role":"model"},"usageMetadata":{"candidatesTokenCount":13,"candidatesToken
sDetails":[{"modality":"TEXT","tokenCount":13}],"promptTokenCount":144,"promptTokensDetails":[{"modality":"TEXT","toke
nCount":144}],"totalTokenCount":157},"invocationId":"e-818bf53f-fd24-4e80-86e6-999ff8a66e6a","author":"MultiplicaitonO
fTwovalues","actions":{"stateDelta":{"researcher_agent_3_output_key":"The multiplication of 8 and 3 is 24."},"artifact
Delta":{},"requestedAuthConfigs":{}},"branch":"ParallelWebResearchAgent.MultiplicaitonOfTwovalues","id":"fcd1a885-97df
-41e8-902e-f820530cc52a","timestamp":1753089333.218489}
2025-07-21 14:45:37,112 - ERROR - __init__.py:157 - Failed to detach context
 
LLM Request:
-----------------------------------------------------------
System Instruction:
You are an AI Assistant responsible for combining research findings into a structured report.
 
**Input Summaries:**
 
*   **Addtion:**
    The sum of 8 and 3 is 11.
 
*   **Sunstract:**
    The result is 5.
 
*   **Multiply:**
    The multiplication of 8 and 3 is 24.
 
 
 
You are an agent. Your internal name is "ParallelFormatAgent".
 
 The description about you is "Combines research findings from parallel agents into a structured, cited report, strict
ly grounded on provided inputs."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"object_id1 is 8 and object_id2 is 3"}],"role":"user"}
{"parts":[{"text":"For context:"},{"text":"[AddtionOfTwovalues] called tool `add` with parameters: {'object_id1': '8',
 'object_id2': '3'}"}],"role":"user"}
{"parts":[{"text":"For context:"},{"text":"[MultiplicaitonOfTwovalues] called tool `Multiply` with parameters: {'objec
t_id1': '8', 'object_id2': '3'}"}],"role":"user"}
{"parts":[{"text":"For context:"},{"text":"[AddtionOfTwovalues] `add` tool returned result: {'result': 11}"}],"role":"
user"}
{"parts":[{"text":"For context:"},{"text":"[MultiplicaitonOfTwovalues] `Multiply` tool returned result: {'result': 24}
"}],"role":"user"}
{"parts":[{"text":"For context:"},{"text":"[SubstrationOfTwovalues] called tool `Subtract` with parameters: {'object_i
d2': '3', 'object_id1': '8'}"}],"role":"user"}
{"parts":[{"text":"For context:"},{"text":"[SubstrationOfTwovalues] `Subtract` tool returned result: {'result': 5}"}],
"role":"user"}
{"parts":[{"text":"For context:"},{"text":"[AddtionOfTwovalues] said: The sum of 8 and 3 is 11."}],"role":"user"}
{"parts":[{"text":"For context:"},{"text":"[SubstrationOfTwovalues] said: The result is 5."}],"role":"user"}
{"parts":[{"text":"For context:"},{"text":"[MultiplicaitonOfTwovalues] said: The multiplication of 8 and 3 is 24."}],"
role":"user"}
-----------------------------------------------------------
Functions:
 
-----------------------------------------------------------
 
2025-07-21 14:45:38,549 - INFO - models.py:7421 - AFC is enabled with max remote calls: 10.
2025-07-21 14:45:39,739 - INFO - google_llm.py:175 -
LLM Response:
-----------------------------------------------------------
Text:
# Arithmetic Operations on Two Numbers
 
This report summarizes the results of basic arithmetic operations performed on the numbers 8 and 3.
 
## Addition
 
The sum of 8 and 3 is 11 (Addtion).
 
## Subtraction
 
Subtracting 3 from 8 results in 5 (Sunstract).
 
## Multiplication
 
The product of 8 and 3 is 24 (Multiply).
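
The log above also shows the state-passing mechanism that ties the pipeline together: each researcher agent writes its final text into session state through its output_key (visible in the stateDelta entries such as researcher_agent_1_output_key), and the formatter agent's instruction pulls those values back in as its "Input Summaries". As a quick recap, here is a condensed sketch of how such a fan-out/gather pipeline can be wired up. Agent names and state keys mirror the log, the instructions are abbreviated, only the addition branch is spelled out, and the root pipeline name is an assumption.

from google.adk.agents import LlmAgent, ParallelAgent, SequentialAgent

def add(object_id1: str, object_id2: str) -> dict:
    """Adds the two values, which arrive as strings (as seen in the log)."""
    return {"result": int(object_id1) + int(object_id2)}

addition_agent = LlmAgent(
    name="AddtionOfTwovalues",
    model="gemini-2.0-flash",
    description="Agent that add two values.",
    instruction="You are an AI Research Assistant specializing in addition of "
                "two values provided by the user.",
    tools=[add],
    output_key="researcher_agent_1_output_key",  # final text is saved to session state
)

# The subtraction and multiplication agents follow the same pattern, each with its
# own tool and output key (researcher_agent_2_output_key, researcher_agent_3_output_key).

parallel_agent = ParallelAgent(
    name="ParallelWebResearchAgent",
    sub_agents=[addition_agent],  # plus the subtraction and multiplication agents
)

merger_agent = LlmAgent(
    name="ParallelFormatAgent",
    model="gemini-2.0-flash",
    description="Combines research findings from parallel agents into a structured, "
                "cited report, strictly grounded on provided inputs.",
    instruction=(
        "You are an AI Assistant responsible for combining research findings into "
        "a structured report.\n\n"
        "**Input Summaries:**\n\n"
        "*   **Addition:** {researcher_agent_1_output_key}\n"
        # ...the subtraction and multiplication keys are injected the same way...
    ),
)

# Fan out first, then merge; the root agent's name here is an assumption.
root_agent = SequentialAgent(
    name="ParallelResearchPipeline",
    sub_agents=[parallel_agent, merger_agent],
)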

Custom Agent Example
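
Beyond workflow agents like the parallel pipeline above, ADK lets you define fully custom behaviour by subclassing BaseAgent and overriding _run_async_impl, an async generator that receives an InvocationContext and yields Event objects back to the runner. The example below reads the user's message from the context, branches on whether it contains "hello", records the reply in session state, and yields it as the agent's response.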

import asyncio
from google.adk.agents import BaseAgent
from google.adk.agents.invocation_context import InvocationContext
from google.genai import types
from google.adk.events import Event
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
 
class MyCustomAgent(BaseAgent):
    """
    An example of a custom agent in Google ADK.
    This agent demonstrates basic interaction and state management.
    """
    name: str = "MyCustomAgent"
    description: str = "A custom agent that processes user input and responds."
 
    async def _run_async_impl(self, ctx: InvocationContext):
        """
        Implements the core logic of the custom agent.
        This method is called when the agent is invoked.
        """
        # Read the user's message from the invocation context.
        user_input = ctx.user_content.parts[0].text
        # Perform custom logic based on the input
        if "hello" in user_input.lower():
            response_content = "Hello there! How can I help you today?"
        else:
            response_content = f"You said: '{user_input}'. I'm a custom agent and can process this further."
 
        # Update session state if necessary
        ctx.session.state["last_response"] = response_content
 
        # Yield an Event to send a response back
        yield Event(
            author=self.name,
            content=types.Content(parts=[types.Part(text=response_content)])
        )
 
# To use this agent, you would typically integrate it within an ADK application
# and potentially set up a runner to handle interactions.
# For example:
 
async def main():
    my_agent = MyCustomAgent()
    session_service = InMemorySessionService()
    # It's good practice to provide an app_name to the Runner.
    runner = Runner(agent=my_agent, session_service=session_service, app_name="MyCustomApp")
 
    # To run a query (in a simplified interactive environment):
    user_query = "hello, custom agent!"
    user_id = "user123"
    session_id = "session456"
 
    # Create a session first
    await session_service.create_session(
        app_name="MyCustomApp", user_id=user_id, session_id=session_id
    )
 
    # The user query is passed as new_message
    content = types.Content(parts=[types.Part(text=user_query)], role="user")
    print(content)
 
    print(f"--- Running query: {user_query} ---")
    async for event in runner.run_async(
        user_id=user_id, session_id=session_id, new_message=content
    ):
        if event.is_final_response() and event.content and event.content.parts:
            print(f"Agent > {event.content.parts[0].text}")
 
if __name__ == "__main__":
    asyncio.run(main())
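
Save the script (the run below uses a file named siddhuCustomAgentExample.py) and execute it with Python inside the activated virtual environment. The agent greets you when the input contains "hello" and otherwise echoes the message back.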

Output:

(.venv) C:\vscode-python-workspace\adkagent>python siddhuCustomAgentExample.py
parts=[Part(
  text='hi, custom agent!'
)] role='user'
--- Running query: hi, custom agent! ---
Agent > You said: 'hi, custom agent!'. I'm a custom agent and can process this further.
 
(.venv) C:\vscode-python-workspace\adkagent>python siddhuCustomAgentExample.py
parts=[Part(
  text='hello, custom agent!'
)] role='user'
--- Running query: hello, custom agent! ---
Agent > Hello there! How can I help you today?
 
(.venv) C:\vscode-python-workspace\adkagent>

This concludes our initial exploration of the Google Agent Development Kit. You've set up your environment, created an agent, interacted with it through the CLI and the web UI, integrated an external API, orchestrated several agents in parallel, and built a custom agent on top of BaseAgent. This foundation will let you build more complex and powerful AI agents with ADK.

Code is available at: https://github.com/shdhumale/app.git