Large Language Models (LLMs) are not just about text—they’re reshaping how we build Full Stack applications. Imagine seeing a website you love and instantly replicating its look and feel in your own project. That’s exactly what we explored in this Proof of Concept (POC).
Step 1: Capture the Design
We started by taking a snapshot of the target site:
[Screenshot of the target site]
This image served as the foundation for generating a design.json.
Step 2: Generate design.json
Using CTRL+L, we uploaded the captured image. The tool automatically created a design.json file that represents the UI structure and styling.
Think of design.json as a blueprint for your frontend—it tells your project how to look.
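The post doesn't reproduce the generated file itself, but a blueprint of this kind typically captures design tokens such as colors, typography, and spacing. Below is a minimal, hypothetical sketch in TypeScript terms; the field names and structure are illustrative, not the tool's actual output.

// Hypothetical shape of a design.json blueprint; the real file produced
// by the tool may use different field names and nesting.
interface DesignTokens {
  colors: Record<string, string>;        // named brand colors
  typography: {
    fontFamily: string;
    baseSize: string;                    // e.g. "16px"
    headings: Record<string, string>;    // font size per heading level
  };
  spacing: Record<string, string>;       // spacing scale
}

// Example instance matching the interface above.
const design: DesignTokens = {
  colors: { primary: "#1a73e8", background: "#ffffff", text: "#202124" },
  typography: {
    fontFamily: "Inter, sans-serif",
    baseSize: "16px",
    headings: { h1: "2.25rem", h2: "1.5rem" },
  },
  spacing: { sm: "8px", md: "16px", lg: "32px" },
};

However the tool actually structures the file, the key point is that it is machine-readable, so an agent can map it onto the components of an existing project.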
Step 3: Integrate into Frontend Project
Once the design.json was ready, we placed it inside our frontend project.
Then, we invoked the following prompt in CTRL+L:
@frontend change the ui as per the @design.json
Step 4: Let the Agent Work
As soon as the prompt was executed, our agent began transforming the UI.
Within moments, the project’s interface was updated to match the design we had captured.
[Screenshot of the updated project UI]
Conclusion
This experiment demonstrates how LLMs + design.json can revolutionize frontend development:
Rapid UI transformation without manual coding
Consistent look and feel across projects
Accelerated prototyping for Full Stack developers
By combining AI-driven design extraction with prompt-based UI updates, developers can move from idea to implementation faster than ever.
This blog post chronicles a troubleshooting journey while setting up JUnit tests for a Spring Boot backend application, highlighting common obstacles and their solutions.
The Initial Attempt and Quota Limits
We attempted to use an AI tool, often referred to internally as “Google Antigravity,” to help generate backend JUnit test cases for our spring-boot folder.
Unfortunately, during this initial phase, the free account limit for the AI tool was reached. This required a quick shift to an alternative account to continue the test generation process.
It’s a common hurdle in development: tools are essential, but resource constraints (like API limits or quotas) must be managed.
Encountering a Test Failure
After generating and incorporating the necessary test files (for example, for an Authentication Controller), the next step was to execute them using the Maven build tool.
We ran the specific test class using the command:
mvn test -Dtest=AuthControllerTest
This execution resulted in a test failure, as shown in the console output.
Errors like this can stem from the project’s dependencies or configuration, but, as we discovered, this one was caused by an incompatibility with the Java Development Kit (JDK) version on the environment path.
The JDK Compatibility Solution
To resolve the test failure, we consulted the error details and determined that the issue was JDK version compatibility: the project was built for JDK 11, but the environment path was pointing to a different version or an incorrect location.
The critical fix involved updating the system or project configuration, typically the JAVA_HOME environment variable and the corresponding PATH entry, to explicitly point at the correct JDK 11 installation. The path we configured was:
C:\Program Files\Java\jdk-11.0.10\bin
By correctly configuring the environment to use the JDK 11 binaries, we ensured that all dependencies and compilation steps aligned with the required version.
Successful Test Execution
With the JDK path corrected, we executed the full suite of backend tests using the standard Maven command:
mvn test
The test run completed successfully, confirming that the JUnit test cases were correctly implemented and executed against the Spring Boot application.
This journey highlights the importance of checking JDK and environment compatibility when troubleshooting backend test failures. A simple path change was the final step to a successful test suite!
This post details a hands-on exercise focused on integrating comprehensive frontend validation into an existing full-stack application using the Google Antigravity IDE and its powerful Code Agent feature. The goal was to sequentially implement several crucial validation rules to enhance user experience and data integrity across various authentication flows, all driven by natural language prompts to the AI agent.
1. Project Scaffolding and Toolset
Our starting point was a base full-stack application, whose code structure and initial state were derived from a pre-existing tutorial project. This provided a live application environment where we could immediately observe the agent’s impact.
The primary tool for this exercise was the Google Antigravity IDE, leveraging its integrated AI Coding Agent, accessible via the Ctrl+L “code with agent” command. This feature allows developers to describe the required code changes in plain language, trusting the agent to analyze the project context and execute the necessary modifications.
2. Agent-Driven Validation Implementation
We proceeded through the validation requirements one by one, focusing on the agent’s ability to interpret generic prompts, identify affected components, and correctly implement the logic.
Task 1: Comprehensive Email Format Validation
The first step was to enforce a valid email format across all relevant forms. A key observation was that the email field was present in both the Signup and Reset Password components.
We provided the agent with a single, generic prompt:
“Add validation to check that the email is in a proper format.”
The agent demonstrated its contextual awareness by successfully:
Identifying the email input field across multiple files (specifically within the Signup and Reset Password screens).
Implementing the requisite regular expression or library-based validation logic (e.g., using a schema validation library or HTML5 input types) in both components simultaneously.
This highlights the agent’s capability for efficient, multi-file updates based on a single, high-level instruction.
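The generated code isn’t shown in the post, but a format check of the kind described above can be sketched as a small shared helper. The regex, names, and messages below are illustrative, not the agent’s actual output.

// Hypothetical email-format validator shared by the Signup and
// Reset Password components; the agent's generated code may differ.
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

export function isValidEmail(email: string): boolean {
  return EMAIL_PATTERN.test(email.trim());
}

// Example usage in a submit handler: returns an error message or null.
export function emailError(email: string): string | null {
  return isValidEmail(email) ? null : "Please enter a valid email address.";
}

Centralizing the check in one helper is what makes a multi-file update cheap: both forms can call the same function instead of duplicating a regex.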
Task 2: Login Form Mandates
Next, we focused on ensuring user credentials were provided before allowing a login attempt.
Required Validation:
Username is mandatory.
Password is mandatory.
Agent Prompt:
“User name and password mandate before clicking the login button.”
The agent implemented the logic to disable the login button or display an error message if the respective input fields were empty upon submission, ensuring the form’s dirty state and validity state were correctly managed.
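The post doesn’t include the resulting code, but the behavior can be sketched as a simple required-field check that gates the submit action (all names are illustrative):

// Hypothetical login-form validation; the agent's actual implementation
// and framework bindings (React state, form library, etc.) may differ.
interface LoginForm {
  username: string;
  password: string;
}

export function loginErrors(form: LoginForm): string[] {
  const errors: string[] = [];
  if (!form.username.trim()) errors.push("Username is required.");
  if (!form.password.trim()) errors.push("Password is required.");
  return errors;
}

// The login button stays disabled (or an error is shown on submit)
// while loginErrors(form) is non-empty.
export const canLogin = (form: LoginForm): boolean =>
  loginErrors(form).length === 0;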
Task 3: Signup Form Mandates
The signup process typically requires more information, demanding stricter validation.
Required Validation:
Username is mandatory.
Email is mandatory.
Password is mandatory.
Agent Prompt:
“User name, email and password mandate before signup the user.”
The agent added the required-field validation for all three inputs in the signup form, showing that it can apply validation rules tailored to each form’s context even after earlier, more generic checks (like the email format validation) had already been added.
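A plausible sketch of the combined signup validation, composing the new mandatory-field rules with the email-format rule from Task 1 into per-field error messages (again, names and messages are illustrative, not the agent’s actual code):

// Hypothetical signup-form validation; the agent's generated code may differ.
interface SignupForm {
  username: string;
  email: string;
  password: string;
}

export function signupErrors(
  form: SignupForm
): Partial<Record<keyof SignupForm, string>> {
  const errors: Partial<Record<keyof SignupForm, string>> = {};
  if (!form.username.trim()) errors.username = "Username is required.";
  if (!form.email.trim()) errors.email = "Email is required.";
  else if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(form.email.trim())) {
    errors.email = "Please enter a valid email address.";
  }
  if (!form.password.trim()) errors.password = "Password is required.";
  return errors;
}

// The signup button is enabled only when the error map is empty.
export const canSignup = (form: SignupForm): boolean =>
  Object.keys(signupErrors(form)).length === 0;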
Task 4: Reset Password Mandates
Finally, we addressed the validation necessary for the password recovery flow.
Required Validation:
Email is mandatory.
New Password is mandatory.
Agent Prompt:
“Email and new password mandate before clicking reset password.”
This change ensured that a user could not proceed with the password reset without providing both the account email and the new credentials, reinforcing security and preventing submission of incomplete form data.
3. Conclusion and Functional Verification
Following the sequential implementation of all four validation tasks, the final crucial step was to perform a comprehensive functional test of the entire application.
The goal of this verification was to confirm two things:
All new validation rules (format checks and required fields) were working correctly across the Login, Signup, and Reset Password flows.
The agent’s changes had not introduced any breaking changes or regressions into the existing application functionality.
The successful completion of this exercise confirmed the efficacy of using an AI Coding Agent for fast, context-aware implementation of declarative validation logic. The Google Antigravity Agent proved capable of handling both focused, form-specific rules and broad, multi-file updates from a single natural language instruction, significantly accelerating the development process for front-end validation.