
I Tried Letting Antigravity Build An Agent For Me. Here’s What Actually Happened

TL;DR

I used Antigravity to build a guest checkout flow plus abandoned-cart recovery emails. This article breaks down where it excels and where it stumbles.

I have used a long list of AI coding tools over the past few years. Most of them are built around the same pattern: inline suggestions, chat-style prompts, and occasional refactors.

Then Antigravity appeared, powered by Gemini 3 Pro: an agent-driven IDE where background processes can read your repo, propose changes, run commands, and interact with your app’s runtime environment.

Google Antigravity post

When I first opened it, I saw those agents spin up automatically, parse the project structure, touch multiple files, run package scripts, and even click through the app in a built-in browser. It behaved less like an autocomplete layer and more like a set of automated workflows acting directly on the codebase.

That made me curious. Could this system actually deliver a complete feature if I stepped back and let it run?

To find out, I assigned Antigravity a real task from my own project and observed how far the agents could get with minimal intervention.

What follows is a breakdown of that experiment, what worked, what didn’t, and where agent-driven development currently stands from a practical engineering perspective.

Why I Did This Experiment#

I wanted to understand how well Antigravity’s agent-driven workflow performs on realistic engineering tasks. Most AI tools can handle small changes, but a real feature spans multiple layers of a codebase. That’s why I designed a focused experiment:

Experiment: Guest Checkout + Abandoned Cart Email Recovery using Resend/Nodemailer with a Tensorlake analytics hook.

This feature was ideal because it touches several parts of a typical project. It includes backend routes, database updates, UI work, an email flow, and a small analytics integration.
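To make the scope concrete, here is a sketch of the kind of data shapes the feature implies. These names are illustrative, not the repo's actual schema:

```typescript
// Hypothetical data shapes for the guest checkout feature.
// The real schema lives in the project's PostgreSQL migrations.
interface GuestCart {
  id: string;
  guestEmail: string; // captured at checkout, reused for recovery emails
  items: { productId: string; quantity: number; priceCents: number }[];
  status: "active" | "abandoned" | "converted";
  updatedAt: Date;
}

// Cart total in cents, summed across line items.
function cartTotalCents(cart: GuestCart): number {
  return cart.items.reduce((sum, i) => sum + i.quantity * i.priceCents, 0);
}
```

Even this small sketch shows why the feature is a good test: the same shapes have to stay consistent across the database, the Express routes, and the React UI.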

I ran the experiment to answer a few practical questions:

  • Can autonomous agents complete work that crosses backend, frontend, database, and external services?
  • Do they keep context when modifying multiple files and systems?
  • How much oversight does a developer actually need to provide?

The goal was simple. I wanted to see how far the agents could take a meaningful feature with minimal intervention and whether agent-driven development can support real-world workflows.

Here is the flow of the experiment:

Experiment flow diagram for email sending

The Setup#

To keep the experiment grounded, I used a real project from my existing codebase. The stack is fairly standard and represents what many teams use today:

  • Frontend: React with TypeScript
  • Backend: Node and Express
  • Database: PostgreSQL
  • Auth: Basic JWT authentication
  • Emails: Resend or Nodemailer
  • Analytics: Tensorlake

Because the feature involved sending recovery emails, I also needed a Resend API key. Setting that up took about two minutes. It’s just a matter of creating a Resend account, grabbing the API key, and dropping it into your .env file.
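Once the key is in `.env`, the email side is mostly a matter of building a payload and handing it to Resend's Node SDK. A minimal sketch, where `buildRecoveryEmail` is a hypothetical helper and the sender address is a placeholder:

```typescript
// Minimal sketch of the recovery-email payload, assuming RESEND_API_KEY
// is set in .env. buildRecoveryEmail is a hypothetical helper; sending
// goes through Resend's official Node SDK.
interface RecoveryEmail {
  from: string;
  to: string;
  subject: string;
  html: string;
}

function buildRecoveryEmail(guestEmail: string, cartUrl: string): RecoveryEmail {
  return {
    from: "store@example.com", // must be a domain verified in Resend
    to: guestEmail,
    subject: "You left something in your cart",
    html: `<p>Your cart is waiting. <a href="${cartUrl}">Finish checking out</a>.</p>`,
  };
}

// Sending (commented out so the sketch stays side-effect free):
// import { Resend } from "resend";
// const resend = new Resend(process.env.RESEND_API_KEY);
// await resend.emails.send(buildRecoveryEmail(email, url));
```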

Antigravity runs inside a unified environment, and its agents have access to several built-in tools:

  • Code Agent, which edits and refactors files
  • Terminal Agent, which runs scripts, migrations, and tests
  • Browser Agent, which interacts with the running application
  • Research Agent, which pulls patterns or references when needed
  • Agent Manager, which coordinates all of them

These agents share context and can act independently. They can open files, update logic, install packages, modify migrations, and verify behavior in the browser.

For the experiment, I placed the project on a clean branch and provided a clear feature description. After that, I stepped back and let the agents decide how to approach the work.

How to Execute This Experiment#

This is the exact flow I used so that another developer could reproduce the experiment using the same codebase. The project I worked with is available here.

1. Prepare the Repo#

Before letting any agents touch the code, make sure the project is in a stable state.

  • Clone the repo and install dependencies
    • git clone https://github.com/Studio1HQ/Ecommerce-platform
    • npm install or pnpm install
  • Create a new branch from main
    • Example: feat/guest-checkout-ag-experiment
  • Confirm the app runs locally without errors
    • npm run dev or your project's start script
  • Run the existing test suite
    • npm test or the equivalent command

The goal here is simple: give Antigravity a clean baseline so any failures that follow belong to the experiment, not leftover issues.

2. Open the Project in Antigravity#

  • Open Antigravity
  • Load the existing repo for this experiment
  • Wait for the initial indexing or analysis of the project to finish

Once indexing is complete, the agents have a basic understanding of the codebase's structure.

3. Send the Main Experiment Prompt#

This is the exact initial prompt I would use inside Antigravity to kick off the experiment.

```
You are working inside my existing project.

Feature to implement:
- Add a "Guest checkout" flow for users who are not logged in.
- Implement abandoned cart email recovery using Resend or Nodemailer.
- Add a simple Tensorlake analytics hook for cart events.

Project context:
- Frontend: React with TypeScript.
- Backend: Node with Express.
- Database: PostgreSQL.
- Auth: JWT based.
- Emails can use either Resend or Nodemailer, pick one and wire it cleanly.
- Tensorlake should be used only for a minimal event tracking integration.

Constraints:
- Treat this as a real production feature, not a demo.
- Keep changes scoped and readable.
- Prefer small, focused commits and clear structure.
- You should break this feature into missions and execute them using the Code, Terminal, and Browser tools.
- I want you to handle as much as possible. I will only step in if you get stuck or break something repeatedly.

Deliverables:
- Guest checkout flow end to end.
- Abandoned cart recovery emails with a reasonable trigger condition.
- Tensorlake event tracking for at least "cart created" and "cart abandoned".
- Tests updated or added where appropriate.

First, respond with a clear mission plan. Then start executing it step by step.
```

This sets expectations, defines the stack, and tells the agent to create a mission plan first instead of editing files immediately.

4. Review and Adjust the Mission Plan#

Antigravity should reply with something like a breakdown of tasks. For example:

  • Design the guest checkout data model
  • Add database changes for guest carts and orders
  • Implement backend routes for guest checkout
  • Implement abandoned cart identification and email sending
  • Add frontend components for guest checkout
  • Insert Tensorlake tracking
  • Add or update tests

You do not need this exact list. You just need it to be coherent.

At this step, you:

  • Check that it understood the feature
  • Ask for small adjustments if something is clearly missing

5. Let the Agents Execute Across the Stack#

Now you let Antigravity do the heavy lifting. Typical actions you will see:

  • Code Agent
    • Creates or updates Express routes for guest checkout
    • Adds controllers or services for cart and order handling
    • Writes or updates TypeScript types and interfaces
    • Creates React components or pages for guest checkout
  • Terminal Agent
    • Runs migrations for new tables or columns
    • Executes test suites
    • Installs dependencies such as Resend, Nodemailer, or the Tensorlake SDK
  • Browser Agent
    • Opens the local app
    • Walks through the guest checkout UI
    • Verifies that the flow works end-to-end
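
To give a feel for what the Code Agent produces on the backend, here is a hypothetical sketch of the validation logic a guest checkout route would wrap. The field names are assumptions, not the repo's actual API:

```typescript
// Hypothetical request validation for a guest checkout route. In the
// actual project this logic would sit inside an Express POST handler.
interface GuestCheckoutRequest {
  email: string;
  items: { productId: string; quantity: number }[];
}

// Returns a list of validation errors; an empty list means the
// request is acceptable.
function validateGuestCheckout(body: GuestCheckoutRequest): string[] {
  const errors: string[] = [];
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(body.email)) errors.push("invalid email");
  if (body.items.length === 0) errors.push("cart is empty");
  for (const item of body.items) {
    if (item.quantity < 1) errors.push(`bad quantity for ${item.productId}`);
  }
  return errors;
}
```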

Your job at this stage is to observe.

Only intervene when:

  • It repeatedly breaks the same thing
  • It introduces an obviously wrong design decision
  • It gets stuck in a loop of failing tests and incremental fixes that do not converge
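
One design decision worth watching during this stage is the abandoned-cart trigger condition. A reasonable minimal version is time-based: a cart that is still active but untouched for a cutoff window counts as abandoned. A sketch with illustrative names:

```typescript
// Time-based abandoned-cart detection. Field and status names are
// illustrative; the real repo's schema may differ.
interface CartRow {
  id: string;
  status: "active" | "converted";
  updatedAt: Date;
}

// Carts still active whose last update is older than the cutoff window.
function findAbandoned(carts: CartRow[], cutoffHours: number, now: Date): CartRow[] {
  const cutoffMs = cutoffHours * 60 * 60 * 1000;
  return carts.filter(
    (c) => c.status === "active" && now.getTime() - c.updatedAt.getTime() > cutoffMs
  );
}
```

In production this check would typically run as a scheduled job (or a SQL query) that feeds the email sender.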

6. Verify the Flow with the Browser Agent#

Once the main missions complete, explicitly ask the agents to verify the behavior.

Example prompt:

```
Use the Browser and Terminal tools to verify the complete flow:
- Start a new cart as a guest user.
- Proceed through the guest checkout UI.
- Leave a cart in an abandoned state and trigger the recovery email logic.
- Confirm that Tensorlake tracking events are being sent where expected.

Report what you tested and what worked or failed.
```

This forces a structured test pass rather than assuming the feature works.

7. Perform a Manual Review#

At the end, you should still review everything as a developer.

  • Look at the diff for each mission
  • Check routes, models, and migrations
  • Check how email sending is wired
  • Verify the Tensorlake calls do not leak sensitive data
  • Run tests yourself
  • Try the flow manually in the browser
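
The Tensorlake privacy check is easier if events pass through an allowlist before they are sent, so nothing like an email address can leak by accident. A hypothetical sketch (the event and field names are assumptions, not Tensorlake's API):

```typescript
// Allowlist-based event sanitization before sending to analytics.
// Event names and fields are illustrative assumptions.
type CartEvent = {
  event: "cart_created" | "cart_abandoned";
  cartId: string;
  totalCents: number;
};

// Deliberately keeps only the allowlisted fields; email, address,
// and anything else on the raw object is dropped.
function toAnalyticsEvent(raw: Record<string, unknown>): CartEvent {
  return {
    event: raw.event as CartEvent["event"],
    cartId: String(raw.cartId),
    totalCents: Number(raw.totalCents),
  };
}
```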

If needed, you can ask Antigravity to clean up small issues or style problems with targeted prompts.

You can find the execution of Antigravity in this repo.

Outcome#

Here is the entire execution:


Here is the email:

Antigravity generated email for abandoned cart

What Antigravity Did Surprisingly Well

A few things stood out once the agents started working through the feature.

  • The task breakdown actually made sense: The mission plan looked like something a real dev would sketch out before starting. Nothing over-engineered, nothing missing.
  • The agents handed work off cleanly: Code edits, migrations, test runs, and browser checks happened in a reasonable order. It felt coordinated rather than chaotic.
  • Boilerplate wasn’t a mess: The generated routes, controllers, and React components were straightforward. I didn’t have to untangle odd patterns or rewrite everything.
  • Data stayed consistent across the stack: Field names, types, and payload shapes lined up. I didn’t see the usual “backend calls it one thing, frontend calls it another” issue.
  • The email flow was wired up correctly: Resend/Nodemailer setup usually gets messy, but the structure here was clear and easy to follow.
  • Tensorlake integration was small and sensible: It added a couple of event hooks without turning the code into an analytics playground.
  • Quick feedback loops: When something broke, the agents patched it fast without spiraling into nonsense fixes.

Where It Still Stumbled (Minor, but Honest)

The backend work was mostly solid, but the UI side definitely exposed some weak spots. The agents could generate components and wiring fast, but getting everything to actually behave the way I wanted took multiple iterations.

A few things stood out:

  • Frontend was the hardest part for the agents: They could scaffold React components quickly, but the details were often off. State handling, validation, and edge cases needed several rounds of fixes.
  • Connecting frontend and backend wasn’t always smooth: Endpoints existed, components existed, but stitching them together required back-and-forth corrections. The agents didn’t always keep both sides in sync on the first pass.
  • Debugging took way longer than generation: The feature would “finish” in minutes, but debugging the UI and flow took a few hours. The agents helped, but they didn’t magically remove the pain points of front-end troubleshooting.

Even with the debugging overhead, this is a feature that would normally take me two to four days end-to-end. With Antigravity, it landed in a few hours. Not perfect, but undeniably faster.

The Final Verdict

By the end of the experiment, the feature shipped. It needed a few rounds of debugging, mostly on the frontend, but the entire flow was working: guest checkout, abandoned cart recovery emails, and the Tensorlake event hook.

The time savings were real. What would normally take two to four days of manual work ended up condensed into a few hours of guiding and debugging the agents. The system is not flawless, but it moves fast enough that the rough edges still net out in your favor.

Antigravity makes the most sense when you need:

  • multi-file scaffolding
  • routine backend wiring
  • repetitive refactors
  • quick prototyping of end-to-end flows

It’s less ideal when you need tight UI polish, careful architectural decisions, or detailed business logic that isn’t spelled out clearly.

Overall, it feels less like a replacement for a developer and more like an accelerator for one. When the agents stay within context, the speed boost is dramatic. When they drift, it still takes less time to correct them than to write everything yourself.

Checklist for Your Own Antigravity Experiment

  • Start on a clean branch and confirm your project runs locally with all tests passing.
  • Set up required environment variables, including your Resend API key, and install the Antigravity browser extension so the Browser Agent can interact with your app.
  • Write a clear, scoped feature request that spans multiple areas of the stack but isn’t overly complex.
  • Have Antigravity generate a mission plan first, then review and adjust it before execution begins.
  • Let the Code, Terminal, and Browser agents run the workflow end-to-end with minimal intervention.
  • Step in only when the agents lose context, loop, or consistently misinterpret something.
  • Use the Browser Agent to verify the complete flow once the missions finish.
  • Do a final manual review to check migrations, API wiring, UI behavior, and any email or analytics logic.

The best way to understand agent-driven development is to see it operate inside your own project. Try a contained feature, let the agents run, and watch what happens.

Arindam Majumder

Developer Advocate at Tensorlake

I’m a developer advocate, writer, and builder who enjoys breaking down complex tech into simple steps, working demos, and content that developers can act on. My blogs have crossed a million views across platforms, and I create technical tutorials on YouTube focused on AI, agents, and practical workflows. I contribute to open source, explore new AI tooling, and build small apps and prototypes to show developers what’s possible with today’s models.
