AI Felt Like a Bubble Until I Built a Full Stack App Overnight

Last night I built a full-stack web application in under ten hours. I’m not a software engineer and have near-zero development experience, yet I shipped a tool that collects meeting inputs through a simple form, pulls details from internal directory and calendar APIs, formats a brief, and emails it to the intended recipients. In a large company, tasks like these are usually a time sink for busy mid-level employees. I got the job done in a single extended evening using Cline inside VS Code for orchestration, Harmony for the app framework, and JDK 17 for the runtime. I also used MCP, the Model Context Protocol, which lets an AI agent securely connect to tools, files, and services without hard-coding every integration.

Before this, I had already used AI to build and refine this website: Hugo for the static site, VS Code for writing, v0 by Vercel for quick UI sketches, GitHub for versioning, and Netlify for deploys. The loop stayed simple: write locally, preview, commit, ship. Cleaning navigation, linking related posts, and keeping front matter tidy gave me just enough “0 → 1” muscle to try something bigger.

By day I’m a technical product manager. I write specs, map systems, and define data flows, but I never write production code. I do, however, use Claude in Amazon Bedrock to tighten specs and draft plain-language strategy artifacts. For analytics, I anchor data in Amazon Redshift and ask the model to propose cleaner SQL first, which removes a lot of tedious effort: drafting joins, CASE statements, window functions, and the like. That intent-first, implementation-second process translated directly to building the app.

I ran the work as a string of short, pre-planned sessions in Cline, each with one goal and a set of simple success checks. We started with semantic HTML and core flows so the app stayed accessible and failed gracefully, then layered in JavaScript only where it was essential, e.g., live search. We began with mock services for the directory and calendar, then swapped in the real APIs via environment toggles once access issues were resolved. I integrated one API per session, added error handling, fallbacks, and clear user messages, and ended every session in a working state with validation notes and updated docs so the next step started clean.
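The mock-to-real swap can be sketched as a simple environment toggle. This is a hypothetical illustration, assuming a TypeScript service layer; the interface, class names, and the `USE_MOCKS`/`DIRECTORY_API_URL` variables are my own placeholders, not the app’s actual code.

```typescript
// Hypothetical sketch: one interface, two implementations, selected by env var.
interface DirectoryService {
  lookup(employeeId: string): Promise<{ name: string; email: string }>;
}

// The mock keeps the app buildable while API access is still being sorted out.
class MockDirectoryService implements DirectoryService {
  async lookup(_employeeId: string) {
    return { name: "Test User", email: "test.user@example.com" };
  }
}

// The real implementation calls the internal directory API (URL is a placeholder).
class RealDirectoryService implements DirectoryService {
  async lookup(employeeId: string) {
    const res = await fetch(`${process.env.DIRECTORY_API_URL}/employees/${employeeId}`);
    if (!res.ok) throw new Error(`directory lookup failed: ${res.status}`);
    return res.json();
  }
}

// Environment toggle: mocks by default; flip USE_MOCKS=false once access lands.
const directory: DirectoryService =
  process.env.USE_MOCKS === "false"
    ? new RealDirectoryService()
    : new MockDirectoryService();
```

Because both implementations share one interface, the rest of the code never changes when the toggle flips, which is what made integrating one API per session safe.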

Cline got me through the hardest stretches with ease. We pivoted mid-build to hosting the app on Harmony, and Cline swapped mock calls for production ones one at a time. It also scaffolded the data layer and validation, wired dependencies, and kept docs and build configs in sync as I changed logic at a furious pace. We tuned live employee search with a 300 ms debounce, a minimum-character rule, and clear loading states. When we hit permission and security-clearance walls at 95% readiness, I felt a pain my software engineering colleagues often talk about but one I had never experienced firsthand. Testing stayed lean: model-generated cases, focused deltas, targeted timeouts and retries, and a tiny reporting view. The result was a working prototype with backend logic, basic auth, a real database layer, and live docs, built by a product manager who until now didn’t really code.
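The live-search tuning above (a 300 ms debounce plus a minimum-character rule) can be sketched roughly like this, assuming a TypeScript frontend; the function and constant names are illustrative, not taken from the app.

```typescript
// Debounce: run `fn` only after `delayMs` of inactivity, so the
// employee-search API is not hit on every keystroke.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  delayMs: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

const MIN_QUERY_LENGTH = 2; // minimum-character rule: drop one-letter queries

// Illustrative handler: a real version would toggle a loading state and fetch.
const runSearch = debounce((query: string) => {
  if (query.length < MIN_QUERY_LENGTH) return;
  console.log(`searching for "${query}"...`); // placeholder for the API call
}, 300);
```

Each keystroke calls `runSearch`, but only the last call within a 300 ms quiet window actually fires, and short queries are dropped before they ever reach the API.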

This experience changed how I think about the “AI bubble.” A recent MIT report says many AI pilots fail because tools don’t learn or integrate; see coverage of MIT’s State of AI in Business 2025 report, for example in Forbes. Another argument says rule-based work is about to shift faster than incumbents expect; see Jonathan Gray in the Financial Times. My night build is a small, hands-on proof of the latter and an answer to the former: a non-developer shipped a useful tool by keeping steps small and context shared. More importantly, it felt natural to someone trained to think in systems. Years of writing specifications had already taught me to describe behavior step by step, and the AI agents made those instructions executable. My value moved from interpreting requirements to designing precise instructions that compile into working software.

For product managers, this is, for lack of a better word, game-changing. Our job is rapidly shifting from authoring specs and strategy papers to shaping live, testable systems. The technical bar to create is lower, while the cognitive bar, i.e., clarity of logic and precision of expression, is higher. AI isn’t replacing structured thought; it’s compressing the distance between thinking and doing, rewarding intention over repetition. Hype, failure, and wasted capital will still happen, as with every tech wave that births a new industry. What’s different now is the immediacy of utility, even in the infancy of the technology. When a product manager with no coding background can build a full-stack application in less than a day, the “AI is a speculative bubble” story stops matching reality. This looks less like hype and more like the arrival of revolutionary infrastructure.