I Was Skeptical of the AI Bubble. Then I Built a Full-Stack App in Under 10 Hours
Last night I built a full-stack web application in under ten hours. I’m not a software engineer and have near-zero development experience, yet I shipped a tool that collects meeting inputs through a simple form, pulls details from internal directory and calendar APIs, formats a brief, and emails it to the intended recipients. In a large company, tasks like these are usually a time sink for busy mid-level employees. I got the job done in a single extended evening using Cline inside VS Code for orchestration, Harmony for the app framework, and JDK 17 for the runtime. I also used MCP, the Model Context Protocol that lets an AI agent securely connect to tools, files, and services without hard-coding every integration.
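In rough terms, the flow is: form input in, directory and calendar lookups, formatted brief out. The sketch below is a minimal TypeScript illustration of that shape, not the actual app: the DIRECTORY_URL and CALENDAR_URL environment variables, the endpoint paths, and the response fields are all assumptions for the example.

```ts
interface MeetingInput {
  organizerEmail: string;
  attendeeEmails: string[];
  meetingId: string;
}

async function buildBrief(input: MeetingInput): Promise<string> {
  // Look up each attendee in the (hypothetical) internal directory API.
  const people = await Promise.all(
    input.attendeeEmails.map(async (email) => {
      const res = await fetch(
        `${process.env.DIRECTORY_URL}/users?email=${encodeURIComponent(email)}`
      );
      if (!res.ok) throw new Error(`Directory lookup failed for ${email}`);
      return res.json() as Promise<{ displayName?: string }>;
    })
  );

  // Pull the meeting slot from the (hypothetical) calendar API.
  const calRes = await fetch(`${process.env.CALENDAR_URL}/events/${input.meetingId}`);
  if (!calRes.ok) throw new Error("Calendar lookup failed");
  const event = (await calRes.json()) as { title: string; start: string };

  // Format a plain-text brief; in the real app this gets emailed to recipients.
  return [
    `Meeting: ${event.title} (${event.start})`,
    `Organizer: ${input.organizerEmail}`,
    `Attendees: ${people.map((p) => p.displayName ?? "unknown").join(", ")}`,
  ].join("\n");
}
```

The point isn’t the specific code; it’s that the whole tool is a short chain of lookups and formatting, which is exactly the kind of work that eats hours when done by hand.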
Before this, I had already used AI to build and refine this website: Hugo for the static site, VS Code for writing, v0 by Vercel for quick UI sketches, GitHub for versioning, and Netlify for deploys. The loop stayed simple: write locally, preview, commit, ship. Cleaning navigation, linking related posts, and keeping front matter tidy gave me just enough “0 → 1” muscle to try something bigger.
By day I’m a technical product manager. I write specs, map systems, and define data flows, but I never write production code. I do, however, use Claude in Amazon Bedrock to tighten specs and draft plain-language strategy artifacts. For analytics, I anchor data in Amazon Redshift and ask the model to propose cleaner SQL first, which removes a lot of the tedium of writing joins, CASE statements, window functions, and the like. That habit of describing intent first and implementing second translated directly to the build.
I ran the work as a string of short, pre-planned sessions in Cline, each with one goal and a set of simple success checks. Together, we started with semantic HTML and core flows so the app stayed accessible and failed gracefully, then layered JavaScript only where it earned its keep. To avoid dead time, we began with mock services for the directory and calendar and swapped in the real APIs via environment toggles once access cleared. I integrated one API per session, added error handling, fallbacks, and clear user messages, and ended every session in a working state with validation notes and updated docs so the next step started clean.
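The toggle itself is simple. Here is a minimal sketch of the idea, assuming a USE_MOCK_SERVICES flag and a DirectoryService interface named for this example rather than taken from the project:

```ts
interface DirectoryService {
  findPerson(query: string): Promise<{ name: string; email: string } | null>;
}

// Canned data so the UI and flows can be built before API access clears.
const mockDirectory: DirectoryService = {
  async findPerson(_query) {
    return { name: "Test User", email: "test.user@example.com" };
  },
};

class RealDirectoryClient implements DirectoryService {
  constructor(private baseUrl: string) {}

  async findPerson(query: string) {
    const res = await fetch(`${this.baseUrl}/people?q=${encodeURIComponent(query)}`);
    if (!res.ok) return null; // fail soft; the UI shows a clear message instead
    return res.json();
  }
}

// One switch point; nothing else in the app knows which backend it got.
export const directory: DirectoryService =
  process.env.USE_MOCK_SERVICES === "true"
    ? mockDirectory
    : new RealDirectoryClient(process.env.DIRECTORY_URL ?? "");
```

Because the rest of the app only ever sees the interface, swapping mock for real is a one-line environment change rather than a refactor.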
Cline got me through the hardest stretches with ease. We pivoted mid-build to an enterprise console, and Cline kept the plan modular while we swapped mock calls for production ones one at a time. It also scaffolded the data layer and validation, wired dependencies, and kept docs and build configs in sync as I changed logic. We tuned live employee search with a 300 ms debounce, a minimum-character rule, and clear loading states. When we hit permission walls at 95% readiness, Cline taught me an important development lesson: involve security and platform teams early. Testing stayed lean: model-generated cases, focused deltas, targeted timeouts and retries, and a tiny reporting view. The result was a working prototype, built by someone who doesn’t usually code, with backend logic, basic auth, a real database layer, and live docs.
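For flavor, the search behavior amounts to roughly the following. The 300 ms debounce and the minimum-character rule come straight from the build; the /api/employees endpoint, the MIN_CHARS value of 2, and the render callback are stand-ins I’ve assumed for the sketch.

```ts
const DEBOUNCE_MS = 300;
const MIN_CHARS = 2;

let timer: ReturnType<typeof setTimeout> | undefined;

async function searchEmployees(query: string): Promise<string[]> {
  const res = await fetch(`/api/employees?q=${encodeURIComponent(query)}`);
  if (!res.ok) return [];
  return res.json();
}

function onSearchInput(
  query: string,
  render: (state: { loading: boolean; results: string[] }) => void
): void {
  if (timer) clearTimeout(timer);

  // Below the minimum length, clear results instead of hammering the API.
  if (query.trim().length < MIN_CHARS) {
    render({ loading: false, results: [] });
    return;
  }

  // Show a loading state immediately, then query only after typing pauses.
  render({ loading: true, results: [] });
  timer = setTimeout(async () => {
    const results = await searchEmployees(query.trim());
    render({ loading: false, results });
  }, DEBOUNCE_MS);
}
```

The debounce keeps the directory API from being hit on every keystroke, and the explicit loading state is what makes the search feel responsive rather than broken while results are in flight.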
This experience changed how I think about the “AI bubble.” One recent MIT study argues that many AI pilots fail because the tools don’t learn or integrate; see coverage of MIT’s State of AI in Business 2025 report, for example in Forbes. Another argument holds that rule-based work is about to shift faster than incumbents expect; see Jonathan Gray in the Financial Times. My night-build is a small, hands-on proof of the latter and an answer to the former: a non-developer shipped a useful tool by keeping steps small and context shared. More importantly, it felt natural to someone trained to think in systems. Years of writing specifications had already taught me to describe behavior step by step, and the AI agents made those instructions executable. My value moved from interpreting requirements to designing precise instructions that compile into working software.
For product management, this matters. The job is shifting from authoring documents to shaping live, testable systems. The technical bar to create is lower, while the cognitive bar, i.e., clarity of logic and precision of expression, is higher. AI isn’t replacing structured thought; it’s compressing the distance between thinking and doing, rewarding intention over repetition. Hype, failure, and wasted capital will still happen, as with every tech wave. What’s different now is the immediacy of utility. When a product manager with no coding background can build a full-stack application in less than a day, the “AI is a speculative bubble” story stops matching reality. This looks less like hype and more like the arrival of a new creative infrastructure for work.