End to End Projects With AI: A Strict, Verifiable Workflow
AI can speed you up, but it can also lie.
This post is not a tutorial and it is not a flex. It is me explaining how I actually build end to end projects with AI: what I’ve built, what I’ve struggled with, and how I’ve learned to make systems work instead of just talking about them.
If you only look at my repos, it can look like I’m cranking out full stack apps, dashboards, WebGPU demos, AWS deployments, Docker setups, CI pipelines, auth flows, and UI systems nonstop. That part is real. The part that isn’t real is the idea that I sit down and type perfect solutions from memory like a textbook engineer. That has never been me.
What I am good at is steering a build. I can describe behavior clearly. I can read code once it exists. I can trace a system end to end. I can debug when something breaks. I can deploy and keep poking until the live system behaves. AI just accelerates the front half of that loop. I still own whether the thing actually works.
There is one rule I follow across all of this: I do not claim something is built unless I can run it, build it, or point to the exact change that made it work. If I can’t prove it in a repo or a live deploy, I don’t talk like it’s done.
Where my baseline actually is
I have a B.S. in Web Development. I’ve worked in real codebases. I’ve shipped real projects. I’ve done an AWS Cloud Support internship where I lived inside AWS consoles, labs, logs, runbooks, and troubleshooting workflows. I’ve built a Gatsby portfolio, React apps, Node backends, WebGPU demos, and deployed systems to GitHub Pages and AWS.
I do not have a computer science background. I do not write algorithms from memory. I do not sit at a whiteboard and derive solutions from scratch. I learn by touching systems, breaking them, and fixing them. That is consistent across my projects and across my internship.
Once a system exists, I am comfortable inside it. I can follow requests, state, data flow, permissions, logs, and deployment pipelines. That is the foundation I build on.
How I actually start projects
I do not start with architecture diagrams. I start with behavior.
My first prompt to AI is always plain language. “I want a full stack app where users can sign up, log in, and post to a feed.” “I want a clean WebGPU page that renders a triangle and then a cube I can flip between.” “I want a Node backend with auth and a React front end that consumes it.”
I talk like a normal person because that is how I think about systems at the beginning. AI fills in the technical scaffolding. My job is to stop it from wandering into overbuilt nonsense.
Once the idea is clear, I let AI propose an initial folder structure. I do not accept it blindly. I look at it and ask a simple question: if I come back to this in three days, will I understand where things live? If the answer is no, I change it.
Then I ask for a single runnable slice. One entry file. One route. One page. One WebGPU init. Something I can paste in and run. It almost never works on the first try. That is expected.
From there, the real work begins. I run it. It breaks. I read the error. I check logs. I inspect DevTools. I paste context back to AI. I make one small change at a time. I keep going until the local system behaves. Then I move to deployment and repeat the same process in a live environment.
I do not architect everything up front. I build until friction appears, then I solve that friction. Over time, structure emerges. That is how every real project I’ve shipped has grown.
Where AI helps and where I take over
AI is good at giving me a fast first draft. Boilerplate. Syntax I don’t memorize. Scaffolding. Quick translations from plain language to code. That is useful because it gets me to something I can run instead of staring at an empty folder.
AI is also unreliable. It invents file paths. It switches patterns mid-build. It hallucinates APIs. It misses security concerns. It cannot see my actual environment. It cannot see my logs. It cannot see my deployment dashboards.
So the split in practice looks like this:
| Area | Where AI helps | Where I step in |
|---|---|---|
| Scaffolding | Drafts structures and starter code quickly | Deciding if the structure actually fits how I work |
| Syntax | Fills in language details | Verifying it matches real versions in my repo |
| First pass logic | Translates behavior into a starting implementation | Testing edge cases and real runtime behavior |
| Debugging | Suggests fixes when I paste errors | Reading actual logs, DevTools, and system output |
| Security | Suggests patterns if prompted | Making sure keys, roles, and flows are safe |
| Deployment | Writes Dockerfiles or configs | Running builds, fixing IAM, DNS, SSL, and environment issues |
| Direction | Generates options fast | Choosing what keeps the system understandable and maintainable |
This is not theory. This is how every project I’ve built with AI has actually gone. AI accelerates the start. I own correctness, deployment, and final shape.
What makes this real instead of performative
I do not measure progress by how much code AI wrote. I measure progress by whether the system runs and behaves in a real environment.
When something fails, I do not guess. I open logs. I open DevTools. I inspect network requests. I tail Docker output. I read CI logs. I look at AWS dashboards when I’m deploying cloud resources. That is where real answers live.
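One way I make “does it actually run” concrete is a throwaway smoke script against the real endpoints. Everything here is a placeholder sketch: the base URL, the paths, and the helper names are mine for illustration, not from any specific project.

```javascript
// Throwaway smoke check: hit real endpoints, record what actually came back.
// The base URL and paths passed in are illustrative placeholders.
async function smoke(base, paths) {
  const results = [];
  for (const p of paths) {
    try {
      const res = await fetch(base + p); // Node 18+ ships a global fetch
      results.push({ path: p, status: res.status, ok: res.ok });
    } catch (err) {
      // Connection refused, DNS failure, etc. still count as a result.
      results.push({ path: p, status: null, ok: false, error: err.message });
    }
  }
  return results;
}

// Collapse the results into a single verdict I can read at a glance.
function summarize(results) {
  return results.every((r) => r.ok) ? "PASS" : "FAIL";
}

// Example:
// smoke("http://localhost:3000", ["/health", "/api/posts"])
//   .then((r) => console.log(summarize(r), r));
```

A “PASS” here is evidence from the real system, not a claim from a chat window. That is the difference I care about.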
AI only sees the text I paste into chat. I see the entire system. That is why I can tell when AI is confidently wrong.
Deployment is the final reality check. AI cannot log into AWS. It cannot wire IAM roles correctly in my account. It cannot fix DNS. It cannot debug SSL. It cannot clear a bad service worker cache. When the live site is broken, it is me in a terminal and a console making it behave.
The cycle behind every project I ship
Every project I’ve completed follows the same loop.
I describe behavior. AI drafts a starting point. I run it. It breaks. I debug. It works locally. I deploy. It breaks again. I debug in the live environment. It works in production. I clean up the UI and wording. I document enough that someone else can understand it. Then I move on.
There is no magic. Just repetition, logs, friction, and stubbornness.
Why this approach fits me
I learn by doing. I learned AWS by living in consoles, labs, logs, and runbooks during my internship. I learned web development by shipping projects that broke in new ways every week. I learned AI-assisted building by being burned by hallucinated code enough times to stop trusting it blindly.
This loop fits how my brain works. Touch the system. Break the system. Fix the system. Repeat.
Closing
I am not a textbook engineer. I am a builder who uses AI as a tool, not as a replacement for thinking. I ship real systems, debug real failures, and deploy real projects.
If you want someone who can memorize algorithms and write solutions from scratch at a whiteboard, that is not me.
If you want someone who can enter an existing system, understand how it works, use AI to move faster, debug what breaks, and deliver working features end to end, that is exactly what I do.