AI Wrote My Code. But It Wasn't That Simple
0x41434f
A little while back, on March 31st, I shared an idea here on my blog. It was for a personal food plan system, something I wished existed for myself as an athlete. I wanted it to connect data from my Fitbit, Strava workouts, and even lab results to give me smart meal ideas based on how my body felt that day. At the time, it was just an idea on paper.
Recently, I started working on a separate project: a tool to help people who don't have a tech background. Lots of people are trying out AI tools that write code for them based on simple descriptions. My new tool aims to give them a simple way to check the code these AI tools create for common safety problems.
To build and test this scanner tool well, I needed a good test case: a real-world example of an app made by AI. What better candidate than an early version, a prototype, of that nutrition app idea I wrote about? I decided to use the same kind of AI coding assistant that the people using my scanner might try.
So, I decided to try "vibe coding." I recently wrote here (on April 5th) about how this feels like a "golden age" where building software seems easier than ever, thanks to AI. While I still see that amazing potential, this hands-on test quickly added a dose of practical reality to that hopeful view. You may have heard the term "vibe coding" before; AI researcher Andrej Karpathy coined it. Basically, you describe what you want in plain language, and an AI assistant writes the code. You guide it, test it, and refine it, kind of "going with the vibe." Here's a look at how that actually went.
My goal was to get a basic test version of the nutrition app idea running using the AI assistant. Maybe get a screen to show sample meal plans or use some demo data for workouts and lab results, based on my March blog post.
The AI tool I used had a strict limit: only four chat messages per day. This meant I had to give clear instructions. Getting even a simple test version running with fake data took two days and used up all eight messages.
Interestingly, a good chunk of those eight messages was spent fixing mistakes in the code the AI had just made. I would ask it to create a screen showing grocery items, and while it wrote code fast, sometimes that code would crash the app or show errors when I tried to run it.
I ran into some problems along the way. For example, the AI sometimes tried to style buttons using props (like style: { marginTop: number }) that the component library I was using didn't support. This caused errors telling me the 'style' property didn't exist for that button. Other times, it got confused about how the data should be organized, like the cholesterol numbers from lab results. This led to different errors (Argument of type '{...}' is not assignable...), basically meaning the AI had set up the data in the wrong shape. It often felt like a loop: get code, test it, find problems, then tell the AI how to fix them. My focus wasn't just on describing features; it shifted to debugging and improving what the AI gave me.
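To make the second kind of error concrete, here's a minimal TypeScript sketch of the data-shape mismatch described above. The LabResult interface and its field names are illustrative assumptions, not code from the real app:

```typescript
// Illustrative sketch: a declared shape for lab-result data.
interface LabResult {
  marker: string; // e.g. "Total cholesterol"
  value: number;  // numeric reading
  unit: string;   // e.g. "mg/dL"
}

function recordResult(result: LabResult): string {
  return `${result.marker}: ${result.value} ${result.unit}`;
}

// The AI-generated code passed something shaped differently, e.g. a flat
// object like { cholesterol: "180" }. TypeScript rejects that with:
//   Argument of type '{ cholesterol: string; }' is not assignable
//   to parameter of type 'LabResult'.
// The fix was simply to match the declared shape:
const fixed: LabResult = { marker: "Total cholesterol", value: 180, unit: "mg/dL" };
console.log(recordResult(fixed)); // "Total cholesterol: 180 mg/dL"
```

The compiler error looks intimidating, but it's just saying the object's fields don't line up with the type the function expects, which is exactly the kind of thing the AI kept getting wrong.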
So, how did this match up with the idea of vibe coding? The AI was really helpful for getting the basic structure, the starting setup (like the frame of a house), put together quickly. It made files, standard code layouts, and common bits of code much faster than I could have done by hand. That speed boost really helps.
But getting beyond that first step took more back-and-forth work. I needed to figure out why the errors happened so I could tell the AI how to fix them. Sometimes the AI needed a few tries or different instructions to correct its own mistakes, and sometimes I had to dig into the problem myself. It reminds me of a point AI researcher Simon Willison has made: once you carefully check, test, and understand the AI's code, it stops being just "vibe coding" (using code without reviewing it) and becomes more like using the AI as a capable coding assistant.
This whole experience, including dealing with the code's problems, directly related to why I did this: building a test case for my security scanner tool. The process itself was a good example of why such a scanner could be useful.
It felt relevant: here I was, working through the buggy code made by an AI, all to build a test app for a tool meant to find problems in AI-made code. It showed the possible problems for non-technical users who might try vibe coding. They might accept faulty or even unsafe code without realizing it, just because the AI made it. This matches worries experts have about hidden bugs and safety risks when people use AI code without fully understanding it. Seeing these problems myself really showed why my scanner tool could be useful for people navigating this new situation.
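As a rough illustration of what a tool like this can catch, here's a minimal TypeScript sketch of one common check: flagging hardcoded secrets in source code. The patterns, names, and overall approach are hypothetical simplifications for this post, not my actual scanner:

```typescript
// A finding reported by the (hypothetical) scanner.
interface Finding {
  line: number;
  message: string;
}

// Very simplified patterns for secrets left directly in source code.
const SECRET_PATTERNS: RegExp[] = [
  /api[_-]?key\s*[:=]\s*['"][^'"]+['"]/i,
  /password\s*[:=]\s*['"][^'"]+['"]/i,
];

function scanForSecrets(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const pattern of SECRET_PATTERNS) {
      if (pattern.test(text)) {
        findings.push({ line: i + 1, message: "Possible hardcoded secret" });
        break; // one finding per line is enough
      }
    }
  });
  return findings;
}
```

A real scanner does far more than pattern matching, but even this toy version shows the idea: a non-technical user can't be expected to spot an API key pasted into AI-generated code, while an automated check flags it instantly.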
To help people deal with these exact kinds of issues, here's how I can assist using insights from my scanner and experience:
- For $1000: I'll check your app or software that was made using AI. I'll tell you what safety problems it might have and which ones you should focus on fixing first.
- For $2000: I'll find the safety weak spots in your AI-generated code, explain them to you, and I'll fix the 10 most important problems for you.
- For $5000: If you've built an AI agent (like a bot or assistant) using any Python or TypeScript Agent Framework, I can work with you one-on-one. I'll check it for safety problems and explain what I find. After that, we can make a separate plan and agreement for me to do the fixes.
Looking back, my story started with an idea for a personal nutrition app (from my March 31st post). That idea became the plan for a test using vibe coding. And the purpose of that test was to make a real test app for my new security scanning tool.
The process of vibe coding the test version showed me a lot. AI assistants are great at speeding up the start of building something, making the basic code quickly. Yet, turning that first code into something that works well involves a back-and-forth of testing, fixing, and improving. The easy feeling of the AI making code often turned into the real work of finding and fixing problems.
This experience wasn't discouraging; it actually taught me a lot. It made me feel stronger about the security scanning tool by showing the kinds of problems users might run into. It showed the need for tools that help everyone, especially non-technical users, build things better and safer in this time of AI helping us make things. Vibe coding has big benefits, but getting the most out of it means watching it carefully and being ready to try again when things go wrong.