Let me be real with you. When I first heard the term “vibe coding,” I thought it was a joke. Just describe what you want in plain English, let an AI write all the code, skip the boring parts, and ship. Sounds amazing, right?
Until you realize what you are actually shipping.
The term was coined by Andrej Karpathy, former Tesla AI lead and one of the sharpest minds in the AI space, who described it as “fully giving in to the vibes, embracing exponentials, and forgetting that the code even exists.” It was meant to capture the new wave of AI-assisted development. But that last part, forgetting that the code even exists, is exactly where vibe coding security risks start piling up fast.
And now there is data to back this up.
Table of Contents
- What Is Vibe Coding, Really?
- The Numbers Do Not Lie: 45% of AI Code Has Security Flaws
- Real-World Scenarios Where Vibe Coding Went Wrong
- Top Vibe Coding Security Risks You Need to Know
- So Why Is Vibe Coding a Trap?
- How to Code Fast Without Killing Your Security Posture
- Final Thoughts
What Is Vibe Coding, Really?
Vibe coding is a development approach where you use natural language prompts to generate code through AI tools like Cursor, Windsurf, Bolt, or Claude. Instead of writing logic line by line, you tell the AI what you want and it builds it for you. You review the output (or sometimes you do not even do that), tweak a few things, and deploy.
For solo builders, startup founders without a technical background, and developers who want to move fast, this feels like a superpower. And honestly, for prototypes and MVPs, it can be genuinely useful. But the moment you start pushing that AI-generated code into production systems, handling real user data, or building anything with authentication and API integrations, you are entering dangerous territory.
The core philosophy of pure vibe coding is speed above everything. And speed above everything is a great way to introduce vulnerabilities that will haunt you for months.
The Numbers Do Not Lie: 45% of AI Code Has Security Flaws
I want you to sit with this number for a second. The Veracode 2025 GenAI Code Security Report analyzed over 100 large language models across 80 distinct coding tasks. The conclusion? 45% of all AI-generated code introduces security vulnerabilities. Nearly half.
And here is the part that really got me: newer or larger models did not perform meaningfully better. The report found that LLMs consistently failed to secure code against common attacks like Cross-Site Scripting (CWE-80) and Log Injection (CWE-117) in over 85% of test cases. These are not obscure, edge-case attack vectors. These are the bread-and-butter vulnerabilities that security teams deal with every single week.
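To make Log Injection (CWE-117) concrete, here is a minimal sketch of the flaw: attacker-controlled input containing a newline forges a second, official-looking log entry. The function names are illustrative, not from any specific framework.

```python
# Sketch of Log Injection (CWE-117): a newline in user input lets an
# attacker forge extra log lines that look like real events.

def log_event_unsafe(entries, username):
    entries.append(f"LOGIN user={username}")  # vulnerable: logged verbatim

def log_event_safe(entries, username):
    # Neutralize CR/LF so one event can never span multiple log lines.
    sanitized = username.replace("\r", "\\r").replace("\n", "\\n")
    entries.append(f"LOGIN user={sanitized}")

attacker = "guest\nLOGIN user=admin result=SUCCESS"
unsafe, safe = [], []
log_event_unsafe(unsafe, attacker)  # entry now contains a forged second line
log_event_safe(safe, attacker)      # newline escaped; still one line
```

The fix is one line of escaping, which is exactly the kind of detail an LLM optimizing for “working” code tends to skip.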
This is what I call the “Garbage In, Gospel Out” problem. AI models are trained on massive datasets from public repositories like GitHub. That data contains great code. It also contains years of insecure patterns, deprecated libraries, and bad practices. The model does not know the difference; it learns to reproduce whatever pattern it has seen most frequently. And unfortunately, insecure code is very well represented in public codebases.
So when your vibe-coded app ships, it is not just carrying one developer’s blind spots. It is potentially carrying the accumulated blind spots of thousands of developers whose flawed code was part of the training data.
Real-World Scenarios Where Vibe Coding Went Wrong
This is not theoretical. Here are some concrete examples of what vibe coding security risks look like in the real world.
The Moltbook Data Leak
Moltbook, a social network built almost entirely through vibe coding by AI agents, made headlines in early 2025. A misconfigured Supabase database, the kind of configuration detail that gets glossed over when you are moving at AI speed, exposed 1.5 million API keys and 35,000 user email addresses directly to the public internet. No sophisticated hack was involved, just shortcuts that nobody caught because nobody was really reading the code.
The Snake Game with Remote Code Execution
Databricks’ AI Red Team ran an experiment where they used Claude to build a multiplayer Snake game entirely through vibe coding. The game worked perfectly. It also contained a critical Remote Code Execution (RCE) vulnerability because the network layer used Python’s pickle module to serialize data without any validation. A malicious client could send crafted payloads to execute arbitrary code on any connected machine. The code “just worked,” which is exactly why nobody caught the flaw until a security review.
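A minimal sketch shows why unpickling untrusted bytes is so dangerous: pickle invokes a payload’s __reduce__ during deserialization, so whoever controls the bytes controls what code runs on the receiving machine. The payload below is deliberately harmless; a real attacker would smuggle something like os.system instead.

```python
# Sketch of the pickle flaw: deserializing attacker-controlled bytes
# executes an attacker-chosen callable.
import json
import pickle

class Payload:
    def __reduce__(self):
        # A real attacker would return (os.system, ("<malicious command>",));
        # here the smuggled callable is a harmless eval for demonstration.
        return (eval, ("6 * 7",))

wire = pickle.dumps(Payload())   # what a hostile game client could send
result = pickle.loads(wire)      # merely loading the bytes runs eval()
print(result)                    # 42: attacker-chosen code just executed

# Safer: a data-only format such as JSON cannot smuggle callables.
move = json.loads('{"player": "p1", "dx": 1, "dy": 0}')
```

For untrusted network input, a schema-constrained format like JSON (plus validation) carries data without ever carrying code.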
Hardcoded Credentials in Login Systems
A very common vibe coding scenario: you ask an AI to generate a login system or database connection. The AI inserts placeholder credentials like db_password = "test123" or a real-looking API key directly in the source code. Developers who do not review what was generated push this to GitHub. Automated scanners pick it up within minutes. This is not hypothetical; it happens all the time.
Here is a simple example of what that looks like:
# What AI-generated code might look like
import psycopg2

conn = psycopg2.connect(
    host="prod-db.company.com",
    database="users",
    user="admin",
    password="SuperSecret2024!"  # Hardcoded credential: never do this
)
A secure version would pull from environment variables:
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ.get("DB_HOST"),
    database=os.environ.get("DB_NAME"),
    user=os.environ.get("DB_USER"),
    password=os.environ.get("DB_PASSWORD")
)
The difference is obvious when you know what to look for. The problem is that vibe coders, by design, are often not looking.
Top Vibe Coding Security Risks You Need to Know
Beyond hardcoded credentials, here are the most common vibe coding security risks that show up in AI-generated codebases:
Prompt Injection Through Dependencies
AI coding assistants can be manipulated through the code they are asked to analyze. A malicious comment embedded in a file you ask the AI to review can redirect the AI’s behavior entirely. This is an extension of prompt injection attacks, which I have covered in detail before; read my deep dive on mitigating prompt injection attacks in LLM applications here.
Insecure Cryptography Usage
AI models frequently suggest outdated hashing algorithms like MD5 for password storage. MD5 can be cracked in seconds with modern hardware. A vibe coder who does not know the difference between MD5 and Argon2 will accept the AI’s output because the code compiles and appears to work, right up until a breach happens.
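The contrast is easy to show. Argon2 requires a third-party library (such as argon2-cffi), so this hedged sketch uses the standard library’s PBKDF2 to illustrate the same principle: a unique salt plus many iterations makes offline brute force vastly more expensive than a single fast hash.

```python
# Contrast: fast unsalted hash vs. slow, salted key derivation (PBKDF2
# here as a stdlib stand-in for Argon2).
import hashlib
import hmac
import os

password = b"hunter2"

# What AI output often suggests: unsalted MD5, crackable in seconds.
weak = hashlib.md5(password).hexdigest()

# Slow, salted derivation: store the salt alongside the derived key.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

def verify(candidate, salt, stored):
    digest = hashlib.pbkdf2_hmac("sha256", candidate, salt, 600_000)
    return hmac.compare_digest(digest, stored)  # constant-time compare

print(verify(b"hunter2", salt, strong))  # True
print(verify(b"wrong", salt, strong))    # False
```

In production, prefer a maintained password-hashing library over hand-rolled verification; the point here is only the structural difference between the two approaches.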
Broken Authentication Logic
Authentication is one of the most security-sensitive parts of any application. AI models, optimizing for functional code rather than secure code, often generate authentication flows that skip edge cases: session invalidation on logout, token expiry checks, or rate limiting on login endpoints. These gaps are exactly what attackers look for.
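Two of those edge cases, server-side expiry and invalidation on logout, can be sketched in a few lines. The names and the in-memory session store below are illustrative assumptions, not any particular framework’s API.

```python
# Minimal sketch of session handling with the edge cases AI-generated
# auth flows often skip: expiry checks and server-side logout.
import secrets
import time

SESSION_TTL = 30 * 60   # 30 minutes
sessions = {}           # token -> (user, issued_at); illustrative store

def login(user):
    token = secrets.token_urlsafe(32)   # unguessable session token
    sessions[token] = (user, time.time())
    return token

def logout(token):
    # Invalidate server-side, not just by deleting the client's cookie.
    sessions.pop(token, None)

def current_user(token):
    entry = sessions.get(token)
    if entry is None:
        return None                     # unknown or logged-out token
    user, issued_at = entry
    if time.time() - issued_at > SESSION_TTL:
        sessions.pop(token, None)
        return None                     # expired tokens must be rejected
    return user

t = login("alice")
print(current_user(t))   # alice
logout(t)
print(current_user(t))   # None: the token no longer works
```

A flow that “works” without the expiry check or the server-side pop will pass every happy-path test and still hand attackers a replayable token.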
Supply Chain Risks
AI tools recommend third-party packages without verifying their current security status. A package that was safe when the model was trained may have been compromised since. Vibe coding accelerates the intake of dependencies without the due diligence that a manual security review would catch.
OWASP LLM Top 10 Violations
Many of the risks above map directly to the OWASP Top 10 for LLM Applications, a framework that specifically documents how AI-powered development introduces new attack surfaces. If you are building with AI tools and have not reviewed this framework yet, that is a gap worth closing. Check out this breakdown of OWASP Top 10 for LLM Applications on CyberSecurityWaala.
So Why Is Vibe Coding a Trap?
Here is the thing that makes vibe coding genuinely dangerous, not just risky: the false sense of confidence it creates.
When you write code manually and something breaks, you usually understand why. You wrote it, you know the logic, and you can debug it. With vibe coding, you are often running code you do not fully understand. The application looks great. The UI works. The API responds. Everything feels fine until it is not.
Karpathy himself, who coined the term, has since warned that without proper oversight, AI agents can simply “generate slop.” His point is that as we rely more on AI to write code, our primary job shifts from writing to reviewing. And reviewing AI-generated code for security issues requires exactly the kind of deep technical knowledge that vibe coding was supposed to make unnecessary.
That is the trap. Vibe coding promises to democratize development. But the security responsibility does not disappear; it just gets deferred. And deferred security debt always comes due, usually at the worst possible moment.
There is also a regulatory dimension here. The EU’s Cyber Resilience Act requires software manufacturers to follow secure-by-design principles, conduct mandatory risk assessments, and provide security updates for at least five years. Shipping AI-generated code without proper review is not just a technical risk; it is a legal one in an increasing number of jurisdictions.
How to Code Fast Without Killing Your Security Posture
I am not saying stop using AI tools. I use them. Most good security professionals do. The goal is responsible AI-assisted development, not AI-free development. Here is how to do it right:
Define Security Requirements Before You Prompt
Before you write a single prompt, define security requirements in writing. Tell the AI: no public database access, sanitize all user input, no hardcoded credentials, use environment variables for secrets, apply rate limiting to all authentication endpoints. Make the AI satisfy a spec, not just “make it work.”
Always Read the Code AI Writes for You
Treat AI-generated code like you would treat a junior developer’s pull request. Read it. Question it. Check what libraries it brought in. This is non-negotiable if you are pushing to production.
Run Automated Security Scanning
Tools like Semgrep, Snyk, or Bandit for Python can catch a large class of vulnerabilities automatically. These should be in your CI/CD pipeline regardless of whether you use AI tooling, but they are especially critical when AI is generating your code.
Apply Chain-of-Thought Security Prompting
Before the AI starts writing code, ask it to think about what could go wrong first. Instead of just saying “build me a login system,” say “build me a login system, but before you write anything, list all the security risks and tell me how you will handle each one.” This one small change in how you prompt can make a big difference in what you get back.
Ground Everything in OWASP Top 10
The OWASP Top 10 is the industry standard for the most critical web security risks. Use it as your checklist. After your AI generates code, run through the list and ask: is this code vulnerable to injection? Is authentication handled securely? Are sensitive data exposures possible? Do not assume the AI checked; verify it yourself.
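The first checklist question, injection, is the easiest one to verify by hand. Here is a hedged sketch using the standard library’s sqlite3: string-built SQL is injectable, while a parameterized query binds the same input as data.

```python
# Checklist item "injection": string-built SQL vs. parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

user_input = "x' OR '1'='1"

# Vulnerable: the input is interpolated straight into the query string,
# so the OR clause becomes part of the SQL and matches every row.
rows_unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()
print(len(rows_unsafe))   # 2: every user leaked

# Safe: the driver binds the value as data, never as SQL.
rows_safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows_safe))     # 0: no user is literally named that
```

If AI-generated code ever builds a query with f-strings or concatenation, that is a finding, regardless of whether it “works.”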
Final Thoughts
Vibe coding is a powerful tool in the right hands. For prototyping, for exploring ideas, for moving fast on non-critical systems, it is genuinely useful. But the moment real users, real data, and real infrastructure are involved, the vibe has to give way to rigor.
The data is clear. Nearly half of all AI-generated code contains security vulnerabilities. The tools that make vibe coding possible, Cursor, Windsurf, Bolt, and others, have their own security track record to consider, with real CVEs documented in 2025. And the developers who ship without reviewing what was generated are building on a foundation that someone, somewhere, will eventually find a way to crack.
Speed is great. Shipping is great. But shipping something that hands your users’ data to an attacker is not a product milestone; it is a liability. Treat vibe coding like you would treat any powerful tool: with respect, with guardrails, and with eyes open.
Because the vibes? They do not care about your security posture. You have to do that part yourself.
References and Further Reading:
Veracode 2025 GenAI Code Security Report
Kaspersky: Security Risks of Vibe Coding and LLM Assistants (2025)