AI·11 June 2025·7 min read

AI Coding Security Risks Your Developer Should Know About

AI-generated code introduces specific vulnerabilities junior developers miss because the code looks authoritative. The review checklist we use on every AI-assisted build.

By Jay

AI-generated code has a specific problem that makes it more dangerous than code written by a junior developer who is clearly uncertain. The code looks authoritative. It is structured correctly. It follows conventions. It passes basic review because it reads like something a senior engineer would write. And sometimes it contains security vulnerabilities that a senior engineer would never introduce.

If your development workflow includes AI-assisted coding, and it probably does, these are the vulnerabilities you need to check for on every build.

SQL Injection in AI-Generated Queries

SQL injection is one of the oldest web vulnerabilities in existence. Security awareness has been high for 20 years. And AI models still generate code that is vulnerable to it.

The reason is predictable: AI generates the most statistically likely next token, and query construction patterns that concatenate user input directly into SQL strings appear throughout training data. The model produces working code that demonstrates the concept. It does not always produce safe code.

A typical example looks like this:

query = f"SELECT * FROM users WHERE email = '{user_email}'"
cursor.execute(query)

This is textbook SQL injection. An attacker who controls user_email controls the query. AI generates patterns like this when producing quick examples, when filling in boilerplate, or when the prompt does not explicitly request parameterised queries.

The fix is parameterised queries or an ORM that handles sanitisation:

cursor.execute("SELECT * FROM users WHERE email = %s", (user_email,))

Every database query that incorporates user input needs to use parameterised queries. No exceptions. Check every query in AI-assisted codebases, because the model cannot be relied upon to always do this correctly.

Unvalidated User Inputs

AI-generated API endpoints and form handlers frequently trust user input more than they should. The code accepts a value, uses it, and moves on. Input validation is the step that gets skipped.

The risk manifests in several ways. Path traversal: a file upload handler that uses the user-supplied filename without sanitising it, allowing an attacker to write files to ../../etc/passwd. Type coercion: a handler that expects an integer ID but does not enforce the type, allowing string injection or unexpected behaviour downstream. Size limits: accepting an input field without length restriction, enabling denial-of-service through oversized payloads.

When reviewing AI-generated form handlers and API endpoints, check every input against three questions. Is the type enforced? Is the length restricted? Is the value validated against an allowlist where applicable? If any answer is no, the code needs updating before it goes anywhere near production.
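The three questions translate into a few lines of code at the top of a handler. A minimal sketch, framework-agnostic; the function and field names (validate_profile_update, ALLOWED_ROLES, and so on) are illustrative, not from any particular library:

```python
ALLOWED_ROLES = {"admin", "editor", "viewer"}  # allowlist for a role field
MAX_NAME_LENGTH = 100

def validate_profile_update(payload: dict) -> dict:
    """Return a cleaned dict or raise ValueError. All three checks applied."""
    # Type enforcement: user_id must be an integer, not a string that looks like one.
    user_id = payload.get("user_id")
    if not isinstance(user_id, int):
        raise ValueError("user_id must be an integer")

    # Length restriction: reject oversized fields before they reach storage.
    name = payload.get("name", "")
    if not isinstance(name, str) or not (1 <= len(name) <= MAX_NAME_LENGTH):
        raise ValueError("name must be 1-100 characters")

    # Allowlist validation: only known role values pass.
    role = payload.get("role")
    if role not in ALLOWED_ROLES:
        raise ValueError("role must be one of: " + ", ".join(sorted(ALLOWED_ROLES)))

    return {"user_id": user_id, "name": name, "role": role}
```

Rejecting bad input with an exception at the boundary keeps the validation in one place instead of scattered through the handler.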

Hardcoded Credentials

This one is direct: AI-generated code examples regularly include hardcoded credentials, API keys, and connection strings. The model is producing a working example. In the context of a tutorial or a demo, hardcoding password = "admin123" is illustrative. In production code, it is a breach waiting to happen.

The problem is that developers copy AI-generated code into production without scrubbing it. Or they use an AI-generated config structure that includes credential fields and fill in the real values directly in the file, which then gets committed to version control.

The rule is straightforward. No credentials in code. No credentials in committed files. Everything sensitive goes into environment variables or a secrets manager like AWS Secrets Manager or HashiCorp Vault. Run a secrets scanner like git-secrets or truffleHog on every repository before deployment.
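In practice that rule looks like a config loader that reads only from the environment and fails fast when a secret is missing, rather than falling back to a hardcoded default. A minimal sketch; the variable names (DB_PASSWORD, API_KEY) are illustrative:

```python
import os

def load_config() -> dict:
    """Read secrets from the environment; fail fast if any are missing."""
    # Never default to a hardcoded value here. A loud failure at startup
    # beats a silent fallback credential sitting in version control.
    missing = [var for var in ("DB_PASSWORD", "API_KEY") if var not in os.environ]
    if missing:
        raise RuntimeError("missing required environment variables: " + ", ".join(missing))
    return {
        "db_password": os.environ["DB_PASSWORD"],
        "api_key": os.environ["API_KEY"],
    }
```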

Check AI-generated database connection strings, API client initialisations, and configuration files specifically. These are the places where hardcoded credentials most commonly appear.

Insecure Dependencies

AI models recommend libraries based on their training data, which has a cutoff date. A library that was the standard recommendation 18 months ago may have known vulnerabilities today. A model trained before a major CVE was published will still recommend the vulnerable version.

This is not a hypothetical. Common patterns include recommendations for outdated JWT libraries with known signature bypass vulnerabilities, XML parsing libraries vulnerable to XML external entity (XXE) attacks, and serialisation libraries vulnerable to deserialisation exploits.

The practice is to run dependency auditing as a standard step. npm audit for Node.js projects, pip-audit for Python, bundle exec bundler-audit for Ruby. Do not skip this because the code came from an AI. Run it because the code came from an AI, and the model cannot know about vulnerabilities disclosed after its training cutoff.

Pin your dependencies to specific versions rather than accepting the latest major release automatically. Review every AI-recommended library against the current CVE database before adding it.

The Code Review Checklist We Use on Every AI-Assisted Build

This is the actual list we work through on builds that include significant AI-generated code.

Authentication and authorisation: Are all protected endpoints actually protected? AI often generates the endpoint handler correctly but omits the middleware that enforces authentication. Check every route.
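One way to make that check mechanical is to enforce authentication in a decorator rather than inside each handler, so a route without the decorator stands out in review. A framework-agnostic sketch; the request shape and names (require_auth, AuthError) are illustrative assumptions:

```python
import functools

class AuthError(Exception):
    pass

def require_auth(handler):
    """Reject the request before the handler runs if no authenticated user."""
    @functools.wraps(handler)
    def wrapper(request, *args, **kwargs):
        if request.get("user") is None:  # no authenticated principal attached
            raise AuthError("authentication required")
        return handler(request, *args, **kwargs)
    return wrapper

@require_auth
def delete_account(request):
    # Only reachable with an authenticated user on the request.
    return {"deleted": request["user"]}
```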

SQL and database queries: Is every query that incorporates user input parameterised? No string concatenation into queries, no f-string formatting into queries.

Input validation: Does every external input have type enforcement, length limits, and format validation? This includes URL parameters, form fields, headers, and file uploads.

Secrets and credentials: Is there anything that looks like a key, password, token, or connection string hardcoded in the code? Is .env listed in .gitignore? Run a secrets scanner.

Dependency audit: Are all AI-recommended libraries current and free of known CVEs? Run the appropriate audit tool for the language.

Error handling: Are error messages exposing stack traces or internal system information to the end user? AI-generated error handlers sometimes pass raw exception messages directly to the response object.
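The fix is to split the two audiences: the full exception goes to the server log, and the client gets a generic message. A minimal sketch; the handler shape (returning a status code and a body dict) is an illustrative assumption:

```python
import logging

logger = logging.getLogger("app")

def safe_handler(fn):
    """Wrap a handler so internal errors never leak to the response body."""
    def wrapper(*args, **kwargs):
        try:
            return 200, fn(*args, **kwargs)
        except Exception:
            # Full traceback stays in server-side logs only.
            logger.exception("unhandled error in %s", fn.__name__)
            # The client sees a generic message, never the exception text.
            return 500, {"error": "internal server error"}
    return wrapper

@safe_handler
def get_report(report_id):
    # Simulates the kind of exception message you never want a user to see.
    raise RuntimeError("db connect failed: postgres://admin:secret@db:5432")
```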

File handling: If the build includes file uploads, does the handler restrict file types, limit file size, sanitise filenames, and store files outside the web root?
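Three of those four checks (type allowlist, size limit, filename sanitisation) can be sketched in one validation function; storing outside the web root is a deployment decision. The extension allowlist and size cap below are illustrative values, not recommendations:

```python
import os
import re

ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}  # illustrative allowlist
MAX_UPLOAD_BYTES = 5 * 1024 * 1024             # illustrative 5 MB cap

def sanitise_upload(filename: str, size: int) -> str:
    """Validate an upload and return a safe filename, or raise ValueError."""
    if size > MAX_UPLOAD_BYTES:
        raise ValueError("file too large")
    # Strip any directory components the client supplied (defeats ../../ traversal).
    base = os.path.basename(filename.replace("\\", "/"))
    # Replace anything outside a conservative character set.
    base = re.sub(r"[^A-Za-z0-9._-]", "_", base)
    ext = os.path.splitext(base)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError("file type not allowed: " + (ext or "(none)"))
    return base
```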

CORS configuration: AI-generated API configurations sometimes set CORS to * for convenience during development and the wildcard stays in production. Check every API server configuration.
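The wildcard fix is to echo the request origin only when it appears on an explicit allowlist, and to send no CORS header otherwise. A framework-agnostic sketch; the origin list is an illustrative assumption to be set per environment:

```python
ALLOWED_ORIGINS = {"https://app.example.com"}  # illustrative; configure per environment

def cors_headers(request_origin: str) -> dict:
    """Return CORS headers for an allowed origin; never return a wildcard."""
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    # No header at all: the browser blocks the cross-origin read.
    return {}
```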

The Underlying Problem

AI-generated code is not inherently insecure. The problem is that developers treat AI output with more trust than it has earned. A code review that would catch a junior developer's SQL injection might skip over the same issue in AI-generated code because the surrounding code looks professional.

The review discipline has to be the same regardless of who wrote the code. AI-assisted development is faster. That speed advantage disappears the moment you ship a vulnerability that requires an incident response.

If you are building AI-assisted tools and want a security-conscious development partner, see what we build or reach out directly.

AI coding · security · SQL injection · code review