Security Guide
AI App Security Checklist: 15 Things to Check Before Going Live
AI code generators build fast, but they don't build secure. Whether your team is using Cursor, Lovable, Bolt, or Replit, these 15 checks will catch the most common and most dangerous vulnerabilities before they reach production.
1. Scan for Exposed API Keys in Client-Side Code
AI code generators frequently embed API keys directly in frontend JavaScript. Check for OpenAI, Stripe, Supabase, Firebase, and AWS keys in your page source and bundled scripts. A single exposed key can rack up thousands of dollars in unauthorized API usage within hours.
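A scan for the publicly documented key prefixes can be sketched in a few lines. The patterns below cover some common providers; they are illustrative and should be tuned to the services your app actually uses.

```javascript
// Minimal sketch: scan bundled client-side JS for strings that look like
// secret keys. Patterns follow publicly documented key formats; extend
// the list for the providers in your stack.
const KEY_PATTERNS = [
  { name: 'Stripe secret key', regex: /sk_(live|test)_[0-9a-zA-Z]{16,}/ },
  { name: 'OpenAI API key', regex: /sk-[0-9a-zA-Z_-]{20,}/ },
  { name: 'AWS access key ID', regex: /AKIA[0-9A-Z]{16}/ },
  { name: 'Google/Firebase API key', regex: /AIza[0-9A-Za-z_-]{35}/ },
];

function findExposedKeys(source) {
  const findings = [];
  for (const { name, regex } of KEY_PATTERNS) {
    const match = source.match(regex);
    // Only report a truncated sample so the scanner itself never logs a full secret.
    if (match) findings.push({ name, sample: match[0].slice(0, 12) + '…' });
  }
  return findings;
}
```

Run this over every file in your production build output, not just your source tree — bundlers inline environment values at build time.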
2. Verify Security Headers Are Present
Ensure Content-Security-Policy (CSP), Strict-Transport-Security (HSTS), X-Frame-Options, X-Content-Type-Options, and Permissions-Policy headers are configured. AI tools rarely add these automatically, leaving apps vulnerable to XSS, clickjacking, and protocol downgrade attacks.
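A quick check for the headers above can be written as a small helper. This sketch compares names case-insensitively, as HTTP requires, and reports what's missing:

```javascript
// Minimal sketch: given a response's headers (name → value), report
// which of the recommended security headers are absent.
const REQUIRED_HEADERS = [
  'content-security-policy',
  'strict-transport-security',
  'x-frame-options',
  'x-content-type-options',
  'permissions-policy',
];

function missingSecurityHeaders(headers) {
  const present = new Set(Object.keys(headers).map((h) => h.toLowerCase()));
  return REQUIRED_HEADERS.filter((h) => !present.has(h));
}
```

Pair it with fetch against your live site, e.g. `missingSecurityHeaders(Object.fromEntries((await fetch(url)).headers))`.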
3. Check Authentication Isn't Client-Side Only
A common pattern in AI-generated apps: auth checks happen in the browser but not on the server. Look for localStorage-based auth gates, client-side role checks, and API routes without middleware protection. If removing a JavaScript check grants access, authentication is broken.
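The fix is a server-side gate in front of every protected route. A framework-agnostic sketch — verifyToken stands in for your real session or JWT verification — looks like this:

```javascript
// Minimal sketch of a server-side auth gate. The decision happens on
// the server, so deleting client-side JavaScript cannot bypass it.
// verifyToken is a placeholder for your real token/session verification.
function requireAuth(req, verifyToken) {
  const header = req.headers['authorization'] || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;
  const user = token ? verifyToken(token) : null;
  if (!user) {
    return { status: 401, body: { error: 'unauthenticated' } };
  }
  return { status: 200, user };
}
```

In Express or Next.js this logic lives in middleware that runs before every API handler — not in a React component.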
4. Audit Database Access Rules
If your app uses Supabase or Firebase, verify Row Level Security (RLS) policies are enabled and correctly configured. AI generators often disable RLS for simplicity, exposing entire databases to any authenticated or unauthenticated user.
5. Remove Source Maps from Production
Source maps expose your entire application source code to anyone who opens DevTools. Check for .map files being served in production. Most AI-generated build configurations leave source maps enabled by default.
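Besides looking for served .map files, you can check bundles for the sourceMappingURL comment that points browsers at them. A minimal detector:

```javascript
// Minimal sketch: detect source-map references in production bundles.
// A trailing `//# sourceMappingURL=` comment (or a served .map file)
// means your original source is one DevTools click away.
function hasSourceMapReference(bundleText) {
  return /\/\/[#@]\s*sourceMappingURL=/.test(bundleText);
}
```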
6. Validate Input on the Server
AI tools love client-side validation: pretty form errors, inline checks, character limits. But without server-side validation, attackers bypass everything with a single curl command. Every API endpoint must validate and sanitize inputs independently.
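Server-side validation means re-checking everything the client claims to have checked. A sketch for a signup payload — the field names and limits are illustrative:

```javascript
// Minimal sketch of independent server-side validation. Runs inside the
// API handler, so a curl request gets the same scrutiny as the form.
function validateSignup(body) {
  const errors = [];
  if (typeof body.email !== 'string' || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(body.email)) {
    errors.push('email: invalid');
  }
  if (typeof body.password !== 'string' || body.password.length < 12) {
    errors.push('password: must be at least 12 characters');
  }
  if (typeof body.name !== 'string' || body.name.length === 0 || body.name.length > 100) {
    errors.push('name: required, max 100 characters');
  }
  return errors; // non-empty → reject with 400 before touching the database
}
```

Schema libraries like Zod or Joi do this declaratively; the essential property is that the check runs server-side on every request.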
7. Review CORS Configuration
Check that your API doesn't return Access-Control-Allow-Origin: * — and especially that it doesn't reflect arbitrary request origins while also setting Access-Control-Allow-Credentials: true, a combination that lets any website make authenticated requests to your API on behalf of your users. AI code generators often set permissive CORS to avoid development friction.
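The safer pattern is an explicit origin allowlist. A sketch (the origins listed are illustrative placeholders):

```javascript
// Minimal sketch: explicit CORS allowlist. Returns the headers to set
// on the response, or null for disallowed origins — never echo an
// arbitrary Origin header back with credentials enabled.
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',   // illustrative — list your real origins
  'https://admin.example.com',
]);

function corsHeaders(requestOrigin) {
  if (!ALLOWED_ORIGINS.has(requestOrigin)) return null;
  return {
    'Access-Control-Allow-Origin': requestOrigin,
    'Access-Control-Allow-Credentials': 'true',
    'Vary': 'Origin', // keep caches from serving one origin's response to another
  };
}
```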
8. Check for Hardcoded Secrets in Environment Variables
Verify that .env files aren't committed to version control and that NEXT_PUBLIC_ or VITE_ prefixed variables don't contain sensitive values. AI assistants frequently suggest putting secrets in public environment variables without understanding the exposure.
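Variables with those public prefixes are compiled into the browser bundle, so a name-based heuristic catches many mistakes. A sketch (the hint keywords are a starting point, not an exhaustive list):

```javascript
// Minimal sketch: flag client-exposed environment variables whose names
// suggest they hold secrets. NEXT_PUBLIC_ and VITE_ values ship to the
// browser — anything sensitive under those prefixes is public.
const PUBLIC_PREFIXES = ['NEXT_PUBLIC_', 'VITE_'];
const SECRET_HINTS = /SECRET|PRIVATE|PASSWORD|SERVICE_ROLE|TOKEN/i;

function suspiciousPublicVars(env) {
  return Object.keys(env).filter(
    (name) =>
      PUBLIC_PREFIXES.some((p) => name.startsWith(p)) && SECRET_HINTS.test(name)
  );
}
```

Run it against process.env in CI, or against a parsed .env file, and fail the build on any hit.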
9. Test Rate Limiting on Critical Endpoints
Login, signup, password reset, and payment endpoints must have rate limiting. Without it, attackers brute-force credentials, create spam accounts, or abuse expensive API calls. Most AI-generated apps have zero rate limiting.
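Even a basic limiter removes the worst abuse. Here's a fixed-window sketch, keyed by client identifier — fine for a single process, but use a shared store such as Redis once you run multiple instances:

```javascript
// Minimal sketch of a fixed-window in-memory rate limiter.
// Key by IP for login attempts, by user ID for expensive API calls.
function createRateLimiter({ limit, windowMs }) {
  const hits = new Map(); // key → { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}
```

Usage: create one limiter per endpoint, call allow(clientKey) at the top of the handler, and return 429 when it comes back false.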
10. Verify HTTPS Is Enforced Everywhere
Ensure all traffic is redirected to HTTPS and that HSTS headers are set. Check that no mixed content warnings appear. AI tools sometimes hardcode HTTP URLs for API calls or asset loading, creating security gaps even on HTTPS-enabled hosts.
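Hardcoded http:// URLs are easy to catch with a scan of your source and build output. A sketch that excludes localhost as a common dev-only exception:

```javascript
// Minimal sketch: find hardcoded http:// URLs in source or bundle text.
// On an HTTPS site these trigger mixed-content warnings or silent
// downgrades, depending on the resource type.
function findInsecureUrls(source) {
  const urls = source.match(/http:\/\/[^\s"'`)]+/g) || [];
  return urls.filter((u) => !u.startsWith('http://localhost'));
}
```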
11. Audit Third-Party Dependencies
Run npm audit or equivalent and review the results. AI code generators install packages without evaluating security posture. Look for deprecated packages, known vulnerabilities, and unnecessary dependencies that expand your attack surface.
12. Check for Sensitive Data in Error Messages
Trigger errors intentionally and review what's returned. Stack traces, database connection strings, internal file paths, and SQL queries should never reach the client. AI-generated error handling often exposes debugging information in production.
13. Review File Upload Handling
If the app accepts file uploads, verify file type validation happens server-side, files are scanned for malware, and uploaded files can't be executed. AI implementations often trust client-side file type checks and store uploads in publicly accessible directories.
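Server-side type validation means checking magic bytes, not the client-supplied Content-Type or extension. A sketch with a few common image signatures — extend the list for your allowed types:

```javascript
// Minimal sketch: identify an upload by its magic bytes. Anything that
// doesn't match an allowed signature gets rejected, regardless of what
// the client claims the file is.
const SIGNATURES = [
  { type: 'image/png', bytes: [0x89, 0x50, 0x4e, 0x47] },
  { type: 'image/jpeg', bytes: [0xff, 0xd8, 0xff] },
  { type: 'image/gif', bytes: [0x47, 0x49, 0x46, 0x38] },
];

function detectFileType(buffer) {
  for (const { type, bytes } of SIGNATURES) {
    if (bytes.every((b, i) => buffer[i] === b)) return type;
  }
  return null; // unknown → reject the upload
}
```

This catches the classic attack of uploading a script (e.g. a PHP shell) renamed to .jpg; storing uploads outside the web root closes the execution path entirely.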
14. Test Authorization Between Users
Log in as User A, capture an API request, then replay it with User B's session. Can User B access User A's data? Broken object-level authorization (BOLA) is the #1 entry in the OWASP API Security Top 10 and almost universal in AI-generated multi-tenant apps.
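The fix is an ownership check on every object lookup, not just an authentication check on the route. A sketch with an illustrative invoices resource and a simple in-memory db stand-in:

```javascript
// Minimal sketch of object-level authorization: fetch the resource,
// then verify the authenticated user owns it (or holds an explicit
// role) before returning it. Never trust an ID from the URL alone.
function getInvoiceForUser(user, invoiceId, db) {
  const invoice = db.invoices.find((i) => i.id === invoiceId);
  if (!invoice) return { status: 404 };
  if (invoice.ownerId !== user.id && user.role !== 'admin') {
    return { status: 403 }; // authenticated, but not authorized for THIS object
  }
  return { status: 200, invoice };
}
```

With a real database, the same check is usually a WHERE owner_id = $userId clause in the query itself, so the row never leaves the database for the wrong user.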
15. Set Up Continuous Monitoring
Security isn't a one-time check. AI-generated apps change frequently; every prompt-driven update introduces new vulnerabilities. Set up continuous external monitoring to catch regressions before attackers do. Tools like Scantient automate this entire checklist on a recurring schedule.
Automate this entire checklist
Scantient runs these checks continuously on every AI-generated app in your organization. No SDK required. Start your free trial and scan your first app in under 2 minutes.
Start 14-day free trial