The Supply Chain Attack Nobody Talks About: AI-Hallucinated Package Names
I've been tracking a growing class of supply chain vulnerability that traditional security tools completely miss: AI-hallucinated package names.
The Problem
When AI coding assistants generate code, they sometimes reference npm packages that don't actually exist:
These packages were in the AI's training data at some point — maybe they were deleted, maybe they were never public, maybe the AI just made up a plausible-sounding name.
Why This Is Dangerous
These phantom package names are prime targets for namespace squatting. If an attacker registers email-validator-pro on npm tomorrow, every project that hallucinated this import suddenly pulls in attacker-controlled code.
It's essentially a dependency confusion attack — but instead of the attacker choosing the package name, the AI chose it for them.
Real Examples
We built a database of commonly hallucinated patterns. Here are some of the most frequent:
express-*helper* (AI loves adding "helper" to express packages)
react-dom-utils (doesn't exist, but sounds plausible)
lodash-*extra* (lodash variants that don't exist)
mongoose-*plugin* (phantom plugin names)
axios-*wrapper* (the AI really wants to wrap axios)
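The patterns above read naturally as globs, where * matches any run of characters. A minimal sketch of matching dependency names against that list (the pattern list comes from this post, not from the real database, and the glob-to-regex translation is my own simplification):

```javascript
// Phantom-name patterns taken from the list above; '*' is a wildcard.
const PHANTOM_PATTERNS = [
  'express-*helper*',
  'react-dom-utils',
  'lodash-*extra*',
  'mongoose-*plugin*',
  'axios-*wrapper*',
];

// Convert a simple glob to an anchored RegExp: escape regex
// metacharacters, then turn each '*' into '.*'.
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
}

// True if a dependency name matches any known phantom pattern.
function looksPhantom(name) {
  return PHANTOM_PATTERNS.some((p) => globToRegExp(p).test(name));
}

console.log(looksPhantom('express-session-helper')); // matches express-*helper*
console.log(looksPhantom('express'));                // no match
```

A match is only a heuristic flag, of course; the real test is whether the name exists on the registry at all.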
What Tools Miss

| Tool | Detects Hallucinated Packages? |
| --- | --- |
| ESLint | ❌ Syntax only |
| npm audit | ❌ Needs package to exist first |
| Snyk | ❌ Dependency tree analysis |
| SonarQube | ❌ Quality rules, not registry-aware |
How We Detect It
Open Code Review's L1 scanner hits the actual npm/PyPI registry API to verify every import exists. L2 adds embedding-based analysis to detect deprecated APIs from training data. Both run locally with Ollama — zero API cost.
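In the spirit of that registry check (this is an illustrative sketch, not Open Code Review's actual code), the core logic is just an HTTP status test: the npm registry returns 404 for names that have never been published. The status fetcher is injected so the logic can be exercised without network access:

```javascript
const REGISTRY = 'https://registry.npmjs.org';

// Returns true if the registry knows the package, false if it has never
// been published. `getStatus(url)` is injected (e.g. a fetch-based getter)
// so the decision logic is testable offline.
async function packageExists(name, getStatus) {
  const url = `${REGISTRY}/${encodeURIComponent(name)}`;
  const status = await getStatus(url);
  if (status === 200) return true;
  if (status === 404) return false;
  throw new Error(`Unexpected registry response ${status} for ${name}`);
}

// Live usage (needs network):
//   const live = (url) => fetch(url).then((r) => r.status);
//   packageExists('email-validator-pro', live).then(console.log);
```

Note that `encodeURIComponent` also handles scoped names like `@scope/pkg`, which the registry expects with an encoded slash.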
What's Your Experience?
Have you ever found hallucinated package names in AI-generated code?
What tools do you use to verify dependencies?
Should CI/CD pipelines include registry-existence checks?
I'm building a public database of commonly hallucinated patterns. If you've seen others, please share!