AIDA

Make Vibe Coding Measurable.

Every vibe coding session generates massive insights — deviations, patterns, quality signals.
But you close the terminal, and all of it vanishes. Next session, you start blind again.
AIDA captures structured data at every development checkpoint, visualizes it in a live dashboard, and distills deviation patterns into rules that make your AI write better code — every single run.

One line to integrate. Zero workflow changes.

{ "mcpServers": { "aida": { "command": "npx", "args": ["-y", "ai-dev-analytics", "mcp"] } } }

ai-dev-analytics MCP server

One-Line Setup · Data-Driven Loop · Dashboard · SOP Workflow · Data for Reports · 中文文档 (Chinese docs)


The Insight

Vibe coding is powerful. But it's a black box.

You tell Claude to build a feature. It writes code. You ship it. But you have zero visibility into what actually happened:

  • How many tasks did AI complete? How long did each take?
  • Where did AI deviate from your project conventions? Why?
  • Which deviations keep recurring? What rules would prevent them?
  • What's the bug rate? Which phases produce the most bugs?

Without data, you can't improve. You're just vibing — over and over, with the same blind spots.

AIDA makes the invisible visible. It collects structured data from every vibe coding session, renders it in a real-time dashboard, and turns deviation patterns into project rules. Your AI doesn't just code — it learns your project.


🔄 The Data-Driven Loop

This is the core of AIDA — data in, rules out, better code next time.

Vibe Coding Session
        ↓
   AIDA silently collects structured data
   (tasks, deviations, bugs, reviews, files, timeline)
        ↓
   Dashboard visualizes patterns
   "9 deviations → 56% hallucination, 44% rule-missing"
        ↓
   Deviation patterns identified → AI suggests rules → user confirms → sedimented
   .aidevos/rules/ ← your AI's growing knowledge base
        ↓
   AI reads rules next session → same mistakes eliminated
        ↓
   Repeat — each cycle, AI output gets closer to your expectations

Real data from a production project:

| Run | Deviations | What happened | Rules sedimented |
|---|---|---|---|
| #1 | 23 deviations across 47 tasks | AI misused components, wrong layouts, incorrect API patterns | 6 project-specific rules |
| #2 | 0 repeat deviations | AI read the rules. Same patterns — zero errors. | — |

Step 1: See why AI deviates — Root cause breakdown reveals whether issues stem from hallucination, missing rules, or insufficient context.

Deviation Root Cause

Step 2: See where AI deviates — Category distribution pinpoints the exact areas: UI spacing, layout, component usage, API patterns.

Deviation Category Distribution

Step 3: Watch rules compound — As rules accumulate (green line), the same deviation patterns stop recurring.

Deviation & Rule Trend

The .aidevos/rules/ directory is your project-specific AI knowledge base. It grows with every run. The more you use AI, the smarter it gets at your project.


📊 The Dashboard

Your entire vibe coding process — structured, visualized, actionable.

Dashboard

Live Demo → Real anonymized project data. No install needed.

AIDA captures every dimension of AI-assisted development and turns it into interactive charts:

| What you see | Why it matters |
|---|---|
| Deviation root cause breakdown | Know why AI fails — rule-missing? hallucination? context gap? |
| Deviation category distribution | Know where AI fails — layout? components? API? |
| Deviation & rule trend over time | Watch deviations drop as rules accumulate |
| Bug severity distribution | Track quality — which phases produce critical bugs? |
| Self-review pass rate trend | Is AI code getting better or worse over time? |
| Task completion by phase | See progress across the full development lifecycle |
| File modification hotspots | Which files keep getting changed? Where are the pain points? |
| Rules table with source mapping | Every rule links back to the deviation that created it |
| Full development timeline | Every task, bug, review, deviation — chronologically |
| Project overview (team view) | Cross-branch stats, developer comparison, requirement status |

Every KPI card is clickable — drill down into task details, deviation root causes, review reports, and file changes.

Run npx ai-dev-analytics dashboard to see your own project's data in seconds.

🔒 100% Local. Zero External Requests.

AIDA writes JSON files to .aidevos/ in your project directory. The codebase contains zero HTTP calls to external services — no telemetry, no cloud sync, no analytics, no tracking. Zero runtime dependencies. Your code and data never leave your machine. Period.


⚡ 30-Second Setup

One line in .mcp.json — that's the entire integration.

{ "mcpServers": { "aida": { "command": "npx", "args": ["-y", "ai-dev-analytics", "mcp"] } } }

No SDK. No wrapper. No code changes. Add this to your project root .mcp.json, and AIDA starts collecting data the next time your AI writes code. It works silently — zero workflow changes.

Tip: If npx is slow, install globally first: npm install -g ai-dev-analytics, then change the command to "aida". Global install also gives you the aida CLI command (e.g. aida dashboard, aida init).

Cursor / VS Code Copilot / Windsurf / Lingma

Cursor .cursor/mcp.json:

{
  "mcpServers": {
    "aida": {
      "command": "npx",
      "args": ["-y", "ai-dev-analytics", "mcp"]
    }
  }
}

VS Code Copilot .vscode/mcp.json:

{
  "servers": {
    "aida": {
      "command": "npx",
      "args": ["-y", "ai-dev-analytics", "mcp"]
    }
  }
}

Windsurf ~/.codeium/windsurf/mcp_config.json:

{
  "mcpServers": {
    "aida": {
      "command": "npx",
      "args": ["-y", "ai-dev-analytics", "mcp"]
    }
  }
}

Lingma (通义灵码) .lingma/mcp.json:

{
  "mcpServers": {
    "aida": {
      "command": "npx",
      "args": ["-y", "ai-dev-analytics", "mcp"]
    }
  }
}

See your data

npx ai-dev-analytics dashboard

Open http://localhost:2375 — real-time updates via SSE, Chinese/English toggle built in.


🤔 Why Data Changes Everything

Without data, every vibe coding session starts from zero. With data, each one builds on the last.

| Vibing blind | Vibing with data |
|---|---|
| "AI keeps getting layouts wrong" | Dashboard shows: 9 layout deviations, root cause 56% hallucination + 44% rule-missing. 4 rules sedimented → zero repeats next run |
| "I corrected this three times already" | AIDA recorded the deviation pattern. AI detected rule-missing, suggested a rule, you confirmed — AI reads it every session, you never correct it again |
| "That feature had a lot of bugs" | 5 bugs, 3 critical — all concentrated in one phase. Now you know where to add guardrails |
| "What did I even do this quarter?" | 47 tasks, 23 deviations fixed, 6 rules sedimented, 4064 lines. Export → H1 performance review done |

Vibe coding without data is just vibing. Add data, and it becomes a compounding system.


🎯 Use Cases

Vibe Coder — "I want my AI to actually learn my project"

You've been using Claude Code for a week. AIDA's dashboard shows: 23 deviations, concentrated in component-usage and layout categories, root cause mostly rule-missing. AI detects the patterns and suggests rules — you confirm, and 6 project rules are sedimented. Next week, those categories show zero deviations. Your AI now knows your project conventions.

Tech Lead — "I need to see what AI is actually doing across the team"

A team of 4 uses Claude Code daily. Open the project overview: Developer A has 2 deviations + 5 sedimented rules (AI is learning). Developer B has 15 deviations + 0 rules (AI is not learning). The data tells you exactly where to intervene.

Senior Engineer — "Show me the data for my performance review"

End of H1. Open the dashboard: 150 tasks across 3 features, 89% first-pass review rate, 12 rules sedimented that now benefit the entire team. All structured data — export it, attach it to your review doc. Data beats "I think I did a lot."

Team adopting vibe coding — "How do we go from chaotic to systematic?"

Start collecting data. After 2 weeks, the dashboard shows clear patterns: which types of tasks AI handles well, where it consistently deviates, what rules are needed. You go from "AI sometimes works" to "AI works predictably because we've taught it our conventions."


πŸ“ Data Sedimentation & Performance Reports

AIDA doesn't just visualize β€” it sediments. Every run accumulates structured data that compounds over time.

Week 1:  47 tasks, 23 deviations, 5 bugs, 6 rules, 4064 lines
Week 4:  180+ tasks, deviation rate dropping, 15 rules, full quality history
Quarter: Complete development record β€” exportable, analyzable, presentable

What you can do with sedimented data:

| Scenario | What you get |
|---|---|
| H1 / H2 Performance Review | Tasks completed, quality metrics (pass rate, bug rate), code volume, rules contributed — all with numbers, not feelings |
| Annual Summary | Cross-project trends, deviation patterns over time, rule growth curve, total output |
| Sprint Retrospective | What went wrong, what rules were added, which phases improved, measurable quality delta |
| Team Leader Report | Per-developer stats, deviation hotspots, which modules need better rules, team-wide AI maturity |
| Project Handover | Full development history — someone new can see exactly what happened, what rules exist, and why |

All data is structured JSON in .aidevos/. No vendor lock-in. Export it, query it, pipe it into any reporting tool. Run aida report to generate a summary at any time.
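Because everything lives in plain JSON, a few lines of scripting can turn a run into report-ready numbers. The sketch below is illustrative only: the run-file path follows the Data Model section, but the field names (`deviations`, `rootCause`, `category`) are assumptions, not AIDA's actual schema.

```python
# Sketch: summarize one run's deviations straight from AIDA's local JSON.
# Field names below ("deviations", "rootCause", "category") are hypothetical.
import json
from collections import Counter
from pathlib import Path

def deviation_summary(run_file: Path) -> dict:
    """Count deviations per root cause and per category for one run."""
    run = json.loads(run_file.read_text(encoding="utf-8"))
    deviations = run.get("deviations", [])
    return {
        "total": len(deviations),
        "by_root_cause": Counter(d.get("rootCause") for d in deviations),
        "by_category": Counter(d.get("category") for d in deviations),
    }

# Example (path shape from the Data Model section):
# summary = deviation_summary(Path(".aidevos/runs/main/alice/run.json"))
```

For the common cases, `aida report` already does this; a script like the above is only needed when piping the data into your own tooling.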


βš™οΈ How It Works

flowchart LR
    A["Your AI Tool\nClaude Code / Cursor"] -->|Vibe Coding| B{"AIDA MCP Server\n10 Tools"}
    B -->|Silent Data Collection| C[".aidevos/run.json"]
    C -->|Visualization| D["Dashboard\nlocalhost:2375"]
    C -->|Pattern Analysis| E[".aidevos/rules/"]
    E -->|AI Reads Rules| A

Your AI tool calls AIDA's MCP tools automatically as it works. You don't invoke them manually. No prompts to write, no scripts to run — just vibe code as usual.

📋 10 MCP Tools (auto-collected)

| Tool | What it captures |
|---|---|
| aida_task_start | Task begins — ID, title, stage, PRD phase |
| aida_task_done | Task completed — duration auto-calculated |
| aida_log_bug | Bug found — severity, title, related files |
| aida_bug_fix | Bug fixed — links fix to original bug |
| aida_log_review | Code self-review — pass/fail, issue list |
| aida_log_deviation | AI output ≠ expectation — root cause, category |
| aida_log_files | File changes — auto-scans git diff, zero args needed |
| aida_highlight | Notable achievement worth recording |
| aida_status | Current run status snapshot |
| aida_log_rule | Sediment project rule — user confirms, then AI calls this tool |
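To make "auto-collected" concrete, here is a sketch of what one tool invocation looks like on the wire. The `tools/call` envelope is the standard MCP JSON-RPC shape; the argument fields inside it are invented for illustration and are not AIDA's actual tool schema.

```python
# Sketch of the JSON-RPC 2.0 message an MCP client sends (over stdio) to
# invoke an AIDA tool. The "arguments" keys here are hypothetical.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # standard MCP method for invoking a tool
    "params": {
        "name": "aida_log_deviation",
        "arguments": {  # illustrative fields only, not AIDA's schema
            "title": "Used <Button> instead of project <AppButton>",
            "rootCause": "rule-missing",
            "category": "component-usage",
        },
    },
}

# Each MCP stdio message is one line of serialized JSON.
wire = json.dumps(request)
```

Your AI tool emits messages like this on its own; you never write them by hand.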

Data Model

All data is local JSON. No database, no cloud.

| Level | File | What it contains |
|---|---|---|
| Run | .aidevos/runs/{branch}/{dev}/run.json | Every task, bug, deviation, review, file change |
| Branch | .aidevos/runs/{branch}/requirement.json | Aggregated stats per requirement |
| Project | .aidevos/index.json | Cross-branch overview for team leads |
| Rules | .aidevos/rules/ | Sedimented project rules — your AI's growing knowledge base |

All structured JSON — ready for export, analysis, or feeding into reports.


🚀 Standardized AI Development Workflow

Beyond data collection, AIDA provides a complete SOP for AI-assisted development — a standardized workflow that turns chaotic vibe coding into a repeatable, measurable process.

aida init    # Select "Full workflow"
aida start   # Create a development run

This enables 14 AI Skills orchestrated as a full development pipeline:

PRD Ingestion → Requirement Analysis → Task Decomposition
        ↓
Code Generation → Self-Review → Bug Fix → Deviation Fix
        ↓
Data Collection → Pattern Analysis → Rule Sedimentation
        ↓
Next Run: AI reads rules → better output → fewer deviations

| Phase | What AI does | What AIDA records |
|---|---|---|
| Requirement | Parses PRD, extracts modules and phases | PRD phases, scope |
| Task Split | Breaks requirements into atomic tasks | Task list, stages, estimates |
| Code Gen | Generates code per task | Files changed, lines added, duration |
| Self-Review | Reviews its own output against conventions | Pass/fail, issue list, quality score |
| Bug Fix | Fixes bugs found during review | Bug severity, fix details, related files |
| Deviation Fix | Corrects output that doesn't match expectations | Root cause, category, new rule (when root cause is rule-missing) |

Every step produces structured data. Every deviation can become a rule. The SOP ensures nothing falls through the cracks — and the data makes the whole process visible and improvable.


🖥 CLI Reference

aida init              # Interactive project setup
aida start             # Create a new development run
aida status            # Show current run status
aida dashboard         # Launch dashboard (default port 2375)
aida dashboard --port 3000 # Custom port
aida mcp               # Start MCP server (for AI tool config)
aida log <subcommand>  # Write structured data (task, bug, review, etc.)
aida reindex           # Rebuild project-level index
aida report            # Generate performance report
aida rules build       # Generate rule view files from registry
aida rules dedupe      # Find and remove near-duplicate rules
aida rules merge       # Merge rules from parallel branches
aida update            # Update skills to latest version
aida migrate           # Migrate old data to current schema

🔌 MCP Integration Details

AIDA uses the Model Context Protocol — the standard way for AI tools to interact with external systems. The MCP server runs over stdio with zero dependencies.

What happens when you add the config:

  1. Your AI tool discovers AIDA's 10 tools via MCP
  2. As the AI works, it naturally calls aida_task_start, aida_log_files, etc.
  3. Data flows into run.json silently
  4. Deviation patterns emerge → AI suggests rules → user confirms → sedimented
  5. AI reads rules next session → output quality improves

No prompts to write. No scripts to run. No workflow to learn.


πŸ“ Rules System β€” Team Workflow

Rules are the compounding asset of AIDA. Here's how they work in a team setting.

Architecture

.aidevos/rules.json        ← source of truth, committed to git
        ↓
aida rules build
        ↓
.aidevos/rules/*.md        ← auto-generated views, gitignored
        ↓
AI reads rules next session

aida init automatically adds .aidevos/rules/*.md to your .gitignore. Never edit the .md files manually — they are always regenerated from rules.json.
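The regeneration step is conceptually simple: read the source of truth, group active rules, and write one Markdown view per category. The sketch below is a mental model of that flow, not AIDA's implementation; the rule fields (`content`, `category`, `status`) and one-file-per-category layout are assumptions.

```python
# Illustrative model of what `aida rules build` conceptually does:
# regenerate per-category Markdown views from rules.json.
# Rule fields and file layout here are assumptions, not AIDA's schema.
import json
from collections import defaultdict
from pathlib import Path

def build_rule_views(rules_json: Path, out_dir: Path) -> list[Path]:
    """Write one .md view per category, skipping deprecated rules."""
    rules = json.loads(rules_json.read_text(encoding="utf-8"))
    by_category = defaultdict(list)
    for rule in rules:
        if rule.get("status", "active") == "active":  # deprecated rules stay out of views
            by_category[rule.get("category", "general")].append(rule)
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for category, items in sorted(by_category.items()):
        lines = [f"# {category} rules", ""]
        lines += [f"- {r['content']}" for r in items]
        path = out_dir / f"{category}.md"
        path.write_text("\n".join(lines) + "\n", encoding="utf-8")
        written.append(path)
    return written
```

Because the .md files are pure derived output, they are safe to gitignore and cheap to rebuild after every pull.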

Daily workflow

After pulling changes that include new rules:

git pull
aida rules build   # regenerate local rule views from updated rules.json

Merge conflict resolution

When two developers add rules on separate branches and merge, rules.json may get a standard git conflict. Resolve it in one command:

# After git merge produces a conflict in rules.json:
aida rules merge   # fingerprint union — no duplicates, no lost rules
aida rules build   # rebuild .md views
git add .aidevos/rules.json
git commit -m "merge: resolve rules conflict"

aida rules merge uses fingerprint deduplication: if two rules have identical content, only one is kept. If they differ, both are kept and the incoming rule is renumbered to avoid ID collisions.
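The merge semantics just described can be sketched as a fingerprint union. This is a model of the behavior, not AIDA's code: the rule shape (`id` plus `content`) and the choice of SHA-256 as the fingerprint are assumptions.

```python
# Illustrative sketch of fingerprint-union merging as described above.
# Rule shape ({"id", "content"}) and hash choice are assumptions.
import hashlib

def fingerprint(rule: dict) -> str:
    """Stable hash of the rule's content, ignoring its ID."""
    return hashlib.sha256(rule["content"].encode("utf-8")).hexdigest()

def merge_rules(ours: list[dict], theirs: list[dict]) -> list[dict]:
    """Union by fingerprint: identical content is kept once; differing
    incoming rules are kept and renumbered past the highest existing ID."""
    seen = {fingerprint(r) for r in ours}
    merged = list(ours)
    next_id = max((r["id"] for r in ours), default=0) + 1
    for rule in theirs:
        if fingerprint(rule) in seen:
            continue  # duplicate content: keep only one copy
        merged.append({**rule, "id": next_id})  # renumber to avoid ID collisions
        seen.add(fingerprint(rule))
        next_id += 1
    return merged
```

Content-based fingerprints are what make the union safe: two branches can add the same rule independently without it appearing twice after the merge.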

Managing rules over time

aida rules list      # list all rules grouped by category
aida rules dedupe    # surface rules with >40% keyword overlap for manual review
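One plausible reading of the ">40% keyword overlap" heuristic is a Jaccard similarity over each rule's word set. The sketch below shows that interpretation; the tokenization and the exact overlap measure are assumptions, not AIDA's implementation.

```python
# Illustrative sketch of a >40% keyword-overlap check like the one behind
# `aida rules dedupe`. Tokenization and Jaccard similarity are assumptions.
import re

def keywords(text: str) -> set[str]:
    """Lowercase word tokens, ignoring very short words."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 2}

def overlap(a: str, b: str) -> float:
    """Jaccard overlap between the two rules' keyword sets."""
    ka, kb = keywords(a), keywords(b)
    if not ka or not kb:
        return 0.0
    return len(ka & kb) / len(ka | kb)

def near_duplicates(rules: list[str], threshold: float = 0.4):
    """Yield pairs of rule texts whose keyword overlap exceeds the threshold."""
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            if overlap(a, b) > threshold:
                yield (a, b)
```

Note the command only surfaces candidate pairs for manual review; it never deletes rules on its own.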

Rules have a status field (active / deprecated). When project conventions evolve, deprecate old rules — they stop appearing in the .md files your AI reads but remain in rules.json as an audit trail.


Roadmap

  • Export reports as PDF / HTML (H1/H2 performance reviews)
  • Historical trend analysis β€” deviation reduction curves over time
  • Team dashboard with multi-project aggregation
  • VS Code extension for inline deviation alerts
  • Cross-project rule sharing β€” team-wide AI knowledge base

Tech Stack

Runtime Node.js + TypeScript, zero dependencies
Dashboard React 19 + ECharts + Tailwind CSS 4
Protocol MCP over stdio (JSON-RPC 2.0)
Data Local JSON files, no database
Real-time Server-Sent Events (SSE)
i18n Chinese / English, switchable in dashboard

Testing

npm test    # 82 tests across 29 suites

Contributing

Issues, feature requests, and PRs are welcome.

git clone https://github.com/LWTlong/ai-dev-analytics.git
cd ai-dev-analytics
npm install
npm test

License

MIT


Vibe coding without data is just vibing.
Add data, and your AI gets smarter every run.

Get Started in 30 Seconds →

About

An open-source AI coding observability layer. Silently tracks vibe coding sessions via MCP and codifies AI deviations into project rules. 100% local.
