
feat: outbound webhook notifications for important bot events (#130)#219

Merged
BillChirico merged 23 commits into main from feat/issue-130
Mar 2, 2026

Conversation

@BillChirico (Collaborator)

Summary

Implements issue #130 — outbound webhook notifications for important bot events.

What's New

Core Webhook Engine (webhookNotifier.js)

  • HMAC-SHA256 signing via the X-Signature-256 header (compatible with GitHub/Stripe-style verification)
  • Exponential backoff retry — 3 attempts at 1s, 3s, 9s delays
  • Delivery logging — last 100 deliveries per guild stored in DB (pruned automatically)
  • Fire-and-forget dispatch — never blocks the calling event handler
  • Per-endpoint event filtering — each endpoint subscribes to specific event types
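
The signing and retry behavior described above can be sketched as follows. `signPayload` mirrors the module's documented export name, but the body and the `deliverWithRetry` helper are illustrative, not the actual `webhookNotifier.js` code; note the PR is slightly ambiguous on attempt count (the summary says 3 attempts, the sequence diagram shows up to 4), so this sketch does an initial attempt plus three retries.

```javascript
import { createHmac } from 'node:crypto';

// GitHub/Stripe-style signature: hex HMAC-SHA256 of the raw JSON body.
export function signPayload(secret, body) {
  return 'sha256=' + createHmac('sha256', secret).update(body).digest('hex');
}

const RETRY_DELAYS_MS = [1000, 3000, 9000]; // backoff before each retry

// Initial attempt + up to 3 retries; fetchFn is injectable for testing.
export async function deliverWithRetry(url, secret, payload, fetchFn = fetch) {
  const body = JSON.stringify(payload);
  const signature = signPayload(secret, body);
  for (let attempt = 0; attempt <= RETRY_DELAYS_MS.length; attempt++) {
    try {
      const res = await fetchFn(url, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'X-Signature-256': signature,
        },
        body,
      });
      if (res.ok) return { ok: true, status: res.status, attempt };
    } catch {
      // network-level failure: fall through to backoff and retry
    }
    if (attempt < RETRY_DELAYS_MS.length) {
      await new Promise((r) => setTimeout(r, RETRY_DELAYS_MS[attempt]));
    }
  }
  return { ok: false };
}
```

Fire-and-forget dispatch then amounts to calling `deliverWithRetry(...)` without awaiting it in the event handler.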

Event Types

  • bot.disconnected: Discord shard disconnect (non-clean)
  • bot.reconnected: Discord shard resume
  • bot.error: Discord client error
  • moderation.action: Warn/ban/kick/tempban/softban case created
  • health.degraded: Memory >80% heap or event loop lag >100ms
  • config.changed: Any guild config key updated
  • member.flagged: AI triage flagged a message for moderation

Config Schema

{
  "notifications": {
    "webhooks": [
      {
        "id": "uuid-generated",
        "url": "https://your-service.com/webhook",
        "events": ["moderation.action", "member.flagged"],
        "secret": "hmac-secret",
        "enabled": true
      }
    ]
  }
}

API Routes (under /api/v1/guilds/:id/notifications/)

  • GET /webhooks: List endpoints (secrets masked)
  • POST /webhooks: Add endpoint
  • DELETE /webhooks/:endpointId: Remove endpoint
  • POST /webhooks/:endpointId/test: Send test ping
  • GET /deliveries: Delivery log (newest first)
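
A client registering an endpoint against these routes might look like the sketch below. Only the route shape comes from the table above; the bearer-token auth scheme and the response handling are assumptions for illustration.

```javascript
// Hypothetical dashboard-side client for POST /webhooks.
export async function addWebhookEndpoint(baseUrl, token, guildId, endpoint) {
  const res = await fetch(
    `${baseUrl}/api/v1/guilds/${encodeURIComponent(guildId)}/notifications/webhooks`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${token}`, // assumed auth scheme
      },
      body: JSON.stringify(endpoint), // { url, events, secret, enabled }
    },
  );
  if (!res.ok) throw new Error(`Failed to add webhook: ${res.status}`);
  return res.json(); // per the table above, the secret comes back masked
}
```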

Database Migration

  • 005_webhook_notifications.cjs — Adds webhook_delivery_log table with guild/endpoint indexes

Tests

  • 51 new tests across tests/modules/webhookNotifier.test.js and tests/api/routes/notifications.test.js
  • Covers: signing, retry logic, DB logging, event filtering, secret masking, auth, limit capping, delivery log

Closes #130

…ents (#130)

- Add webhookNotifier module with HMAC-SHA256 signing, retry with
  exponential backoff (3 attempts: 1s/3s/9s), and delivery logging
- Add webhook_delivery_log DB migration (005)
- Add notifications.webhooks[] config section
- Add CRUD + test + delivery log API routes under /guilds/:id/notifications/
- Register notificationsRouter in API index
- Hook moderation.action into createCase (moderation.js)
- Hook bot.disconnected/reconnected/error into shard events (index.js)
- Hook config.changed into config change listeners (config-listeners.js)
- Hook member.flagged into triage moderate flow (triage-respond.js)
- Add health.degraded periodic check with memory/event-loop-lag thresholds
- Add getAllGuildIds() to config module for bot-level event broadcasting
- Add WEBHOOK_EVENTS constants, signPayload, testEndpoint, getDeliveryLog exports
- Comprehensive test suite: 51 tests across module + route layers
Copilot AI review requested due to automatic review settings March 2, 2026 04:51

coderabbitai bot commented Mar 2, 2026

Warning

Rate limit exceeded

@BillChirico has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 9 minutes and 51 seconds before requesting another review.


📥 Commits

Reviewing files that changed from the base of the PR and between 82f6775 and 2b3939a.

📒 Files selected for processing (4)
  • src/api/routes/notifications.js
  • src/api/utils/ssrfProtection.js
  • src/index.js
  • tests/api/utils/ssrfProtection.test.js
📝 Walkthrough

Adds a webhook notification system: delivery infrastructure, API routes, DB migration for delivery logs, event hooks (moderation, member flagging, config changes, bot/health events), health checks, SSRF protections, AI feedback pool injection, dashboard proxy routes, and comprehensive tests.

Changes

  • Webhook Notifier Module (src/modules/webhookNotifier.js): New module implementing event firing, HMAC-SHA256 signing, retry with exponential backoff, delivery logging/pruning, endpoint testing, and broadcast across guilds. Exports constants and delivery/testing APIs.
  • Notification API & Integration (src/api/index.js, src/api/routes/notifications.js, src/api/utils/ssrfProtection.js): New protected routes to CRUD/test webhook endpoints and fetch delivery logs; includes input validation, sync/async SSRF URL validation utilities, and integration into the main API mount.
  • Database Migration (migrations/005_webhook_notifications.cjs): New migration creating the webhook_delivery_log table and two indexes (by guild/delivery_time and endpoint/delivery_time) with a down migration to drop them.
  • Event Hooks & Health (src/index.js, src/config-listeners.js, src/modules/moderation.js, src/modules/triage-respond.js, src/utils/health.js): Wires fireEvent calls for bot errors/disconnects/reconnects, moderation actions, member.flagged, and config.changed; adds health degraded checks (memory / event-loop lag) and a measureEventLoopLag export.
  • Config & Utilities (src/modules/config.js): Adds getAllGuildIds() and extends config change listeners to support wildcard (*) listeners.
  • AI Feedback Refactor (src/modules/aiFeedback.js, src/api/routes/ai-feedback.js): Injectable DB pool parameter added to getFeedbackStats/getFeedbackTrend/getRecentFeedback with resolvedPool use; routes updated to require the pool and normalize recent results to camelCase; errors are now propagated.
  • Dashboard Proxy Routes (web/src/app/api/guilds/[guildId]/ai-feedback/.../route.ts): Adds dynamic Next.js app-router proxy endpoints for AI feedback stats and recent data with guild-admin authorization and passthrough query params.
  • SSRF Tests & Module Tests (tests/api/utils/ssrfProtection.test.js, tests/modules/webhookNotifier.test.js, tests/api/routes/notifications.test.js): Comprehensive test suites for the SSRF utilities, webhookNotifier behaviors (signing, retries, logging, event dispatch), and notification routes (CRUD, validation, SSRF allow/block).
  • Test Adjustments & Misc (tests/config-listeners.test.js, tests/redis.test.js, biome.json): Updated test expectations for the extra config listener, changed the Redis test mock to class-based, and added an exclusion entry in biome.json.

Possibly related issues

  • #130: feat: webhook notifications for important bot events — This PR implements the webhook config, event types, HMAC signing, retries, delivery logging, and dashboard API noted in the issue.
  • Refactor: Deduplicate AI feedback SQL queries in API route #188 — Refactor AI feedback functions — The PR’s AI feedback pool injection and route changes address the refactor objectives to centralize DB usage.

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check ✅ Passed: The title clearly summarizes the main feature (outbound webhook notifications for important bot events), matching the extensive changeset.
  • Description check ✅ Passed: The description is comprehensive and directly related to the changeset, detailing the webhook engine, event types, config schema, API routes, database migration, and test coverage.
  • Linked Issues check ✅ Passed: The PR implements all coding objectives from issue #130: webhook config, event types, delivery with retry, API routes, delivery logging, SSRF protection, and comprehensive tests.
  • Out of Scope Changes check ✅ Passed: All changes are scoped to issue #130: webhook infrastructure, API routes, event hooks, database migration, and tests.
  • Docstring Coverage ✅ Passed: Docstring coverage is 87.10%, above the required 80.00% threshold.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Copilot AI left a comment

Pull request overview

This PR implements outbound webhook notifications for important bot events (issue #130). It adds a full webhook delivery engine with HMAC signing, retry logic, delivery logging, and a management API.

Changes:

  • New webhookNotifier.js module with signing, retry/backoff delivery, fire-and-forget dispatch, and delivery logging
  • New API routes under /api/v1/guilds/:id/notifications/ for CRUD management of webhook endpoints and delivery log access
  • Integration hooks in index.js, moderation.js, triage-respond.js, and config-listeners.js to fire events at the appropriate trigger points

Reviewed changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated 6 comments.

  • src/modules/webhookNotifier.js: Core webhook engine: signing, delivery with retry, fireEvent, fireEventAllGuilds, delivery log
  • src/api/routes/notifications.js: REST API for managing webhook endpoints and querying the delivery log
  • src/api/index.js: Registers the new notifications router
  • src/index.js: Wires up bot-level events and the periodic health degraded check
  • src/modules/moderation.js: Fires moderation.action event on case creation
  • src/modules/triage-respond.js: Fires member.flagged event on AI moderation trigger
  • src/config-listeners.js: Fires config.changed event on config updates
  • src/modules/config.js: Adds getAllGuildIds() helper for bot-level event fanout
  • src/utils/health.js: Exports health threshold constants and the measureEventLoopLag utility
  • migrations/005_webhook_notifications.cjs: DB migration creating the webhook_delivery_log table
  • tests/modules/webhookNotifier.test.js: 438-line test suite for the notifier module
  • tests/api/routes/notifications.test.js: 355-line test suite for the notifications API routes
  • tests/config-listeners.test.js: Updates listener count assertion from 11 to 12
  • feat-issue-164: Git subproject pointer (unrelated to this PR's feature)



greptile-apps bot commented Mar 2, 2026

Greptile Summary

This PR implements a comprehensive outbound webhook notification system for important bot events (disconnects, moderation actions, health degradation, config changes, AI flagging). The implementation includes HMAC-SHA256 signing, exponential backoff retry, delivery logging, and per-endpoint event filtering — all well-architected with fire-and-forget dispatch that doesn't block event handlers.

Key Changes

  • Core webhook engine (webhookNotifier.js) with signing, retry logic, and DB logging
  • API routes for webhook CRUD operations with proper authentication and audit logging
  • SSRF protection utilities to prevent targeting internal endpoints
  • Database migration for delivery log table with appropriate indexes
  • Integration points in moderation, AI triage, config changes, and Discord lifecycle events
  • 51 new tests with excellent coverage of edge cases

Security Concerns

  • DNS rebinding vulnerability — SSRF validation uses synchronous checks without DNS resolution. An attacker with API access could configure a webhook URL that initially resolves to a public IP (passing validation), then change DNS to resolve to private IPs (169.254.169.254 for AWS metadata, 10.x.x.x for internal services) before webhook delivery. The fetch() call has no protection against this. Fix by using async SSRF validation with DNS resolution at config time, or re-validating DNS before each delivery attempt.
  • No SSRF re-validation at delivery time compounds the DNS rebinding risk
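
The async validation Greptile asks for can be sketched as below: resolve the hostname and reject private or link-local addresses before delivery. The range list and function names are illustrative, not the project's actual ssrfProtection.js implementation (which is noted later in this thread to already ship an async variant).

```javascript
import { lookup } from 'node:dns/promises';
import { isIP } from 'node:net';

// Illustrative private/link-local IPv4 ranges (not exhaustive, no IPv6).
function isPrivateIPv4(ip) {
  const [a, b] = ip.split('.').map(Number);
  return (
    a === 10 ||
    a === 127 ||
    (a === 169 && b === 254) || // link-local, incl. 169.254.169.254 metadata
    (a === 172 && b >= 16 && b <= 31) ||
    (a === 192 && b === 168)
  );
}

// Throws if the URL is not HTTPS or resolves to a private address.
export async function assertUrlIsPublic(rawUrl) {
  const url = new URL(rawUrl);
  if (url.protocol !== 'https:') throw new Error('HTTPS required');
  const host = url.hostname;
  const addresses = isIP(host)
    ? [{ address: host }] // literal IP: no DNS needed
    : await lookup(host, { all: true });
  for (const { address } of addresses) {
    if (isIP(address) === 4 && isPrivateIPv4(address)) {
      throw new Error(`Resolves to private address ${address}`);
    }
  }
  return url;
}
```

Re-running this before each delivery narrows the rebinding window but does not fully close it; only connecting to the exact IP that passed validation (IP pinning) eliminates the race.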

Issues from Previous Threads (Not Addressed)

  • Health degraded webhook still fires every 60 seconds while system remains degraded (needs state tracking to only fire on healthy→degraded transition)
  • biome.json still includes feat-issue-164 in ignore list
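
The healthy-to-degraded edge triggering requested above can be sketched like this. The threshold values come from the PR description; the function shape and state variable are illustrative, not the project's code.

```javascript
const MEMORY_DEGRADED_THRESHOLD = 0.8; // 80% of heap, per PR description
const EVENT_LOOP_LAG_THRESHOLD_MS = 100;

let wasDegraded = false;

// Returns current degraded state; calls notify() only on the
// healthy -> degraded transition, not on every interval tick.
function checkHealth({ heapUsedRatio, eventLoopLagMs }, notify) {
  const degraded =
    heapUsedRatio > MEMORY_DEGRADED_THRESHOLD ||
    eventLoopLagMs > EVENT_LOOP_LAG_THRESHOLD_MS;
  if (degraded && !wasDegraded) {
    notify({ heapUsedRatio, eventLoopLagMs });
  }
  // Resetting on recovery means the next degraded episode notifies again.
  wasDegraded = degraded;
  return degraded;
}
```

A cooldown timestamp could be layered on top if periodic re-notification during a long outage is desired.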

Performance

  • Delivery log pruning runs on every delivery attempt (up to 4x per webhook with retries), causing unnecessary DB queries for high-volume scenarios

Strengths

  • Excellent test coverage (51 tests covering signing, retry, filtering, SSRF, auth)
  • Proper secret masking in API responses
  • Fire-and-forget design prevents blocking main event handlers
  • Good error handling with Winston logging
  • Appropriate use of parameterized queries
  • Well-documented code with JSDoc annotations

Confidence Score: 2/5

  • Not safe to merge without addressing the DNS rebinding vulnerability which could allow SSRF attacks against internal infrastructure
  • Score reflects a critical DNS rebinding vulnerability in SSRF protection that could allow authenticated attackers to target internal endpoints (AWS metadata, internal services). While the feature is well-implemented with excellent tests, the security issue must be resolved before merge. The unaddressed health check spam and biome.json cleanup are lower priority.
  • Pay close attention to src/api/routes/notifications.js and src/modules/webhookNotifier.js for the DNS rebinding vulnerability fix

Important Files Changed

Filename Overview
src/modules/webhookNotifier.js Implements core webhook delivery with HMAC signing, retry logic, and delivery logging. DNS rebinding vulnerability in fetch calls and inefficient pruning on every delivery.
src/api/routes/notifications.js API routes for webhook management with proper auth and validation. DNS rebinding vulnerability due to synchronous SSRF validation without DNS resolution.
src/api/utils/ssrfProtection.js SSRF protection utilities with both sync and async versions. Async version includes DNS resolution checks to prevent DNS rebinding attacks.
src/index.js Main entry point with webhook integration for bot lifecycle events. Health degraded check fires repeatedly without state tracking.
biome.json Linter config still includes feat-issue-164 in ignore list per previous feedback.

Sequence Diagram

sequenceDiagram
    participant User as Dashboard User
    participant API as API Routes
    participant Config as Config Module
    participant Bot as Discord Bot
    participant Webhook as webhookNotifier
    participant Target as Webhook Endpoint
    participant DB as PostgreSQL

    User->>API: POST /guilds/:id/notifications/webhooks
    API->>API: Validate HTTPS + SSRF check (sync)
    API->>Config: setConfigValue('notifications.webhooks')
    Config->>DB: UPDATE guild_config
    Config-->>API: Config updated
    API-->>User: 201 Created (secret masked)

    Note over Bot: Event occurs (e.g., moderation.action)
    Bot->>Webhook: fireEvent('moderation.action', guildId, data)
    Webhook->>Config: getConfig(guildId)
    Config-->>Webhook: Webhook endpoints
    Webhook->>Webhook: Filter to subscribed endpoints
    
    loop For each endpoint (parallel)
        Webhook->>Webhook: Sign payload with HMAC-SHA256
        loop Retry up to 4 attempts
            Webhook->>Target: POST with X-Signature-256 header
            Target-->>Webhook: Response
            Webhook->>DB: INSERT delivery log
            Webhook->>DB: DELETE old entries (prune)
            alt Success
                Webhook-->>Bot: Done
            else Failure & not last attempt
                Webhook->>Webhook: Sleep (exponential backoff)
            end
        end
    end

Last reviewed commit: 2b3939a

greptile-apps bot left a comment

21 files reviewed, 7 comments


coderabbitai[bot]
coderabbitai bot previously approved these changes Mar 2, 2026
Copilot AI left a comment

Pull request overview

Copilot reviewed 17 out of 17 changed files in this pull request and generated 10 comments.



Copilot AI review requested due to automatic review settings March 2, 2026 09:35
coderabbitai[bot]
coderabbitai bot previously approved these changes Mar 2, 2026
… network errors

result.status || null evaluated to null when status was 0 (network errors/
timeouts), losing the distinction between 'no HTTP status' and 'network-level
failure'. Switch to result.status ?? null so 0 is stored correctly in the DB.

Fixes review thread PRRT_kwDORICdSM5xgDsx
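
A minimal illustration of the distinction this commit fixes:

```javascript
// `||` replaces every falsy value, so an HTTP status of 0 (used here, as in
// the commit, to mean a network-level failure) collapses into null;
// `??` only replaces null/undefined and preserves the 0.
const statuses = [200, 0, undefined];
export const withOr = statuses.map((s) => s || null); // [200, null, null]
export const withNullish = statuses.map((s) => s ?? null); // [200, 0, null]
```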
coderabbitai[bot]
coderabbitai bot previously approved these changes Mar 2, 2026
Copilot AI left a comment

Pull request overview

Copilot reviewed 18 out of 18 changed files in this pull request and generated 9 comments.



Comment on lines +353 to +358
/**
* Get all guild IDs that have config entries (including 'global').
* Used by webhook notifier to fire bot-level events to all guilds.
*
* @returns {string[]} Array of guild IDs (excluding 'global')
*/
Copilot AI Mar 2, 2026

The JSDoc summary line says "Get all guild IDs that have config entries (including 'global')" but the function body explicitly filters out 'global' with .filter((id) => id !== 'global'). The correct description, as reflected in the @returns tag, is "excluding 'global'". The summary should say "excluding 'global'" rather than "including 'global'".

Comment on lines +1 to +28
/**
* Webhook Notifier Module
*
* Delivers outbound webhook notifications to configured endpoints when
* important bot events occur. Supports HMAC-SHA256 signing, per-guild
* endpoint configuration, exponential backoff retry, and delivery logging.
*
* Event types:
* bot.disconnected - Discord gateway disconnection
* bot.reconnected - Successful reconnection
* bot.error - Unhandled error
* moderation.action - Warning/ban/kick issued
* health.degraded - Memory >80% or event loop lag >100ms
* config.changed - Config updated via dashboard
* member.flagged - AI flagged a member's message
*/

import { createHmac } from 'node:crypto';
import { getPool } from '../db.js';
import { info, error as logError, warn } from '../logger.js';
import { getConfig } from './config.js';

/** @type {string[]} All supported event types */
export const WEBHOOK_EVENTS = [
'bot.disconnected',
'bot.reconnected',
'bot.error',
'moderation.action',
Copilot AI Mar 2, 2026

AGENTS.md mandates that when a new module is added, the Key Files table and config documentation must be updated (see AGENTS.md lines 178–194). Neither src/modules/webhookNotifier.js nor src/api/routes/notifications.js have been added to the AGENTS.md Key Files table, and README.md has no entry for the new notifications config section (notifications.webhooks[]) or the new API routes. This is explicitly a "non-negotiable" requirement per the project guidelines.

Comment on lines +363 to +367
// 7. Stop health check interval
if (healthCheckInterval) {
clearInterval(healthCheckInterval);
healthCheckInterval = null;
}
Copilot AI Mar 2, 2026

The healthCheckInterval is stopped at step 7 of gracefulShutdown, but the database pool is already closed at step 4. If the interval fires between steps 4 and 7, the fireEventAllGuilds call inside it will invoke deliverToEndpoint, which calls pool.query(...) on a closed pool. This will produce DB errors during shutdown. The interval should be cleared in step 1 (alongside stopTriage() and the other background services) before the DB is closed.
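
The reordering Copilot suggests amounts to stopping timers before closing the resources they use. A sketch with illustrative names (the project's actual gracefulShutdown steps are not shown in this excerpt):

```javascript
// Clear background work first, close shared resources last, so an interval
// tick can never hit a closed DB pool mid-shutdown.
export async function gracefulShutdown({ healthCheckInterval, stopTriage, closePool }) {
  // 1. Stop background services and timers
  if (healthCheckInterval) clearInterval(healthCheckInterval);
  stopTriage();
  // ... flush queues, close servers, etc. ...
  // Final step: only now is it safe to close the DB pool
  await closePool();
}
```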

Comment on lines +134 to +135
*
* @param {string[]} guildIds - Guild IDs to notify
Copilot AI Mar 2, 2026

The JSDoc @param {string[]} guildIds - Guild IDs to notify on startHealthDegradedCheck is incorrect — the function takes no parameters. This is misleading for IntelliSense users.

Suggested change
*
* @param {string[]} guildIds - Guild IDs to notify

Comment on lines +139 to +157

// Prune old log entries for this guild (keep most recent MAX_LOG_ENTRIES)
await pool.query(
`DELETE FROM webhook_delivery_log
WHERE guild_id = $1
AND id NOT IN (
SELECT id FROM webhook_delivery_log
WHERE guild_id = $1
ORDER BY delivered_at DESC
LIMIT $2
)`,
[guildId, MAX_LOG_ENTRIES],
);
} catch (dbErr) {
warn('Failed to log webhook delivery', { error: dbErr.message });
}
}

if (result.ok) {
Copilot AI Mar 2, 2026

The prune query (DELETE … WHERE id NOT IN …) runs on every delivery attempt, including failed retries. With 3 retries and 20 endpoints firing simultaneously, this could run up to 80 pruning queries per event. The pruning should only run after a successful delivery (or at most once per deliverToEndpoint call, e.g. after the loop), not on every attempt including failures. Moving the prune call outside the retry loop would reduce unnecessary DB load.

Suggested change (run pruning only after a successful delivery, outside the per-attempt logging path):

    } catch (dbErr) {
      warn('Failed to log webhook delivery', { error: dbErr.message });
    }
  }

  if (result.ok) {
    // Prune old log entries for this guild (keep most recent MAX_LOG_ENTRIES).
    // This is done only after a successful delivery to avoid running the
    // potentially expensive DELETE query on every failed retry attempt.
    if (pool) {
      try {
        await pool.query(
          `DELETE FROM webhook_delivery_log
           WHERE guild_id = $1
             AND id NOT IN (
               SELECT id FROM webhook_delivery_log
               WHERE guild_id = $1
               ORDER BY delivered_at DESC
               LIMIT $2
             )`,
          [guildId, MAX_LOG_ENTRIES],
        );
      } catch (dbErr) {
        warn('Failed to prune webhook delivery logs', { error: dbErr.message });
      }
    }

Comment on lines +12 to +49
export async function GET(
request: NextRequest,
{ params }: { params: Promise<{ guildId: string }> },
) {
const { guildId } = await params;
if (!guildId) {
return NextResponse.json({ error: 'Missing guildId' }, { status: 400 });
}

const authError = await authorizeGuildAdmin(
request,
guildId,
'[api/guilds/:guildId/ai-feedback/stats]',
);
if (authError) return authError;

const config = getBotApiConfig('[api/guilds/:guildId/ai-feedback/stats]');
if (config instanceof NextResponse) return config;

const upstreamUrl = buildUpstreamUrl(
config.baseUrl,
`/guilds/${encodeURIComponent(guildId)}/ai-feedback/stats`,
'[api/guilds/:guildId/ai-feedback/stats]',
);
if (upstreamUrl instanceof NextResponse) return upstreamUrl;

const days = request.nextUrl.searchParams.get('days');
if (days !== null) {
upstreamUrl.searchParams.set('days', days);
}

return proxyToBotApi(
upstreamUrl,
config.secret,
'[api/guilds/:guildId/ai-feedback/stats]',
'Failed to fetch AI feedback stats',
);
}
Copilot AI Mar 2, 2026

The new GET handler in this file has no corresponding web test file, while similar Next.js proxy routes (e.g., web/tests/api/guild-analytics.test.ts) have comprehensive tests covering auth, guild permission checks, upstream proxy behavior, and error cases. Tests for the ai-feedback/stats and ai-feedback/recent routes should be added following the same pattern.

Comment on lines +12 to +49
export async function GET(
request: NextRequest,
{ params }: { params: Promise<{ guildId: string }> },
) {
const { guildId } = await params;
if (!guildId) {
return NextResponse.json({ error: 'Missing guildId' }, { status: 400 });
}

const authError = await authorizeGuildAdmin(
request,
guildId,
'[api/guilds/:guildId/ai-feedback/recent]',
);
if (authError) return authError;

const config = getBotApiConfig('[api/guilds/:guildId/ai-feedback/recent]');
if (config instanceof NextResponse) return config;

const upstreamUrl = buildUpstreamUrl(
config.baseUrl,
`/guilds/${encodeURIComponent(guildId)}/ai-feedback/recent`,
'[api/guilds/:guildId/ai-feedback/recent]',
);
if (upstreamUrl instanceof NextResponse) return upstreamUrl;

const limit = request.nextUrl.searchParams.get('limit');
if (limit !== null) {
upstreamUrl.searchParams.set('limit', limit);
}

return proxyToBotApi(
upstreamUrl,
config.secret,
'[api/guilds/:guildId/ai-feedback/recent]',
'Failed to fetch recent AI feedback',
);
}
Copilot AI Mar 2, 2026

The new GET handler in this file has no corresponding web test file, while similar Next.js proxy routes (e.g., web/tests/api/guild-analytics.test.ts) have comprehensive tests covering auth, guild permission checks, upstream proxy behavior, and error cases. Tests for the ai-feedback/recent route should be added following the same pattern.

coderabbitai bot left a comment

Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/modules/aiFeedback.js (1)

177-181: ⚠️ Potential issue | 🟠 Major

Fix incorrect fallback for negative feedback count.

Defaulting negative to 1 creates impossible aggregates (e.g., total = 0, negative = 1).

Proposed fix
-    const positive = row?.positive || 0;
-    const negative = row?.negative || 1;
-    const total = row?.total || 0;
+    const positive = row?.positive ?? 0;
+    const negative = row?.negative ?? 0;
+    const total = row?.total ?? 0;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/aiFeedback.js` around lines 177 - 181, The code incorrectly
defaults negative to 1 causing impossible aggregates; change the fallback so
negative = row?.negative || 0 and compute total as row?.total ?? (positive +
negative) (or at least ensure total uses 0 if missing), then keep ratio = total
> 0 ? Math.round((positive / total) * 100) : null; update the variables row,
positive, negative, total, ratio accordingly in src/modules/aiFeedback.js to use
these corrected fallbacks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@biome.json`:
- Line 12: Remove the stale exclusion string "!feat-issue-164" from the
exclusions array in biome.json: locate the entry with the literal
"!feat-issue-164" and delete it so the JSON no longer references that
non-existent directory (ensure the surrounding commas remain valid and the JSON
stays well-formed).

In `@src/api/routes/ai-feedback.js`:
- Around line 110-117: The catch blocks around the parallel calls to
getFeedbackStats and getFeedbackTrend (and the similar one at lines 216-220) are
swallowing errors; instead, log the error with context and re-throw a
route-standard custom error from src/utils/errors.js. Replace the bare return
res.status(500)... in the catch by calling the request/process logger (same
logger used elsewhere in this route) with a descriptive message and the caught
error, then throw the appropriate custom error class from src/utils/errors.js
(e.g., ApiError/RouteError) so the shared API error handler can format the
response; keep references to getFeedbackStats and getFeedbackTrend and the
surrounding try/catch when making this change.

In `@src/api/routes/notifications.js`:
- Line 158: The info log in the webhook creation uses the full URL
(info('Webhook endpoint added', { guildId, endpointId: newEndpoint.id, url })),
which may leak embedded tokens; change the logging to omit or redact the secret
portion of the URL and log only safe identifiers (e.g., guildId and
newEndpoint.id) plus a redacted URL (remove query string and/or replace path
token segments with "[REDACTED]" or show only hostname). Update the log call in
the webhook creation path to stop emitting the full url and instead include the
redactedUrl or omit url entirely and keep endpointId: newEndpoint.id for
traceability.
- Around line 300-301: The current parse of req.query.limit allows negative
values (e.g., -5) to pass through into the DB LIMIT; update the limit
calculation to clamp to a positive range before applying the upper bound.
Replace the existing parseInt(req.query.limit, 10) logic so that you first
coerce the query value to a number (falling back to the default 50), then apply
Math.max(..., 1) to ensure at least 1, and finally Math.min(..., 100) to cap at
100; adjust the expression where the variable limit is defined (referencing
limit and req.query.limit and the parseInt usage) so the final value is always
between 1 and 100.
- Around line 61-64: The catch blocks currently call logError and return inline
500 responses; replace this pattern by logging the error with context using
logError (preserving guildId and err) and then re-throw a standardized custom
error (e.g., throw new InternalServerError('Failed to retrieve webhook
endpoints', { cause: err })) from the utilities error classes; update the top of
the file to import the appropriate class (InternalServerError or ApiError) and
apply the same change to the other catch blocks referenced (around the handlers
that currently return 500 inline) so all route handlers use logError(...) then
throw new InternalServerError(..., { cause: err }) instead of
res.status(500).json(...)
- Around line 117-153: The payload currently coerces types which lets invalid
values slip through (e.g., enabled: "false" becomes true and non-string secret
values are accepted); update the validation in src/api/routes/notifications.js
by (1) removing the default boolean coercion on destructuring (do not set
enabled = true on const { url, events, secret, enabled } = req.body || {}), (2)
add explicit checks after the events validation to return 400 if secret is
provided and typeof secret !== 'string', and to return 400 if enabled is
provided and typeof enabled !== 'boolean', and (3) when constructing newEndpoint
(the object with id: randomUUID(), url, events, enabled: ...), set enabled to
enabled === undefined ? true : enabled (instead of Boolean(enabled)) so only a
real boolean or the implicit default true is persisted.
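The URL-redaction change described above can be sketched as a small helper; the function name and the exact redacted shape are illustrative, not existing code in this repo:

```javascript
// Hypothetical helper: reduce a webhook URL to a safe, loggable form.
// Path and query segments (which may embed tokens) are dropped; only the
// protocol and hostname survive.
function redactWebhookUrl(rawUrl) {
  try {
    const { protocol, hostname } = new URL(rawUrl);
    return `${protocol}//${hostname}/[REDACTED]`;
  } catch {
    return '[INVALID_URL]'; // unparseable input never reaches the logs verbatim
  }
}
```

The log call would then emit `redactWebhookUrl(url)` (or omit the URL entirely) alongside `guildId` and `newEndpoint.id`.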
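The limit-clamping fix described above amounts to a three-step coercion; this is a minimal sketch with illustrative names, matching the default of 50 and the 1–100 bounds from the prompt:

```javascript
// Coerce req.query.limit: fall back to 50 when unparseable, floor at 1, cap at 100.
function clampLimit(raw, { fallback = 50, min = 1, max = 100 } = {}) {
  const parsed = Number.parseInt(raw, 10);
  const value = Number.isNaN(parsed) ? fallback : parsed;
  return Math.min(Math.max(value, min), max);
}
```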
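The log-then-rethrow pattern described above might look like the following sketch; `InternalServerError` here stands in for the real class in `src/utils/errors.js`, whose actual shape may differ:

```javascript
// Sketch, assuming an InternalServerError class resembling the one suggested
// in the review; the shared error middleware is expected to format the response.
class InternalServerError extends Error {
  constructor(message, options = {}) {
    super(message, options); // { cause } keeps the original error attached
    this.name = 'InternalServerError';
    this.status = 500;
  }
}

async function listWebhooks(loadEndpoints, logError, guildId) {
  try {
    return await loadEndpoints(guildId);
  } catch (err) {
    // Log with context first, then re-throw the standardized error.
    logError('Failed to list webhook endpoints', { guildId, error: err.message });
    throw new InternalServerError('Failed to retrieve webhook endpoints', { cause: err });
  }
}
```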

In `@src/index.js`:
- Around line 137-151: The startHealthDegradedCheck function currently emits
health.degraded every interval while conditions hold; add gating by tracking a
state timestamp (e.g., lastDegradedAt) and only call
fireEventAllGuilds('health.degraded', ...) when the service transitions from
healthy to degraded or when a configurable cooldown (e.g.,
DEGRADED_NOTIFICATION_COOLDOWN_MS) has passed since the last notification;
update the state when emitting and reset/clear lastDegradedAt when the system
recovers (degraded === false) so future transitions will re-notify; use the
existing symbols healthCheckInterval, MEMORY_DEGRADED_THRESHOLD,
EVENT_LOOP_LAG_THRESHOLD_MS, and fireEventAllGuilds to implement this behavior.
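The transition-plus-cooldown gating described above can be isolated into a small stateful gate; this is a sketch under the prompt's assumptions, and the cooldown value is illustrative:

```javascript
const DEGRADED_NOTIFICATION_COOLDOWN_MS = 15 * 60 * 1000; // illustrative default

// Returns a function that decides whether a health.degraded notification
// should fire: only on the healthy->degraded transition, or after the
// cooldown has elapsed while still degraded. Recovery resets the state.
function createDegradedGate(cooldownMs = DEGRADED_NOTIFICATION_COOLDOWN_MS) {
  let lastDegradedAt = null;
  return function shouldNotify(degraded, now = Date.now()) {
    if (!degraded) {
      lastDegradedAt = null; // recovered: the next transition re-notifies
      return false;
    }
    if (lastDegradedAt === null || now - lastDegradedAt >= cooldownMs) {
      lastDegradedAt = now;
      return true;
    }
    return false;
  };
}
```

In `startHealthDegradedCheck`, `fireEventAllGuilds('health.degraded', ...)` would then be called only when the gate returns true.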

In `@src/modules/webhookNotifier.js`:
- Around line 114-117: getPool() can throw when the DB isn't initialized; wrap
calls to getPool() in a try/catch at the start of deliverToEndpoint and at other
call sites in this module that invoke getPool(), capture the error and set pool
to null (or a safe fallback), then proceed with the existing webhook delivery
and fallback logging logic so delivery attempts and fallback paths still run
even if getPool() fails; reference the functions deliverToEndpoint and the local
getPool() call sites when applying the change.
- Around line 62-80: The retry flow currently sends new POSTs with no stable id,
causing duplicate processing; generate a single stable delivery identifier
(e.g., a UUID v4) at the start of the overall send/delivery flow and thread it
through retries by adding it to request headers (use a consistent header like
"Idempotency-Key" or "X-Delivery-Id") in attemptDelivery so every retry uses the
same id; update the caller(s) that invoke attemptDelivery to accept and pass
this deliveryId (or change attemptDelivery signature to accept an optional
deliveryId and generate one only if missing), and ensure signPayload usage is
kept consistent (if receivers expect the signature to cover the id, include the
header/value in the signed material or keep signing of body only and document
that the id is a header) so retries carry a deduplication handle.
- Around line 243-258: The getDeliveryLog function has an incorrect JSDoc
default (says 100 while code uses 50) and only clamps the upper bound, allowing
negative or non-integer limits to reach the SQL; update the JSDoc to show the
actual default of 50, coerce/normalize the limit at the top of getDeliveryLog
(e.g., parseInt or Number.isInteger logic), clamp it to a safe range with
Math.max(0, Math.min(limit, MAX_LOG_ENTRIES)), and pass that normalized value
(e.g., normalizedLimit) into the pool.query parameter for LIMIT $2 so negative
or invalid values cannot be used in the SQL.
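The `getPool()` hardening described above is essentially a safe wrapper; a minimal sketch, with the wrapper name being illustrative:

```javascript
// Never let an uninitialized DB abort a webhook delivery: swallow the
// getPool() failure and return null so delivery and fallback logging
// (which already handle a null pool) still run.
function safeGetPool(getPool) {
  try {
    return getPool();
  } catch {
    return null;
  }
}
```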
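The stable-delivery-id threading described above could be sketched as follows; the header names and the helper itself are illustrative, not the module's actual API, and here the signature stays over the body only (the id rides as a header):

```javascript
import { randomUUID } from 'node:crypto';

// Build headers for one logical delivery. The same deliveryId is reused on
// every retry so receivers can deduplicate; generate it once per delivery,
// not per attempt.
function buildDeliveryHeaders(signature, deliveryId = randomUUID()) {
  return {
    headers: {
      'Content-Type': 'application/json',
      'X-Delivery-Id': deliveryId, // stable across retries (illustrative name)
      ...(signature ? { 'X-Signature': signature } : {}),
    },
    deliveryId,
  };
}
```

`attemptDelivery` would accept an optional `deliveryId`, generate one only if missing, and pass it unchanged into each retry.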

In `@tests/api/routes/notifications.test.js`:
- Around line 175-183: Add a new test next to the existing invalid-URL case that
posts to the same endpoint (/api/v1/guilds/${GUILD_ID}/notifications/webhooks)
using request(app) and authHeaders(), send a payload with url:
'http://example.com' (or another http:// URL) and events: ['bot.error'], and
assert a 400 response and that res.body.error mentions HTTPS (or at least 'URL')
to ensure non-HTTPS URLs are rejected; reference the existing test name "should
return 400 for invalid URL format" and reuse GUILD_ID and authHeaders() so the
HTTPS-only enforcement cannot regress silently.

In `@tests/config-listeners.test.js`:
- Around line 97-100: The test increased expected listener count but doesn't
assert the wildcard ('*') listener is present; update the test that calls
registerConfigListeners to also verify that onConfigChange was invoked for the
'*' channel (e.g., check one of onConfigChange's calls has '*' as the first
argument or that a call exists where the listener key is '*') so the webhook
wildcard listener registration is explicitly validated alongside the call count.

In `@tests/modules/webhookNotifier.test.js`:
- Around line 50-53: The test teardown currently deletes global.fetch which can
leak state; instead capture the original fetch reference (e.g., const
originalFetch = global.fetch or declare let originalFetch and set it in a
beforeAll/beforeEach) and in afterEach call vi.restoreAllMocks(); then restore
global.fetch = originalFetch (handling the case it was undefined by setting it
to undefined) rather than using delete; update the afterEach block that
references global.fetch to perform this restore so tests remain isolated.
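The capture-and-restore pattern described above, stripped of the Vitest specifics, is just save/patch/restore; a framework-agnostic sketch (in the actual suite this would live in `beforeEach`/`afterEach` next to `vi.restoreAllMocks()`):

```javascript
// Patch a property on some scope (e.g. globalThis.fetch) for the duration of
// fn, then restore the captured original — even when the original was
// undefined — instead of using delete.
function withPatchedGlobal(scope, key, replacement, fn) {
  const original = scope[key]; // capture first; may legitimately be undefined
  scope[key] = replacement;
  try {
    return fn();
  } finally {
    scope[key] = original; // restore, handling the undefined case too
  }
}
```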

---

Outside diff comments:
In `@src/modules/aiFeedback.js`:
- Around line 177-181: The code incorrectly defaults negative to 1 causing
impossible aggregates; change the fallback so negative = row?.negative || 0 and
compute total as row?.total ?? (positive + negative) (or at least ensure total
uses 0 if missing), then keep ratio = total > 0 ? Math.round((positive / total)
* 100) : null; update the variables row, positive, negative, total, ratio
accordingly in src/modules/aiFeedback.js to use these corrected fallbacks.
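The corrected fallbacks described above can be sketched as a pure function; the name `computeFeedbackRatio` is illustrative, but the fallback logic follows the fix (negative defaults to 0, total derives from positive + negative when the row omits it):

```javascript
// Aggregate a feedback row safely: missing counts become 0, total falls back
// to the sum of the parts, and ratio is null when there is no data.
function computeFeedbackRatio(row) {
  const positive = row?.positive ?? 0;
  const negative = row?.negative ?? 0; // previously defaulted to 1, skewing aggregates
  const total = row?.total ?? positive + negative;
  const ratio = total > 0 ? Math.round((positive / total) * 100) : null;
  return { positive, negative, total, ratio };
}
```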

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4c49a09 and 1fe44d7.

📒 Files selected for processing (19)
  • biome.json
  • migrations/005_webhook_notifications.cjs
  • src/api/index.js
  • src/api/routes/ai-feedback.js
  • src/api/routes/notifications.js
  • src/config-listeners.js
  • src/index.js
  • src/modules/aiFeedback.js
  • src/modules/config.js
  • src/modules/moderation.js
  • src/modules/triage-respond.js
  • src/modules/webhookNotifier.js
  • src/utils/health.js
  • tests/api/routes/notifications.test.js
  • tests/config-listeners.test.js
  • tests/modules/webhookNotifier.test.js
  • tests/redis.test.js
  • web/src/app/api/guilds/[guildId]/ai-feedback/recent/route.ts
  • web/src/app/api/guilds/[guildId]/ai-feedback/stats/route.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Greptile Review
🧰 Additional context used
📓 Path-based instructions (7)
migrations/*.cjs

📄 CodeRabbit inference engine (AGENTS.md)

Database migrations must use CommonJS (.cjs) format and follow naming convention NNN_description.cjs where NNN is a zero-padded sequence number

Files:

  • migrations/005_webhook_notifications.cjs
**/*.js

📄 CodeRabbit inference engine (AGENTS.md)

**/*.js: Use ESM modules — use import/export, never require()
Always use node: protocol for Node.js builtins (e.g. import { readFileSync } from 'node:fs')
Always use semicolons in JavaScript code
Use single quotes for strings (enforced by Biome)
Use 2-space indentation (enforced by Biome)

Files:

  • src/modules/aiFeedback.js
  • src/api/index.js
  • src/api/routes/ai-feedback.js
  • tests/api/routes/notifications.test.js
  • src/index.js
  • src/modules/config.js
  • src/api/routes/notifications.js
  • src/utils/health.js
  • tests/config-listeners.test.js
  • src/modules/moderation.js
  • src/modules/triage-respond.js
  • src/config-listeners.js
  • src/modules/webhookNotifier.js
  • tests/modules/webhookNotifier.test.js
  • tests/redis.test.js
src/**/*.js

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.js: Always use Winston logger — import { info, warn, error } from '../logger.js'. NEVER use console.log, console.warn, console.error, or any console.* method in src/ files — replace any existing console calls with Winston equivalents
Pass structured metadata to Winston logs: info('Message processed', { userId, channelId })

Files:

  • src/modules/aiFeedback.js
  • src/api/index.js
  • src/api/routes/ai-feedback.js
  • src/index.js
  • src/modules/config.js
  • src/api/routes/notifications.js
  • src/utils/health.js
  • src/modules/moderation.js
  • src/modules/triage-respond.js
  • src/config-listeners.js
  • src/modules/webhookNotifier.js
src/modules/*.js

📄 CodeRabbit inference engine (AGENTS.md)

Per-request modules (AI, spam, moderation) should call getConfig(guildId) on every invocation for automatic config changes. Stateful resources should use onConfigChange listeners for reactive updates

Files:

  • src/modules/aiFeedback.js
  • src/modules/config.js
  • src/modules/moderation.js
  • src/modules/triage-respond.js
  • src/modules/webhookNotifier.js
src/api/routes/*.js

📄 CodeRabbit inference engine (AGENTS.md)

Always use custom error classes from src/utils/errors.js and log errors with context before re-throwing

Files:

  • src/api/routes/ai-feedback.js
  • src/api/routes/notifications.js
tests/**/*.js

📄 CodeRabbit inference engine (AGENTS.md)

All new code must include tests. Test coverage must maintain an 80% threshold on statements, branches, functions, and lines. Run pnpm test before every commit

Files:

  • tests/api/routes/notifications.test.js
  • tests/config-listeners.test.js
  • tests/modules/webhookNotifier.test.js
  • tests/redis.test.js
web/src/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Use TypeScript with type safety. Share contracts between dashboard UI and API responses via web/src/types/analytics.ts and similar type definition files

Files:

  • web/src/app/api/guilds/[guildId]/ai-feedback/stats/route.ts
  • web/src/app/api/guilds/[guildId]/ai-feedback/recent/route.ts
🧠 Learnings (7)
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to migrations/*.cjs : Database migrations must use CommonJS (.cjs) format and follow naming convention `NNN_description.cjs` where NNN is a zero-padded sequence number

Applied to files:

  • migrations/005_webhook_notifications.cjs
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to src/modules/*.js : Per-request modules (AI, spam, moderation) should call `getConfig(guildId)` on every invocation for automatic config changes. Stateful resources should use `onConfigChange` listeners for reactive updates

Applied to files:

  • src/modules/aiFeedback.js
  • src/index.js
  • src/modules/config.js
  • src/config-listeners.js
  • src/modules/webhookNotifier.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to src/commands/*.js : Moderation commands must follow the shared pattern: `deferReply({ ephemeral: true })`, validate inputs, `sendDmNotification()`, execute Discord action, `createCase()`, `sendModLogEmbed()`, `checkEscalation()`

Applied to files:

  • src/modules/moderation.js
  • src/modules/triage-respond.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Moderation case numbering is per-guild sequential and assigned atomically using `COALESCE(MAX(case_number), 0) + 1` in a single INSERT statement

Applied to files:

  • src/modules/moderation.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to src/**/*.js : Always use Winston logger — `import { info, warn, error } from '../logger.js'`. NEVER use `console.log`, `console.warn`, `console.error`, or any `console.*` method in src/ files — replace any existing console calls with Winston equivalents

Applied to files:

  • src/config-listeners.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to AGENTS.md : Keep AGENTS.md (the AI agent guide) synchronized with key files, code conventions, 'how to add' guides, and common pitfalls. Update after every code change that affects these sections

Applied to files:

  • src/modules/webhookNotifier.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to config.json : When adding a new config section or key, document it in README.md's config reference section

Applied to files:

  • src/modules/webhookNotifier.js
🧬 Code graph analysis (13)
src/modules/aiFeedback.js (1)
src/db.js (1)
  • getPool (314-319)
src/api/index.js (2)
src/api/middleware/auth.js (1)
  • requireAuth (36-70)
src/api/middleware/auditLog.js (1)
  • auditLogMiddleware (139-243)
src/api/routes/ai-feedback.js (1)
src/modules/aiFeedback.js (6)
  • pool (101-101)
  • pool (135-135)
  • getFeedbackStats (162-188)
  • getFeedbackTrend (196-223)
  • getRecentFeedback (231-256)
  • row (177-177)
web/src/app/api/guilds/[guildId]/ai-feedback/stats/route.ts (1)
web/src/lib/bot-api-proxy.ts (4)
  • authorizeGuildAdmin (37-69)
  • getBotApiConfig (82-92)
  • buildUpstreamUrl (100-113)
  • proxyToBotApi (135-176)
src/index.js (2)
src/utils/health.js (5)
  • measureEventLoopLag (178-183)
  • MEMORY_DEGRADED_THRESHOLD (165-165)
  • MEMORY_DEGRADED_THRESHOLD (165-165)
  • EVENT_LOOP_LAG_THRESHOLD_MS (170-170)
  • EVENT_LOOP_LAG_THRESHOLD_MS (170-170)
src/modules/webhookNotifier.js (1)
  • fireEventAllGuilds (290-302)
src/api/routes/notifications.js (2)
src/modules/webhookNotifier.js (5)
  • res (76-81)
  • cfg (208-208)
  • result (119-119)
  • testEndpoint (270-279)
  • getDeliveryLog (247-261)
src/modules/config.js (4)
  • getConfig (282-313)
  • err (94-94)
  • setConfigValue (496-606)
  • result (402-402)
tests/config-listeners.test.js (1)
src/config-listeners.js (1)
  • registerConfigListeners (29-122)
src/modules/moderation.js (1)
src/modules/webhookNotifier.js (1)
  • fireEvent (205-238)
src/modules/triage-respond.js (1)
src/modules/webhookNotifier.js (1)
  • fireEvent (205-238)
src/config-listeners.js (2)
src/modules/config.js (2)
  • onConfigChange (363-365)
  • path (441-441)
src/modules/webhookNotifier.js (1)
  • fireEvent (205-238)
src/modules/webhookNotifier.js (3)
src/modules/config.js (9)
  • err (94-94)
  • pool (127-127)
  • pool (146-146)
  • pool (524-524)
  • pool (621-621)
  • pool (679-679)
  • result (402-402)
  • getConfig (282-313)
  • getAllGuildIds (359-361)
src/db.js (1)
  • getPool (314-319)
src/logger.js (2)
  • warn (238-240)
  • info (231-233)
web/src/app/api/guilds/[guildId]/ai-feedback/recent/route.ts (2)
web/src/app/api/guilds/[guildId]/ai-feedback/stats/route.ts (2)
  • dynamic (10-10)
  • GET (12-49)
web/src/lib/bot-api-proxy.ts (4)
  • authorizeGuildAdmin (37-69)
  • getBotApiConfig (82-92)
  • buildUpstreamUrl (100-113)
  • proxyToBotApi (135-176)
tests/modules/webhookNotifier.test.js (1)
src/modules/webhookNotifier.js (12)
  • signPayload (50-52)
  • body (115-115)
  • body (277-277)
  • WEBHOOK_EVENTS (24-32)
  • WEBHOOK_EVENTS (24-32)
  • payload (227-232)
  • payload (271-276)
  • result (119-119)
  • deliverToEndpoint (114-193)
  • fireEvent (205-238)
  • getDeliveryLog (247-261)
  • testEndpoint (270-279)
🪛 GitHub Check: CodeQL
src/api/index.js

[failure] 63-63: Missing rate limiting
This route handler performs authorization, but is not rate-limited.

🔇 Additional comments (18)
tests/redis.test.js (1)

51-55: LGTM!

The class-based mock correctly simulates ioredis instantiation behavior. Using Object.assign(this, mockClient) ensures the mock instance receives the spy methods, allowing proper verification of mockClient.on calls.

web/src/app/api/guilds/[guildId]/ai-feedback/recent/route.ts (1)

1-49: LGTM!

The route handler follows the established pattern from the stats route, correctly implementing:

  • Next.js 15 async params API
  • Guild admin authorization
  • Bot API proxy with proper error handling
  • Optional query parameter passthrough
web/src/app/api/guilds/[guildId]/ai-feedback/stats/route.ts (1)

1-49: LGTM!

Implementation correctly mirrors the recent route pattern with appropriate endpoint-specific differences (days param vs limit param). Authorization, config retrieval, and proxy logic are all properly implemented.

src/api/index.js (2)

19-19: LGTM!

Import follows the established pattern for route modules.


62-63: LGTM!

The notifications router is correctly mounted with requireAuth() and auditLogMiddleware(), following the same protection pattern as other guild-scoped routes. The static analysis hint about missing rate limiting is a false positive — as noted in prior review, the global rate limiter in src/api/server.js applies to all endpoints.

src/modules/moderation.js (2)

14-14: LGTM!

Import follows the ESM pattern used throughout the codebase.


151-160: LGTM!

The webhook notification is correctly implemented:

  • Placed after transaction commit to ensure case exists before notification
  • Fire-and-forget pattern with suppressed errors prevents webhook failures from affecting moderation flow
  • Payload includes all relevant case details for downstream consumers
src/modules/config.js (2)

353-361: LGTM!

The getAllGuildIds() function correctly returns only guilds with config entries. The JSDoc accurately describes its purpose for webhook notification broadcasting.


399-400: LGTM!

The wildcard listener support (path === '*') is cleanly integrated into the existing matching logic, enabling the global config.changed webhook handler without disrupting exact-match or prefix-wildcard listeners.

src/modules/triage-respond.js (1)

249-257: Webhook emission in moderation flow looks correct.

This is a clean non-blocking integration and the payload size guard on reasoning is a good safety measure.

src/config-listeners.js (1)

115-121: Config-change webhook hook is safely scoped.

Filtering logging.* / notifications.* and requiring a concrete guild ID is a good guardrail for this listener.

src/utils/health.js (1)

165-183: Health threshold exports and lag helper look good.

Clear constants + a focused helper keep the integration points straightforward.

migrations/005_webhook_notifications.cjs (1)

1-42: Migration structure and rollback ordering are solid.

This is well-formed and reversible.

As per coding guidelines migrations/*.cjs: "Database migrations must use CommonJS (.cjs) format and follow naming convention NNN_description.cjs where NNN is a zero-padded sequence number".

src/index.js (1)

386-399: Lifecycle webhook hooks are wired cleanly.

Emitting bot.error, bot.disconnected, and bot.reconnected from client/shard events aligns well with the new outbound notification objective.

src/api/routes/notifications.js (1)

52-309: Guild authorization is properly enforced via mount-level middleware.

The notificationsRouter is mounted at src/api/index.js:63 with requireAuth() and auditLogMiddleware(), which runs before any route handler executes. All routes are protected by authentication; there is no privilege-escalation risk.

Minor note: Line 158 logs the raw webhook URL in structured metadata. If webhook URLs may contain secrets or sensitive tokens, sanitize or mask the URL before logging.

tests/api/routes/notifications.test.js (1)

79-360: Strong route-level coverage.

Auth, validation, CRUD, test ping, and delivery-log behaviors are all exercised with good depth.

tests/modules/webhookNotifier.test.js (1)

95-465: Great behavioral test depth for notifier flows.

The suite covers success/failure/retry/filtering/logging paths thoroughly and gives strong regression protection.

src/modules/webhookNotifier.js (1)

50-52: Signing implementation is clean and deterministic.

The HMAC-SHA256 format and output shape are straightforward and correct.
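For receivers of these webhooks, verification is the mirror image of `signPayload`; a sketch assuming the GitHub/Stripe-style `sha256=<hex>` format over the raw JSON body (the actual header name and format come from `signPayload` in `webhookNotifier.js`):

```javascript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Receiver-side verification sketch: recompute the HMAC-SHA256 of the raw
// body and compare in constant time. Header name/format are assumptions.
function verifySignature(rawBody, secret, signatureHeader) {
  const expected = `sha256=${createHmac('sha256', secret).update(rawBody).digest('hex')}`;
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader || '');
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```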

  "!logs",
- "!data"
+ "!data",
+ "!feat-issue-164"

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Check if the feat-issue-164 directory exists
if [ -d "feat-issue-164" ]; then
  echo "=== Directory exists ==="
  echo "Contents:"
  find feat-issue-164 -type f | head -20
  echo ""
  echo "Directory structure:"
  ls -la feat-issue-164
else
  echo "Directory does not exist"
fi

echo ""
echo "=== Searching for references to feat-issue-164 or issue-164 ==="
rg "feat-issue-164|issue-164|feat.issue.164" --max-count 20

Repository: VolvoxLLC/volvox-bot

Length of output: 188


🏁 Script executed:

#!/bin/bash

# Also check the full biome.json file to understand context
echo "=== Full biome.json content ==="
cat biome.json

Repository: VolvoxLLC/volvox-bot

Length of output: 1233


Remove the stale !feat-issue-164 exclusion from biome.json.

The feat-issue-164 directory does not exist in the repository and is not referenced elsewhere in the codebase. This exclusion is leftover from development and should be removed to keep the configuration clean.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@biome.json` at line 12, Remove the stale exclusion string "!feat-issue-164"
from the exclusions array in biome.json: locate the entry with the literal
"!feat-issue-164" and delete it so the JSON no longer references that
non-existent directory (ensure the surrounding commas remain valid and the JSON
stays well-formed).

Comment on lines +110 to +117
try {
[stats, trend] = await Promise.all([
getFeedbackStats(guildId, pool),
getFeedbackTrend(guildId, days, pool),
]);
} catch (_err) {
return res.status(500).json({ error: 'Failed to fetch AI feedback stats' });
}

⚠️ Potential issue | 🟠 Major

Do not swallow route errors in local catch blocks.

These catches return 500 directly without logging context or re-throwing a route-standard custom error, which breaks the shared API error contract.

As per coding guidelines src/api/routes/*.js: "Always use custom error classes from src/utils/errors.js and log errors with context before re-throwing".

Also applies to: 216-220

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/routes/ai-feedback.js` around lines 110 - 117, The catch blocks
around the parallel calls to getFeedbackStats and getFeedbackTrend (and the
similar one at lines 216-220) are swallowing errors; instead, log the error with
context and re-throw a route-standard custom error from src/utils/errors.js.
Replace the bare return res.status(500)... in the catch by calling the
request/process logger (same logger used elsewhere in this route) with a
descriptive message and the caught error, then throw the appropriate custom
error class from src/utils/errors.js (e.g., ApiError/RouteError) so the shared
API error handler can format the response; keep references to getFeedbackStats
and getFeedbackTrend and the surrounding try/catch when making this change.

Comment on lines +61 to +64
} catch (err) {
logError('Failed to list webhook endpoints', { guildId, error: err.message });
return res.status(500).json({ error: 'Failed to retrieve webhook endpoints' });
}

⚠️ Potential issue | 🟠 Major

Align route error handling with shared API error conventions.

These handlers return inline 500 responses instead of logging context and re-throwing standardized custom errors.

As per coding guidelines src/api/routes/*.js: "Always use custom error classes from src/utils/errors.js and log errors with context before re-throwing".

Also applies to: 160-163, 211-213, 264-266, 306-308

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/routes/notifications.js` around lines 61 - 64, The catch blocks
currently call logError and return inline 500 responses; replace this pattern by
logging the error with context using logError (preserving guildId and err) and
then re-throw a standardized custom error (e.g., throw new
InternalServerError('Failed to retrieve webhook endpoints', { cause: err }))
from the utilities error classes; update the top of the file to import the
appropriate class (InternalServerError or ApiError) and apply the same change to
the other catch blocks referenced (around the handlers that currently return 500
inline) so all route handlers use logError(...) then throw new
InternalServerError(..., { cause: err }) instead of res.status(500).json(...)

Comment on lines +117 to +153
const { url, events, secret, enabled = true } = req.body || {};

if (!url || typeof url !== 'string') {
return res.status(400).json({ error: 'Missing or invalid "url"' });
}

if (!/^https:\/\/.+/.test(url)) {
return res.status(400).json({ error: '"url" must be a valid HTTPS URL' });
}

if (!Array.isArray(events) || events.length === 0) {
return res.status(400).json({ error: '"events" must be a non-empty array' });
}

const invalidEvents = events.filter((e) => !WEBHOOK_EVENTS.includes(e));
if (invalidEvents.length > 0) {
return res.status(400).json({
error: `Invalid event types: ${invalidEvents.join(', ')}`,
validEvents: WEBHOOK_EVENTS,
});
}

try {
const cfg = getConfig(guildId);
const existing = Array.isArray(cfg?.notifications?.webhooks) ? cfg.notifications.webhooks : [];

if (existing.length >= 20) {
return res.status(400).json({ error: 'Maximum of 20 webhook endpoints per guild' });
}

const newEndpoint = {
id: randomUUID(),
url,
events,
enabled: Boolean(enabled),
...(secret ? { secret } : {}),
};

⚠️ Potential issue | 🟠 Major

Validate secret and enabled types before persisting.

Current coercion allows invalid payloads (e.g., enabled: "false" becomes true, non-string secret can break downstream signing/delivery behavior).

Proposed fix
 router.post('/:guildId/notifications/webhooks', async (req, res) => {
   const { guildId } = req.params;
   const { url, events, secret, enabled = true } = req.body || {};
+
+  if (secret !== undefined && typeof secret !== 'string') {
+    return res.status(400).json({ error: '"secret" must be a string' });
+  }
+
+  if (enabled !== undefined && typeof enabled !== 'boolean') {
+    return res.status(400).json({ error: '"enabled" must be a boolean' });
+  }

   if (!url || typeof url !== 'string') {
     return res.status(400).json({ error: 'Missing or invalid "url"' });
   }
@@
     const newEndpoint = {
       id: randomUUID(),
       url,
       events,
-      enabled: Boolean(enabled),
+      enabled,
       ...(secret ? { secret } : {}),
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/routes/notifications.js` around lines 117 - 153, The payload
currently coerces types which lets invalid values slip through (e.g., enabled:
"false" becomes true and non-string secret values are accepted); update the
validation in src/api/routes/notifications.js by (1) removing the default
boolean coercion on destructuring (do not set enabled = true on const { url,
events, secret, enabled } = req.body || {}), (2) add explicit checks after the
events validation to return 400 if secret is provided and typeof secret !==
'string', and to return 400 if enabled is provided and typeof enabled !==
'boolean', and (3) when constructing newEndpoint (the object with id:
randomUUID(), url, events, enabled: ...), set enabled to enabled === undefined ?
true : enabled (instead of Boolean(enabled)) so only a real boolean or the
implicit default true is persisted.

Comment on lines +243 to +258
* @param {string} guildId - Guild ID
* @param {number} [limit=100] - Max entries to return
* @returns {Promise<Object[]>} Delivery log entries, newest first
*/
export async function getDeliveryLog(guildId, limit = 50) {
const pool = getPool();
if (!pool) return [];

const { rows } = await pool.query(
`SELECT id, endpoint_id, event_type, status, response_code, response_body, attempt, delivered_at
FROM webhook_delivery_log
WHERE guild_id = $1
ORDER BY delivered_at DESC
LIMIT $2`,
[guildId, Math.min(limit, MAX_LOG_ENTRIES)],
);

⚠️ Potential issue | 🟡 Minor

Normalize limit before SQL and fix the JSDoc default.

Current clamping only enforces max; invalid/negative values can still leak through. Also JSDoc says 100 while code defaults to 50.

Clamp + docs consistency
- * @param {number} [limit=100] - Max entries to return
+ * @param {number} [limit=50] - Max entries to return
@@
 export async function getDeliveryLog(guildId, limit = 50) {
@@
+  const normalizedLimit = Number.isFinite(Number(limit))
+    ? Math.max(1, Math.min(Math.trunc(Number(limit)), MAX_LOG_ENTRIES))
+    : 50;
+
   const { rows } = await pool.query(
@@
-    [guildId, Math.min(limit, MAX_LOG_ENTRIES)],
+    [guildId, normalizedLimit],
   );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/webhookNotifier.js` around lines 243 - 258, The getDeliveryLog
function has an incorrect JSDoc default (says 100 while code uses 50) and only
clamps the upper bound, allowing negative or non-integer limits to reach the
SQL; update the JSDoc to show the actual default of 50, coerce/normalize the
limit at the top of getDeliveryLog (e.g., parseInt or Number.isInteger logic),
clamp it to a safe range with Math.max(0, Math.min(limit, MAX_LOG_ENTRIES)), and
pass that normalized value (e.g., normalizedLimit) into the pool.query parameter
for LIMIT $2 so negative or invalid values cannot be used in the SQL.

Comment on lines +175 to +183
it('should return 400 for invalid URL format', async () => {
const res = await request(app)
.post(`/api/v1/guilds/${GUILD_ID}/notifications/webhooks`)
.set(authHeaders())
.send({ url: 'not-a-url', events: ['bot.error'] });

expect(res.status).toBe(400);
expect(res.body.error).toContain('URL');
});

🧹 Nitpick | 🔵 Trivial

Add an explicit non-HTTPS URL rejection test.

not-a-url is covered, but http://... isn’t. Add a dedicated case so HTTPS-only enforcement can’t regress silently.

Suggested test addition
   it('should return 400 for invalid URL format', async () => {
     const res = await request(app)
       .post(`/api/v1/guilds/${GUILD_ID}/notifications/webhooks`)
       .set(authHeaders())
       .send({ url: 'not-a-url', events: ['bot.error'] });

     expect(res.status).toBe(400);
     expect(res.body.error).toContain('URL');
   });
+
+  it('should return 400 for non-HTTPS URL', async () => {
+    const res = await request(app)
+      .post(`/api/v1/guilds/${GUILD_ID}/notifications/webhooks`)
+      .set(authHeaders())
+      .send({ url: 'http://example.com/hook', events: ['bot.error'] });
+
+    expect(res.status).toBe(400);
+    expect(res.body.error).toContain('HTTPS');
+  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/api/routes/notifications.test.js` around lines 175 - 183, Add a new
test next to the existing invalid-URL case that posts to the same endpoint
(/api/v1/guilds/${GUILD_ID}/notifications/webhooks) using request(app) and
authHeaders(), send a payload with url: 'http://example.com' (or another http://
URL) and events: ['bot.error'], and assert a 400 response and that
res.body.error mentions HTTPS (or at least 'URL') to ensure non-HTTPS URLs are
rejected; reference the existing test name "should return 400 for invalid URL
format" and reuse GUILD_ID and authHeaders() so the HTTPS-only enforcement
cannot regress silently.

Comment on lines +97 to +100
  it('registers exactly 12 listeners', () => {
    const config = { logging: { database: { enabled: false } } };
    registerConfigListeners({ dbPool: {}, config });
-   expect(onConfigChange).toHaveBeenCalledTimes(11);
+   expect(onConfigChange).toHaveBeenCalledTimes(12);
Contributor

🧹 Nitpick | 🔵 Trivial

Consider adding explicit verification for the wildcard listener.

The listener count was updated from 11 to 12, but the test at lines 79-95 doesn't explicitly verify that the '*' wildcard listener (for webhook notifications) is registered. Consider adding a check for completeness:

       expect(registeredKeys).toContain('starboard.*');
       expect(registeredKeys).toContain('reputation.*');
+      expect(registeredKeys).toContain('*');
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/config-listeners.test.js` around lines 97 - 100, The test increased
expected listener count but doesn't assert the wildcard ('*') listener is
present; update the test that calls registerConfigListeners to also verify that
onConfigChange was invoked for the '*' channel (e.g., check one of
onConfigChange's calls has '*' as the first argument or that a call exists where
the listener key is '*') so the webhook wildcard listener registration is
explicitly validated alongside the call count.

  "!coverage",
  "!logs",
- "!data"
+ "!data",

Remove feat-issue-164 from ignore list and remove the submodule from the repository (per previous feedback)

Prompt To Fix With AI
This is a comment left during a code review.
Path: biome.json
Line: 11

Comment:
Remove `feat-issue-164` from ignore list and remove the submodule from the repository (per previous feedback)

How can I resolve this? If you propose a fix, please make it concise.

The previous URL validation regex `/^https?:\/\/.+/` was too permissive
and allowed requests to internal/private network addresses, including:
- localhost
- 127.0.0.1 and other loopback addresses
- 169.254.169.254 (AWS metadata endpoint)
- Private IP ranges (10.x, 172.16-31.x, 192.168.x)

Changes:
- Add src/api/utils/ssrfProtection.js with comprehensive URL validation
- Block private/internal IP ranges, link-local addresses, and localhost variants
- Add DNS resolution check to prevent DNS rebinding attacks (async version)
- Update notifications.js route to use SSRF-safe validation
- Add extensive tests for blocked and allowed URLs

Fixes security vulnerability in webhook notification feature (PR #219).
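
The commit message above can be sketched as a minimal IPv4-only check; the real `ssrfProtection.js` also handles IPv6 literals and DNS resolution, and the function name here is illustrative:

```javascript
// Illustrative IPv4-only sketch of the private/internal checks the commit describes.
function isPrivateIpv4(ip) {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some((p) => !Number.isInteger(p) || p < 0 || p > 255)) {
    return false; // not a dotted-quad IP literal (hostnames are resolved separately)
  }
  const [a, b] = parts;
  return (
    a === 127 ||                         // loopback
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    (a === 169 && b === 254)             // link-local, incl. 169.254.169.254 metadata
  );
}
```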
Copilot AI review requested due to automatic review settings March 2, 2026 10:14
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 21 out of 21 changed files in this pull request and generated 3 comments.



Comment on lines +140 to +154
      // Prune old log entries for this guild (keep most recent MAX_LOG_ENTRIES)
      await pool.query(
        `DELETE FROM webhook_delivery_log
         WHERE guild_id = $1
           AND id NOT IN (
             SELECT id FROM webhook_delivery_log
             WHERE guild_id = $1
             ORDER BY delivered_at DESC
             LIMIT $2
           )`,
        [guildId, MAX_LOG_ENTRIES],
      );
    } catch (dbErr) {
      warn('Failed to log webhook delivery', { error: dbErr.message });
    }
Copilot AI Mar 2, 2026

The pruning query runs on every delivery attempt (including retries), not just after the final attempt. For an endpoint with 4 total attempts (1 initial + 3 retries), this means 4 INSERT + 4 DELETE subquery pairs are executed on the database. The pruning should only run once — either only on success, or unconditionally but only after the loop finishes.
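
One way to restructure along those lines, as a sketch only: `attemptDelivery` and the pool wiring are placeholders, the per-attempt INSERT and backoff sleeps are elided, and only the pruning placement is the point:

```javascript
// Sketch: log every attempt, but run DELETE-based pruning exactly once after the loop.
async function deliverWithRetries(pool, guildId, payload, attemptDelivery) {
  const retryDelaysMs = [1000, 3000, 9000]; // 1 initial attempt + 3 retries
  let success = false;
  for (let attempt = 0; attempt <= retryDelaysMs.length && !success; attempt++) {
    success = await attemptDelivery(payload, attempt);
    // per-attempt INSERT into webhook_delivery_log would stay here
  }
  // Pruning moved out of the per-attempt path: one DELETE per delivery, not per attempt.
  await pool.query(
    `DELETE FROM webhook_delivery_log
     WHERE guild_id = $1
       AND id NOT IN (
         SELECT id FROM webhook_delivery_log
         WHERE guild_id = $1
         ORDER BY delivered_at DESC
         LIMIT $2
       )`,
    [guildId, 100],
  );
  return success;
}
```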

Comment on lines +159 to +161
  // IPv6 check (basic patterns)
  if (normalized.includes(':') && (normalized.startsWith('::1') || normalized.startsWith('fe80:'))) {
    return true;
Copilot AI Mar 2, 2026

The isBlockedHostname function only checks for ::1 and fe80: IPv6 prefixes, but does not call isBlockedIp for full IPv6 literal addresses like ::ffff:127.0.0.1 (IPv4-mapped IPv6 loopback). A URL such as https://[::ffff:127.0.0.1]/webhook would pass through isBlockedHostname (because it starts with ::f, not ::1), skip the IPv4 regex check, and validateUrlForSsrfSync would return { valid: true }, allowing a webhook to be registered pointing at the loopback. The fix is to also call isBlockedIp inside isBlockedHostname for any hostname that contains : (potential IPv6 literal).

Suggested change
-  // IPv6 check (basic patterns)
-  if (normalized.includes(':') && (normalized.startsWith('::1') || normalized.startsWith('fe80:'))) {
-    return true;
+  // IPv6 literal check
+  // Any hostname containing ':' is treated as a potential IPv6 literal (including IPv4-mapped IPv6).
+  // Delegate the blocking decision to isBlockedIp to ensure consistent handling with resolved addresses.
+  if (normalized.includes(':')) {
+    return isBlockedIp(normalized);

Comment on lines +139 to +152
    healthCheckInterval = setInterval(async () => {
      const mem = process.memoryUsage();
      const memRatio = mem.heapTotal > 0 ? mem.heapUsed / mem.heapTotal : 0;
      const lag = await measureEventLoopLag();
      const degraded = memRatio > MEMORY_DEGRADED_THRESHOLD || lag > EVENT_LOOP_LAG_THRESHOLD_MS;
      if (degraded) {
        fireEventAllGuilds('health.degraded', {
          memoryUsedMb: Math.round(mem.heapUsed / 1024 / 1024),
          memoryTotalMb: Math.round(mem.heapTotal / 1024 / 1024),
          memoryRatio: Math.round(memRatio * 100),
          eventLoopLagMs: lag,
        }).catch(() => {});
      }
    }, 60_000).unref();

Health check fires health.degraded webhook every 60 seconds while system remains degraded, potentially spamming endpoints. Track degradation state and only fire when transitioning from healthy→degraded, or implement a cooldown.

Prompt To Fix With AI
This is a comment left during a code review.
Path: src/index.js
Line: 139-152

Comment:
Health check fires `health.degraded` webhook every 60 seconds while system remains degraded, potentially spamming endpoints. Track degradation state and only fire when transitioning from healthy→degraded, or implement a cooldown.

How can I resolve this? If you propose a fix, please make it concise.
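
A minimal edge-trigger sketch for the transition-only firing suggested here (the state variable and function names are illustrative, not taken from src/index.js):

```javascript
// Edge-triggered notifier: fire only on the healthy -> degraded transition,
// not on every tick spent in the degraded state.
let wasDegraded = false;

function maybeFireDegraded(degraded, fire) {
  if (degraded && !wasDegraded) {
    fire(); // first tick after entering the degraded state
  }
  wasDegraded = degraded; // remember state for the next tick
}
```

The interval body would compute `degraded` as it does today and pass it in, so a system stuck at 85% heap for an hour emits one webhook, not sixty.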

Bill Chirico added 5 commits March 2, 2026 06:13
- Removed inner catch blocks that swallowed errors with catch (_err)
- Now errors bubble up to outer catch block and use next(err)
- Follows API error handling conventions
- Use next(err) instead of inline 500 responses in all routes
- Add redactUrl() helper to avoid leaking secrets in logs
- Validate secret and enabled types before persisting
- Clamp limit to positive range (1-100) to prevent DB errors
- Add safeGetPool() wrapper to handle getPool() exceptions
- Use safeGetPool() in both deliverToEndpoint and getDeliveryLog
- Prevents crashes when DB pool is unavailable
- Save original fetch in beforeEach and restore in afterEach
- Properly restores global.fetch to avoid polluting global state
- Improves test isolation
Copilot AI review requested due to automatic review settings March 2, 2026 11:21
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

♻️ Duplicate comments (4)
src/modules/webhookNotifier.js (1)

260-275: ⚠️ Potential issue | 🟠 Major

Normalize limit before SQL and fix the JSDoc default.

LIMIT $2 currently receives unchecked values (e.g., negative/NaN), and the JSDoc default (100) is inconsistent with code (50).

✅ Suggested fix
- * `@param` {number} [limit=100] - Max entries to return
+ * `@param` {number} [limit=50] - Max entries to return
@@
 export async function getDeliveryLog(guildId, limit = 50) {
@@
+  const normalizedLimit = Number.isFinite(Number(limit))
+    ? Math.max(1, Math.min(Math.trunc(Number(limit)), MAX_LOG_ENTRIES))
+    : 50;
+
   const { rows } = await pool.query(
@@
-    [guildId, Math.min(limit, MAX_LOG_ENTRIES)],
+    [guildId, normalizedLimit],
   );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/webhookNotifier.js` around lines 260 - 275, Update
getDeliveryLog: normalize and validate the limit before passing it to the SQL
query and correct the JSDoc default to match the function's default of 50.
Inside getDeliveryLog (and near safeGetPool usage), coerce limit to a number,
sanitize it (e.g., fallback to 50 if NaN, clamp between 1 and MAX_LOG_ENTRIES
using Math.min/Math.max), then pass the sanitized value as the second query
parameter; also update the function JSDoc `@param` default to 50. Ensure the SQL
never receives negative/NaN values by referencing the sanitized variable instead
of the raw limit.
tests/api/routes/notifications.test.js (1)

175-183: ⚠️ Potential issue | 🟡 Minor

Add an explicit non-HTTPS rejection test.

Please add a dedicated http://... case so HTTPS-only enforcement can’t regress silently.

➕ Suggested test
   it('should return 400 for invalid URL format', async () => {
@@
     expect(res.body.error).toContain('URL');
   });
+
+  it('should return 400 for non-HTTPS URL', async () => {
+    const res = await request(app)
+      .post(`/api/v1/guilds/${GUILD_ID}/notifications/webhooks`)
+      .set(authHeaders())
+      .send({ url: 'http://example.com/hook', events: ['bot.error'] });
+
+    expect(res.status).toBe(400);
+    expect(res.body.error).toContain('HTTPS');
+  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/api/routes/notifications.test.js` around lines 175 - 183, Add a new
test case alongside the existing "should return 400 for invalid URL format" that
specifically submits a webhook URL starting with "http://" (e.g.,
'http://example.com') to the POST endpoint used in the file (the route string
`/api/v1/guilds/${GUILD_ID}/notifications/webhooks` and helper calls like
request(app) and authHeaders()) and assert a 400 response and that the error
message mentions HTTPS or rejects non-HTTPS; this ensures HTTPS-only enforcement
can't regress silently.
src/api/routes/notifications.js (2)

83-85: ⚠️ Potential issue | 🟠 Major

Route handlers should log context and re-throw standardized API errors.

These catch blocks call next(err) directly without contextual route logging or wrapping into shared API error classes.

🧩 Example pattern to apply consistently
+import { InternalServerError } from '../../utils/errors.js';
+import { info, error as logError } from '../../logger.js';
@@
   } catch (err) {
-    next(err);
+    logError('Failed to list webhook endpoints', { guildId, error: err?.message });
+    next(new InternalServerError('Failed to list webhook endpoints', { cause: err }));
   }

As per coding guidelines src/api/routes/*.js: "Always use custom error classes from src/utils/errors.js and log errors with context before re-throwing".

Also applies to: 203-205, 252-254, 304-306, 348-350

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/routes/notifications.js` around lines 83 - 85, Catch blocks in
notifications.js currently call next(err) directly; update them to log
contextual info and rethrow a standardized API error from src/utils/errors.js.
Specifically, in each catch in the route handlers (the blocks using next(err))
call the shared logger with route context (HTTP method, req.path, req.params,
req.body, and req.user?.id if available), then convert/wrap the original error
into the project’s API error class (e.g., ApiError or mapToApiError from
src/utils/errors.js) preserving status/message and pass that wrapped error to
next(); apply the same change for the other catch sites referenced (the other
three catch blocks).

23-33: ⚠️ Potential issue | 🟠 Major

Redaction still leaks tokenized webhook paths into logs.

Many webhook providers embed secrets in path segments. Returning parsed.toString() still logs that sensitive path.

🔐 Safer redaction approach
 function redactUrl(url) {
   try {
     const parsed = new URL(url);
-    // Redact query string (may contain secrets)
-    parsed.search = '?[REDACTED]';
-    // Redact password in userinfo if present
-    if (parsed.password) {
-      parsed.password = '[REDACTED]';
-    }
-    return parsed.toString();
+    parsed.username = '';
+    parsed.password = '';
+    return `${parsed.origin}/[REDACTED]`;
   } catch {
     // If URL parsing fails, return a safe placeholder
     return '[INVALID URL]';
   }
 }

Also applies to: 197-201

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/routes/notifications.js` around lines 23 - 33, The redactUrl function
currently returns parsed.toString(), which leaks sensitive tokens embedded in
path segments; update redactUrl (and the duplicate implementation at the other
location) to also redact the path and fragment by replacing parsed.pathname with
a safe placeholder like '/[REDACTED]' (and parsed.hash = ''), preserve host/port
and redacted search/userinfo as already implemented, and ensure the catch
returns a non-sensitive placeholder or the original host-only redaction instead
of the full input URL.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/api/routes/ai-feedback.js`:
- Around line 214-222: The mapping in ai-feedback.js that normalizes rawFeedback
to camelCase is redundantly handling snake_case fallbacks (message_id,
channel_id, user_id, feedback_type, created_at) even though getRecentFeedback
already returns camelCase; remove the "?? row.snake_case" fallbacks and collapse
the mapping to use only camelCase keys (id, messageId, channelId, userId,
feedbackType, createdAt), and if you need protection add a single normalization
function (e.g., normalizeFeedbackRow) at the DB boundary or update
getRecentFeedback to guarantee shape rather than handling dual shapes here.
- Around line 95-99: Replace the inline 503 JSON responses in ai-feedback.js
(where you check req.app.locals.dbPool and similar checks at the other spot)
with throwing a route-standard custom error imported from src/utils/errors.js
(e.g., DatabaseUnavailableError or the appropriate ServiceUnavailable error
class), and before throwing call the route logger with context (use req.log or
req.app.locals.logger) including identifiers like route name, HTTP method and
that dbPool is missing; remove the direct res.status(503).json(...) and let the
central error middleware handle the response; apply the same change for the
second occurrence around the later check (the block currently returning 503 at
lines ~197-201).

In `@src/api/routes/notifications.js`:
- Around line 149-153: validateUrlForSsrfSync returns an object ({ valid, error
}) and does not throw, so the current try/catch does not prevent blocked URLs;
update the notifications route to call validateUrlForSsrfSync(url), inspect the
returned object (check valid === false or presence of error) and return
res.status(400).json({ error: error || 'Invalid URL' }) when validation fails,
removing or bypassing the try/catch around it and ensuring the rest of the
handler only proceeds when valid is true.

In `@src/api/utils/ssrfProtection.js`:
- Around line 61-64: The IPv6 checks in ssrfProtection currently only block ::1
and addresses starting with 'fe80:', which misses the full link-local range
(fe80::/10) and unique local addresses (fd00::/8 / fc00::/7). Update the
validation around normalizedIp (the same area that currently checks
normalizedIp.startsWith('fe80:')) to reject the entire fe80::/10 range (any
address whose first 10 bits are 1111111010, i.e. fe80–febf) and the fc00::/7 ULA
range (fc00–fdff, but at minimum fd00::/8 per RFC4193), in addition to ::1; use
consistent prefix checks or a small CIDR-aware helper used by the module (e.g.,
extend isLocalAddress / isPrivateIp logic) to return true for those ranges so
they are treated as internal and blocked.

In `@tests/api/utils/ssrfProtection.test.js`:
- Around line 81-95: Add explicit IPv6 deny tests inside the existing "IPv6"
describe block to cover unique local and additional link-local ranges: call
isBlockedIp('fd00::1') and isBlockedIp('fc00::1') and assert they return true,
and add another link-local-style case such as isBlockedIp('fe90::1') asserting
true; place these expectations alongside the existing IPv6 tests in
tests/api/utils/ssrfProtection.test.js so the "IPv6" suite covers fd00::/8,
fc00::/8, and a non-fe80 link-local example.

---

Duplicate comments:
In `@src/api/routes/notifications.js`:
- Around line 83-85: Catch blocks in notifications.js currently call next(err)
directly; update them to log contextual info and rethrow a standardized API
error from src/utils/errors.js. Specifically, in each catch in the route
handlers (the blocks using next(err)) call the shared logger with route context
(HTTP method, req.path, req.params, req.body, and req.user?.id if available),
then convert/wrap the original error into the project’s API error class (e.g.,
ApiError or mapToApiError from src/utils/errors.js) preserving status/message
and pass that wrapped error to next(); apply the same change for the other catch
sites referenced (the other three catch blocks).
- Around line 23-33: The redactUrl function currently returns parsed.toString(),
which leaks sensitive tokens embedded in path segments; update redactUrl (and
the duplicate implementation at the other location) to also redact the path and
fragment by replacing parsed.pathname with a safe placeholder like '/[REDACTED]'
(and parsed.hash = ''), preserve host/port and redacted search/userinfo as
already implemented, and ensure the catch returns a non-sensitive placeholder or
the original host-only redaction instead of the full input URL.

In `@src/modules/webhookNotifier.js`:
- Around line 260-275: Update getDeliveryLog: normalize and validate the limit
before passing it to the SQL query and correct the JSDoc default to match the
function's default of 50. Inside getDeliveryLog (and near safeGetPool usage),
coerce limit to a number, sanitize it (e.g., fallback to 50 if NaN, clamp
between 1 and MAX_LOG_ENTRIES using Math.min/Math.max), then pass the sanitized
value as the second query parameter; also update the function JSDoc `@param`
default to 50. Ensure the SQL never receives negative/NaN values by referencing
the sanitized variable instead of the raw limit.

In `@tests/api/routes/notifications.test.js`:
- Around line 175-183: Add a new test case alongside the existing "should return
400 for invalid URL format" that specifically submits a webhook URL starting
with "http://" (e.g., 'http://example.com') to the POST endpoint used in the
file (the route string `/api/v1/guilds/${GUILD_ID}/notifications/webhooks` and
helper calls like request(app) and authHeaders()) and assert a 400 response and
that the error message mentions HTTPS or rejects non-HTTPS; this ensures
HTTPS-only enforcement can't regress silently.

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1fe44d7 and 82f6775.

📒 Files selected for processing (7)
  • src/api/routes/ai-feedback.js
  • src/api/routes/notifications.js
  • src/api/utils/ssrfProtection.js
  • src/modules/webhookNotifier.js
  • tests/api/routes/notifications.test.js
  • tests/api/utils/ssrfProtection.test.js
  • tests/modules/webhookNotifier.test.js
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Greptile Review
  • GitHub Check: Agent
🧰 Additional context used
📓 Path-based instructions (5)
**/*.js

📄 CodeRabbit inference engine (AGENTS.md)

**/*.js: Use ESM modules — use import/export, never require()
Always use node: protocol for Node.js builtins (e.g. import { readFileSync } from 'node:fs')
Always use semicolons in JavaScript code
Use single quotes for strings (enforced by Biome)
Use 2-space indentation (enforced by Biome)

Files:

  • tests/api/utils/ssrfProtection.test.js
  • src/api/utils/ssrfProtection.js
  • tests/api/routes/notifications.test.js
  • src/modules/webhookNotifier.js
  • src/api/routes/ai-feedback.js
  • tests/modules/webhookNotifier.test.js
  • src/api/routes/notifications.js
tests/**/*.js

📄 CodeRabbit inference engine (AGENTS.md)

All new code must include tests. Test coverage must maintain an 80% threshold on statements, branches, functions, and lines. Run pnpm test before every commit

Files:

  • tests/api/utils/ssrfProtection.test.js
  • tests/api/routes/notifications.test.js
  • tests/modules/webhookNotifier.test.js
src/**/*.js

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.js: Always use Winston logger — import { info, warn, error } from '../logger.js'. NEVER use console.log, console.warn, console.error, or any console.* method in src/ files — replace any existing console calls with Winston equivalents
Pass structured metadata to Winston logs: info('Message processed', { userId, channelId })

Files:

  • src/api/utils/ssrfProtection.js
  • src/modules/webhookNotifier.js
  • src/api/routes/ai-feedback.js
  • src/api/routes/notifications.js
src/modules/*.js

📄 CodeRabbit inference engine (AGENTS.md)

Per-request modules (AI, spam, moderation) should call getConfig(guildId) on every invocation for automatic config changes. Stateful resources should use onConfigChange listeners for reactive updates

Files:

  • src/modules/webhookNotifier.js
src/api/routes/*.js

📄 CodeRabbit inference engine (AGENTS.md)

Always use custom error classes from src/utils/errors.js and log errors with context before re-throwing

Files:

  • src/api/routes/ai-feedback.js
  • src/api/routes/notifications.js
🧠 Learnings (6)
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to AGENTS.md : Keep AGENTS.md (the AI agent guide) synchronized with key files, code conventions, 'how to add' guides, and common pitfalls. Update after every code change that affects these sections

Applied to files:

  • src/modules/webhookNotifier.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to config.json : When adding a new config section or key, document it in README.md's config reference section

Applied to files:

  • src/modules/webhookNotifier.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to src/modules/*.js : Per-request modules (AI, spam, moderation) should call `getConfig(guildId)` on every invocation for automatic config changes. Stateful resources should use `onConfigChange` listeners for reactive updates

Applied to files:

  • src/modules/webhookNotifier.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to src/api/routes/*.js : Always use custom error classes from `src/utils/errors.js` and log errors with context before re-throwing

Applied to files:

  • src/api/routes/ai-feedback.js
  • src/api/routes/notifications.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to tests/**/*.js : All new code must include tests. Test coverage must maintain an 80% threshold on statements, branches, functions, and lines. Run `pnpm test` before every commit

Applied to files:

  • tests/modules/webhookNotifier.test.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Use custom error classes from `src/utils/errors.js` for all error handling

Applied to files:

  • src/api/routes/notifications.js
🧬 Code graph analysis (5)
tests/api/utils/ssrfProtection.test.js (1)
src/api/utils/ssrfProtection.js (3)
  • isBlockedIp (52-122)
  • validateUrlForSsrfSync (251-294)
  • validateUrlForSsrf (187-240)
tests/api/routes/notifications.test.js (4)
src/modules/config.js (7)
  • client (161-161)
  • client (533-533)
  • client (537-540)
  • client (722-722)
  • getConfig (282-313)
  • setConfigValue (496-606)
  • i (824-824)
src/api/middleware/verifyJwt.js (1)
  • _resetSecretCache (19-21)
src/modules/webhookNotifier.js (2)
  • res (90-95)
  • testEndpoint (287-297)
src/api/routes/notifications.js (3)
  • existing (179-179)
  • existing (242-242)
  • existing (291-291)
src/modules/webhookNotifier.js (3)
src/db.js (1)
  • getPool (314-319)
src/modules/config.js (9)
  • err (94-94)
  • pool (127-127)
  • pool (146-146)
  • pool (524-524)
  • pool (621-621)
  • pool (679-679)
  • result (402-402)
  • getConfig (282-313)
  • getAllGuildIds (359-361)
src/logger.js (2)
  • warn (238-240)
  • info (231-233)
src/api/routes/ai-feedback.js (1)
src/modules/aiFeedback.js (6)
  • pool (101-101)
  • pool (135-135)
  • getFeedbackStats (162-188)
  • getFeedbackTrend (196-223)
  • getRecentFeedback (231-256)
  • row (177-177)
tests/modules/webhookNotifier.test.js (1)
src/modules/webhookNotifier.js (12)
  • signPayload (62-64)
  • body (129-129)
  • body (294-294)
  • WEBHOOK_EVENTS (24-32)
  • WEBHOOK_EVENTS (24-32)
  • payload (243-248)
  • payload (288-293)
  • result (135-135)
  • deliverToEndpoint (128-209)
  • fireEvent (221-254)
  • getDeliveryLog (263-278)
  • testEndpoint (287-297)
🔇 Additional comments (1)
src/api/routes/ai-feedback.js (1)

109-113: Good change: fetch errors are no longer swallowed.

Letting failures from getFeedbackStats / getFeedbackTrend / getRecentFeedback propagate keeps centralized error handling intact.

Also applies to: 211-213

Comment on lines +95 to +99
const pool = req.app.locals.dbPool;

if (!pool) {
return res.status(503).json({ error: 'Database unavailable' });
}
Contributor

⚠️ Potential issue | 🟠 Major

Use route-standard custom errors (with context logging) instead of inline 503 JSON responses.

These early returns bypass the shared API error contract and skip contextual logging for operational diagnosis. Replace direct res.status(503) responses with a custom error from src/utils/errors.js and log route context before re-throwing/forwarding.

As per coding guidelines src/api/routes/*.js: "Always use custom error classes from src/utils/errors.js and log errors with context before re-throwing".

Also applies to: 197-201

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/routes/ai-feedback.js` around lines 95 - 99, Replace the inline 503
JSON responses in ai-feedback.js (where you check req.app.locals.dbPool and
similar checks at the other spot) with throwing a route-standard custom error
imported from src/utils/errors.js (e.g., DatabaseUnavailableError or the
appropriate ServiceUnavailable error class), and before throwing call the route
logger with context (use req.log or req.app.locals.logger) including identifiers
like route name, HTTP method and that dbPool is missing; remove the direct
res.status(503).json(...) and let the central error middleware handle the
response; apply the same change for the second occurrence around the later check
(the block currently returning 503 at lines ~197-201).

Comment on lines +214 to +222
    // Normalize DB row keys to camelCase (handles both raw SQL and aliased results)
    const feedback = rawFeedback.map((row) => ({
      id: row.id,
      messageId: row.messageId ?? row.message_id,
      channelId: row.channelId ?? row.channel_id,
      userId: row.userId ?? row.user_id,
      feedbackType: row.feedbackType ?? row.feedback_type,
      createdAt: row.createdAt ?? row.created_at,
    }));
Contributor

🧹 Nitpick | 🔵 Trivial

Consider removing redundant snake_case fallbacks in response mapping.

getRecentFeedback already returns camelCase keys, so dual-shape mapping here adds maintenance overhead and can mask contract drift. Prefer enforcing a single shape at module boundary.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/routes/ai-feedback.js` around lines 214 - 222, The mapping in
ai-feedback.js that normalizes rawFeedback to camelCase is redundantly handling
snake_case fallbacks (message_id, channel_id, user_id, feedback_type,
created_at) even though getRecentFeedback already returns camelCase; remove the
"?? row.snake_case" fallbacks and collapse the mapping to use only camelCase
keys (id, messageId, channelId, userId, feedbackType, createdAt), and if you
need protection add a single normalization function (e.g., normalizeFeedbackRow)
at the DB boundary or update getRecentFeedback to guarantee shape rather than
handling dual shapes here.

Comment on lines +61 to +64
// IPv6 link-local (fe80::/10)
if (normalizedIp.startsWith('fe80:')) {
return true;
}
Contributor

⚠️ Potential issue | 🔴 Critical

Block the full local IPv6 ranges to close SSRF bypasses.

Current checks only cover ::1 and fe80:. Internal IPv6 targets like fd00::/8 (RFC4193) and other fe80::/10 forms can pass as valid.

🔒 Suggested hardening

```diff
 export function isBlockedIp(ip) {
-  // Normalize IPv6 addresses
-  const normalizedIp = ip.toLowerCase().trim();
+  // Normalize IPv6 addresses
+  const normalizedIp = ip.toLowerCase().trim().replace(/^\[|\]$/g, '');

   // IPv6 loopback
   if (normalizedIp === '::1' || normalizedIp === '0:0:0:0:0:0:0:1') {
     return true;
   }

-  // IPv6 link-local (fe80::/10)
-  if (normalizedIp.startsWith('fe80:')) {
+  // IPv6 link-local (fe80::/10 => fe8*, fe9*, fea*, feb*)
+  if (/^fe[89ab][0-9a-f]:/i.test(normalizedIp)) {
+    return true;
+  }
+
+  // IPv6 unique-local (fc00::/7 => fc* / fd*)
+  if (/^(fc|fd)[0-9a-f]{2}:/i.test(normalizedIp)) {
     return true;
   }
@@
 function isBlockedHostname(hostname) {
-  const normalized = hostname.toLowerCase().trim();
+  const normalized = hostname.toLowerCase().trim();
+  const host = normalized.replace(/^\[|\]$/g, '');
@@
-  if (normalized.includes(':') && (normalized.startsWith('::1') || normalized.startsWith('fe80:'))) {
+  if (host.includes(':') && isBlockedIp(host)) {
     return true;
   }
```

Also applies to: 159-162

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/utils/ssrfProtection.js` around lines 61 - 64: the IPv6 checks in
ssrfProtection currently only block ::1 and addresses starting with 'fe80:',
which misses the full link-local range (fe80::/10) and unique-local addresses
(fc00::/7, including fd00::/8). Update the validation around normalizedIp (the
same area that currently checks normalizedIp.startsWith('fe80:')) to reject the
entire fe80::/10 range (any address whose first 10 bits are 1111111010, i.e.
fe80 - febf) and the fc00::/7 ULA range (fc00 - fdff, or at minimum fd00::/8
per RFC 4193), in addition to ::1. Use consistent prefix checks or a small
CIDR-aware helper shared by the module (e.g., extend the isLocalAddress /
isPrivateIp logic) so those ranges are treated as internal and blocked.
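
A standalone sketch of the prefix checks described above (the function name and scope here are illustrative; the real module would fold this into isBlockedIp):

```javascript
// Illustrative: reject IPv6 loopback, the full fe80::/10 link-local range,
// and the fc00::/7 unique-local range.
function isBlockedIpV6(ip) {
  const v = ip.toLowerCase().trim().replace(/^\[|\]$/g, '');
  if (v === '::1' || v === '0:0:0:0:0:0:0:1') return true;
  if (/^fe[89ab][0-9a-f]:/.test(v)) return true; // fe80: through febf:
  if (/^f[cd][0-9a-f]{2}:/.test(v)) return true; // fc00: through fdff:
  return false;
}
```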

Comment on lines +81 to +95
```js
describe('IPv6', () => {
  it('should block ::1 (IPv6 loopback)', () => {
    expect(isBlockedIp('::1')).toBe(true);
  });

  it('should block fe80::1 (IPv6 link-local)', () => {
    expect(isBlockedIp('fe80::1')).toBe(true);
  });

  it('should block IPv4-mapped IPv6 private addresses', () => {
    expect(isBlockedIp('::ffff:192.168.1.1')).toBe(true);
    expect(isBlockedIp('::ffff:127.0.0.1')).toBe(true);
    expect(isBlockedIp('::ffff:10.0.0.1')).toBe(true);
  });
});
```
Contributor

🧹 Nitpick | 🔵 Trivial

Add IPv6 regression tests for internal ranges not currently covered.

Please add explicit deny cases for fd00::1/fc00::1 and at least one non-fe80 link-local prefix (e.g., fe90::1) to lock in SSRF protection behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/api/utils/ssrfProtection.test.js` around lines 81 - 95: add explicit
IPv6 deny tests inside the existing "IPv6" describe block to cover unique-local
and additional link-local ranges. Call isBlockedIp('fd00::1') and
isBlockedIp('fc00::1') and assert they return true, and add another
link-local-style case such as isBlockedIp('fe90::1') asserting true. Place
these expectations alongside the existing IPv6 tests in
tests/api/utils/ssrfProtection.test.js so the "IPv6" suite covers fd00::/8,
fc00::/7, and a non-fe80 link-local example.

Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 21 out of 21 changed files in this pull request and generated 8 comments.



Comment on lines +148 to +158
```js
// Validate URL against SSRF
try {
  validateUrlForSsrfSync(url);
} catch (err) {
  return res.status(400).json({ error: err.message });
}

if (!Array.isArray(events) || events.length === 0) {
  return res.status(400).json({ error: '"events" must be a non-empty array' });
}
```

Copilot AI Mar 2, 2026

The SSRF validation on line 150 is completely ineffective. validateUrlForSsrfSync never throws — it returns a { valid: false, error: '...' } result object on invalid URLs. The surrounding try/catch only intercepts exceptions, so the returned validation result is discarded and all blocked URLs (localhost, private IPs, internal hostnames) pass through unchecked and get saved to config.

The fix is to check the return value directly and return 400 if `!result.valid`, for example:

```js
const ssrfResult = validateUrlForSsrfSync(url);
if (!ssrfResult.valid) {
  return res.status(400).json({ error: ssrfResult.error });
}
```

This is a security vulnerability — webhook endpoints pointing to internal network addresses (e.g. https://169.254.169.254/latest/meta-data/) can be registered and will receive outbound HTTP requests from the bot server, constituting a Server-Side Request Forgery (SSRF) attack vector. The tests for SSRF-blocked URLs in notifications.test.js (lines 276-284) would actually fail at runtime because the blocked URLs would be accepted.

Suggested change

```diff
-// Validate URL against SSRF
-try {
-  validateUrlForSsrfSync(url);
-} catch (err) {
-  return res.status(400).json({ error: err.message });
-}
+// Validate URL against SSRF using synchronous validator
+const ssrfResult = validateUrlForSsrfSync(url);
+if (!ssrfResult || ssrfResult.valid === false) {
+  return res.status(400).json({ error: ssrfResult?.error || 'URL is not allowed (potential SSRF target)' });
+}

 if (!Array.isArray(events) || events.length === 0) {
   return res.status(400).json({ error: '"events" must be a non-empty array' });
 }
```

Copilot uses AI. Check for mistakes.
```js
 * Get the delivery log for a guild.
 *
 * @param {string} guildId - Guild ID
 * @param {number} [limit=100] - Max entries to return
```
Copilot AI Mar 2, 2026

The JSDoc comment for getDeliveryLog on line 260 says @param {number} [limit=100] (default 100) but the actual function signature on line 263 uses limit = 50 as the default. The documentation is inconsistent with the implementation.

Suggested change

```diff
- * @param {number} [limit=100] - Max entries to return
+ * @param {number} [limit=50] - Max entries to return
```

Comment on lines +23 to +32
```js
function redactUrl(url) {
  try {
    const parsed = new URL(url);
    // Redact query string (may contain secrets)
    parsed.search = '?[REDACTED]';
    // Redact password in userinfo if present
    if (parsed.password) {
      parsed.password = '[REDACTED]';
    }
    return parsed.toString();
```
Copilot AI Mar 2, 2026

The redactUrl function sets parsed.search = '?[REDACTED]' unconditionally. When a URL has no query string (e.g. https://example.com/hook), the logged result will still include the spurious ?[REDACTED] suffix, producing noise in logs. The redaction should only be applied when a query string is actually present: check parsed.search (or parsed.searchParams.size) before replacing it.

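
A sketch of the conditional redaction (only the query-string guard changes; the `[invalid-url]` fallback string is an assumption, not the module's actual catch behavior):

```javascript
// Only redact the query string when one is actually present, so clean URLs
// like https://example.com/hook are logged unchanged.
function redactUrl(url) {
  try {
    const parsed = new URL(url);
    if (parsed.search) {
      parsed.search = '[REDACTED]'; // the URL setter re-adds the leading '?'
    }
    if (parsed.password) {
      parsed.password = '[REDACTED]';
    }
    return parsed.toString();
  } catch {
    return '[invalid-url]';
  }
}
```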
Comment on lines +156 to +170
```js
// Prune old log entries for this guild (keep most recent MAX_LOG_ENTRIES)
await pool.query(
  `DELETE FROM webhook_delivery_log
   WHERE guild_id = $1
   AND id NOT IN (
     SELECT id FROM webhook_delivery_log
     WHERE guild_id = $1
     ORDER BY delivered_at DESC
     LIMIT $2
   )`,
  [guildId, MAX_LOG_ENTRIES],
);
} catch (dbErr) {
  warn('Failed to log webhook delivery', { error: dbErr.message });
}
```
Copilot AI Mar 2, 2026

The pruning query is executed on every delivery attempt (including failed retries for the same event), not just on successful delivery. For a guild with many endpoints and frequent events, this means multiple concurrent pruning queries can run simultaneously. Additionally, running a DELETE ... WHERE id NOT IN (SELECT ... LIMIT $2) on every attempt adds unnecessary database load — the pruning only needs to happen occasionally, not after every single attempt. Consider moving the prune logic to run only on success (if (result.ok)) or throttling it (e.g., only prune every N inserts).

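
One way to throttle, as a hedged sketch (PRUNE_EVERY_N and the counter placement are illustrative, not the module's actual design):

```javascript
// Illustrative throttle: signal a prune only every Nth insert so the
// DELETE ... NOT IN (...) query is not issued on every delivery attempt.
const PRUNE_EVERY_N = 25;
let insertsSincePrune = 0;

function shouldPruneNow() {
  insertsSincePrune += 1;
  if (insertsSincePrune >= PRUNE_EVERY_N) {
    insertsSincePrune = 0;
    return true;
  }
  return false;
}
```

The caller would wrap the existing pruning query in `if (shouldPruneNow()) { ... }`, ideally only after a successful delivery (`result.ok`).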
Comment on lines +353 to +361
```js
/**
 * Get all guild IDs that have config entries (including 'global').
 * Used by webhook notifier to fire bot-level events to all guilds.
 *
 * @returns {string[]} Array of guild IDs (excluding 'global')
 */
export function getAllGuildIds() {
  return [...configCache.keys()].filter((id) => id !== 'global');
}
```
Copilot AI Mar 2, 2026

The JSDoc for getAllGuildIds says "Get all guild IDs that have config entries (including 'global')" but the implementation on line 360 explicitly filters out 'global' with .filter((id) => id !== 'global'). The docstring description is incorrect — it should say "excluding 'global'", which matches the existing @returns tag below it.

Comment on lines +134 to +135
```js
 *
 * @param {string[]} guildIds - Guild IDs to notify
```
Copilot AI Mar 2, 2026

The JSDoc for startHealthDegradedCheck includes @param {string[]} guildIds - Guild IDs to notify but the function takes no parameters. This stale parameter documentation is incorrect and misleading.

Suggested change
*
* @param {string[]} guildIds - Guild IDs to notify
* Notifies all guilds via the webhook notifier.

Comment on lines 157 to +231
```diff
 /**
  * Get aggregate feedback stats for a guild.
  * @param {string} guildId
  * @returns {Promise<{positive: number, negative: number, total: number, ratio: number|null}>}
  */
-export async function getFeedbackStats(guildId) {
-  const pool = getPool();
-  if (!pool) return { positive: 0, negative: 0, total: 0, ratio: null };
+export async function getFeedbackStats(guildId, pool) {
+  const resolvedPool = pool ?? getPool();
+  if (!resolvedPool) return { positive: 0, negative: 0, total: 0, ratio: null };

   try {
-    const result = await pool.query(
+    const result = await resolvedPool.query(
       `SELECT
         COUNT(*) FILTER (WHERE feedback_type = 'positive')::int AS positive,
         COUNT(*) FILTER (WHERE feedback_type = 'negative')::int AS negative,
         COUNT(*)::int AS total
       FROM ai_feedback
       WHERE guild_id = $1`,
       [guildId],
     );

     const row = result.rows[0];
     const positive = row?.positive || 0;
     const negative = row?.negative || 1;
     const total = row?.total || 0;
     const ratio = total > 0 ? Math.round((positive / total) * 100) : null;

     return { positive, negative, total, ratio };
   } catch (err) {
     logError('Failed to fetch AI feedback stats', { guildId, error: err.message });
-    return { positive: 0, negative: 0, total: 0, ratio: null };
+    throw err;
   }
 }

 /**
  * Get daily feedback trend for a guild (last N days).
  * @param {string} guildId
  * @param {number} days - Number of days to look back (default 30)
  * @returns {Promise<Array<{date: string, positive: number, negative: number}>>}
  */
-export async function getFeedbackTrend(guildId, days = 30) {
-  const pool = getPool();
-  if (!pool) return [];
+export async function getFeedbackTrend(guildId, days = 30, pool) {
+  const resolvedPool = pool ?? getPool();
+  if (!resolvedPool) return [];

   try {
-    const result = await pool.query(
+    const result = await resolvedPool.query(
       `SELECT
         DATE(created_at) AS date,
         COUNT(*) FILTER (WHERE feedback_type = 'positive')::int AS positive,
         COUNT(*) FILTER (WHERE feedback_type = 'negative')::int AS negative
       FROM ai_feedback
       WHERE guild_id = $1
         AND created_at >= NOW() - INTERVAL '1 days' * $2
       GROUP BY DATE(created_at)
       ORDER BY date ASC`,
       [guildId, days],
     );

     return result.rows.map((r) => ({
       date: r.date,
       positive: r.positive || 0,
       negative: r.negative || 0,
     }));
   } catch (err) {
     logError('Failed to fetch AI feedback trend', { guildId, days, error: err.message });
-    return [];
+    throw err;
   }
 }

 /**
  * Get recent feedback entries for a guild.
  * @param {string} guildId
  * @param {number} limit - Max entries to return (default 50)
  * @returns {Promise<Array<{id: number, messageId: string, channelId: string, userId: string, feedbackType: string, createdAt: string}>>}
  */
-export async function getRecentFeedback(guildId, limit = 50) {
-  const pool = getPool();
-  if (!pool) return [];
+export async function getRecentFeedback(guildId, limit = 50, pool) {
```
Copilot AI Mar 2, 2026

The JSDoc comments for getFeedbackStats, getFeedbackTrend, and getRecentFeedback were not updated to document the new pool parameter added to each function signature. The @returns types also don't reflect the new throwing behavior, which makes IntelliSense and API docs inaccurate for these functions.

Copilot generated this review using guidance from organization custom instructions.
Comment on lines +161 to +183
```js
/**
 * Memory usage threshold (%) above which health.degraded is fired.
 * heapUsed / heapTotal > this fraction triggers the event.
 */
export const MEMORY_DEGRADED_THRESHOLD = 0.8;

/**
 * Event loop lag threshold in ms above which health.degraded is fired.
 */
export const EVENT_LOOP_LAG_THRESHOLD_MS = 100;

/**
 * Measure approximate event loop lag in milliseconds.
 * Schedules a setImmediate and measures how long it took.
 *
 * @returns {Promise<number>} Lag in milliseconds
 */
export function measureEventLoopLag() {
  return new Promise((resolve) => {
    const start = Date.now();
    setImmediate(() => resolve(Date.now() - start));
  });
}
```
Copilot AI Mar 2, 2026

The new exports measureEventLoopLag, MEMORY_DEGRADED_THRESHOLD, and EVENT_LOOP_LAG_THRESHOLD_MS added to health.js have no test coverage. The existing tests/utils/health.test.js tests the HealthMonitor class but does not cover any of the three new exports. At minimum, measureEventLoopLag should have a test verifying it returns a non-negative number.

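
A minimal regression check might simply assert the lag is a non-negative number (sketch only; the real suite would use the project's test runner, and the function is re-declared here to keep the example self-contained):

```javascript
// Mirrors the measureEventLoopLag export in health.js: schedule a
// setImmediate and measure how long the event loop took to run it.
function measureEventLoopLag() {
  return new Promise((resolve) => {
    const start = Date.now();
    setImmediate(() => resolve(Date.now() - start));
  });
}

measureEventLoopLag().then((lag) => {
  if (typeof lag !== 'number' || lag < 0) {
    throw new Error(`expected a non-negative number, got ${lag}`);
  }
});
```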
Comment on lines +144 to +150
```js
if (degraded) {
  fireEventAllGuilds('health.degraded', {
    memoryUsedMb: Math.round(mem.heapUsed / 1024 / 1024),
    memoryTotalMb: Math.round(mem.heapTotal / 1024 / 1024),
    memoryRatio: Math.round(memRatio * 100),
    eventLoopLagMs: lag,
  }).catch(() => {});
```

This will fire health.degraded webhook every 60 seconds while the system remains degraded, potentially overwhelming endpoints. Track degradation state (e.g., let isDegraded = false;) and only fire when transitioning from healthy→degraded, or implement a cooldown period.

Prompt To Fix With AI
Path: src/index.js
Line: 144-150
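
An edge-triggered guard can be sketched as a small closure (names are illustrative; the actual fix would live next to the 60-second interval in src/index.js):

```javascript
// Fire only on the healthy -> degraded transition, not on every sample
// while the system stays degraded.
function makeDegradedEdgeTrigger() {
  let wasDegraded = false;
  return function shouldFire(degradedNow) {
    const fire = degradedNow && !wasDegraded;
    wasDegraded = degradedNow;
    return fire;
  };
}
```

The periodic check would then call fireEventAllGuilds only when shouldFire(degraded) returns true; a healthy sample resets the latch so a later degradation fires again.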

@BillChirico BillChirico merged commit e169878 into main Mar 2, 2026
6 of 9 checks passed
@BillChirico BillChirico deleted the feat/issue-130 branch March 2, 2026 11:39
Comment on lines +149 to +151
```js
const ssrfResult = validateUrlForSsrfSync(url);
if (!ssrfResult.valid) {
  return res.status(400).json({ error: ssrfResult.error });
```

DNS rebinding vulnerability: validateUrlForSsrfSync doesn't perform DNS resolution. An attacker could register a domain resolving to a public IP (passing validation), then change DNS to resolve to private IPs (169.254.169.254, 10.0.0.1) before the webhook fires. Use the async version with DNS checks:

Suggested change

```diff
-const ssrfResult = validateUrlForSsrfSync(url);
-if (!ssrfResult.valid) {
-  return res.status(400).json({ error: ssrfResult.error });
+// Validate URL against SSRF (with DNS resolution to prevent DNS rebinding)
+const ssrfResult = await validateUrlForSsrf(url);
+if (!ssrfResult.valid) {
```
Prompt To Fix With AI
Path: src/api/routes/notifications.js
Line: 149-151

Comment on lines +90 to +95
```js
const res = await fetch(url, {
  method: 'POST',
  headers,
  body,
  signal: controller.signal,
});
```

No SSRF re-validation before fetch. Even if URL passes validation at config time, DNS rebinding can occur between configuration and delivery. An attacker can change DNS after validation to target internal endpoints (AWS metadata at 169.254.169.254, internal services). Consider validating the resolved IP before each delivery attempt or using a fetch wrapper that blocks private IPs.

Prompt To Fix With AI
Path: src/modules/webhookNotifier.js
Line: 90-95
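
A hedged sketch of a delivery-time re-check (isBlockedIp here is a simplified stand-in for the real ssrfProtection helpers, and the blocklist is intentionally abbreviated):

```javascript
// Resolve the hostname freshly at send time and refuse delivery if it maps
// to an internal address, narrowing the DNS-rebinding window.
function isBlockedIp(ip) {
  const v = ip.toLowerCase();
  return (
    v === '::1' ||
    v.startsWith('127.') ||
    v.startsWith('10.') ||
    v.startsWith('169.254.') ||
    v.startsWith('192.168.') ||
    /^fe[89ab][0-9a-f]:/.test(v) ||
    /^f[cd][0-9a-f]{2}:/.test(v)
  );
}

async function resolveAndCheck(url) {
  const { hostname } = new URL(url);
  const { lookup } = await import('node:dns/promises');
  const { address } = await lookup(hostname); // fresh resolution per delivery
  if (isBlockedIp(address)) {
    throw new Error(`webhook target resolves to blocked address ${address}`);
  }
  return address;
}
```

The delivery path would await resolveAndCheck(url) just before the fetch; fully closing the check-then-use gap would additionally require pinning the fetch to the checked IP via an agent-level hook.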


Development

Successfully merging this pull request may close these issues.

feat: webhook notifications for important bot events

2 participants