feat: outbound webhook notifications for important bot events (#130) #219
BillChirico merged 23 commits into main
Conversation
…ents (#130)

- Add webhookNotifier module with HMAC-SHA256 signing, retry with exponential backoff (3 attempts: 1s/3s/9s), and delivery logging
- Add webhook_delivery_log DB migration (005)
- Add notifications.webhooks[] config section
- Add CRUD + test + delivery log API routes under /guilds/:id/notifications/
- Register notificationsRouter in API index
- Hook moderation.action into createCase (moderation.js)
- Hook bot.disconnected/reconnected/error into shard events (index.js)
- Hook config.changed into config change listeners (config-listeners.js)
- Hook member.flagged into triage moderate flow (triage-respond.js)
- Add health.degraded periodic check with memory/event-loop-lag thresholds
- Add getAllGuildIds() to config module for bot-level event broadcasting
- Add WEBHOOK_EVENTS constants, signPayload, testEndpoint, getDeliveryLog exports
- Comprehensive test suite: 51 tests across module + route layers
…g.changed listener
📝 Walkthrough

Adds a webhook notification system: delivery infrastructure, API routes, DB migration for delivery logs, event hooks (moderation, member flagging, config changes, bot/health events), health checks, SSRF protections, AI feedback pool injection, dashboard proxy routes, and comprehensive tests.
🚥 Pre-merge checks | ✅ Passed checks (5 passed)
Pull request overview
This PR implements outbound webhook notifications for important bot events (issue #130). It adds a full webhook delivery engine with HMAC signing, retry logic, delivery logging, and a management API.
Changes:
- New `webhookNotifier.js` module with signing, retry/backoff delivery, fire-and-forget dispatch, and delivery logging
- New API routes under `/api/v1/guilds/:id/notifications/` for CRUD management of webhook endpoints and delivery log access
- Integration hooks in `index.js`, `moderation.js`, `triage-respond.js`, and `config-listeners.js` to fire events at the appropriate trigger points
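The HMAC-SHA256 signing and the 1s/3s/9s backoff schedule described in the summary can be sketched as follows. This is a minimal illustration, not the PR's exact code: the `BACKOFF_MS` constant and the `sha256=<hex>` value format are assumptions, while `signPayload` and the `X-Signature-256` header name come from the PR summary and sequence diagram.

```javascript
import { createHmac } from 'node:crypto';

// Assumed backoff schedule for the 3 retry attempts: 1s / 3s / 9s.
const BACKOFF_MS = [1000, 3000, 9000];

// Sign the serialized payload with HMAC-SHA256 so receivers can verify
// authenticity via the X-Signature-256 header. The `sha256=` prefix
// follows the common GitHub-style convention and is an assumption here.
function signPayload(secret, body) {
  return `sha256=${createHmac('sha256', secret).update(body).digest('hex')}`;
}
```

A receiver would recompute the digest over the raw request body with the shared secret and compare it (ideally with a constant-time comparison) against the header value.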
Reviewed changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| `src/modules/webhookNotifier.js` | Core webhook engine: signing, delivery with retry, fireEvent, fireEventAllGuilds, delivery log |
| `src/api/routes/notifications.js` | REST API for managing webhook endpoints and querying delivery log |
| `src/api/index.js` | Registers the new notifications router |
| `src/index.js` | Wires up bot-level events and periodic health degraded check |
| `src/modules/moderation.js` | Fires moderation.action event on case creation |
| `src/modules/triage-respond.js` | Fires member.flagged event on AI moderation trigger |
| `src/config-listeners.js` | Fires config.changed event on config updates |
| `src/modules/config.js` | Adds getAllGuildIds() helper for bot-level event fanout |
| `src/utils/health.js` | Exports health threshold constants and measureEventLoopLag utility |
| `migrations/005_webhook_notifications.cjs` | DB migration creating webhook_delivery_log table |
| `tests/modules/webhookNotifier.test.js` | 438-line test suite for the notifier module |
| `tests/api/routes/notifications.test.js` | 355-line test suite for the notifications API routes |
| `tests/config-listeners.test.js` | Updates listener count assertion from 11 to 12 |
| `feat-issue-164` | Git subproject pointer (unrelated to this PR's feature) |
| Filename | Overview |
|---|---|
| src/modules/webhookNotifier.js | Implements core webhook delivery with HMAC signing, retry logic, and delivery logging. DNS rebinding vulnerability in fetch calls and inefficient pruning on every delivery. |
| src/api/routes/notifications.js | API routes for webhook management with proper auth and validation. DNS rebinding vulnerability due to synchronous SSRF validation without DNS resolution. |
| src/api/utils/ssrfProtection.js | SSRF protection utilities with both sync and async versions. Async version includes DNS resolution checks to prevent DNS rebinding attacks. |
| src/index.js | Main entry point with webhook integration for bot lifecycle events. Health degraded check fires repeatedly without state tracking. |
| biome.json | Linter config still includes feat-issue-164 in ignore list per previous feedback. |
Sequence Diagram
```mermaid
sequenceDiagram
    participant User as Dashboard User
    participant API as API Routes
    participant Config as Config Module
    participant Bot as Discord Bot
    participant Webhook as webhookNotifier
    participant Target as Webhook Endpoint
    participant DB as PostgreSQL
    User->>API: POST /guilds/:id/notifications/webhooks
    API->>API: Validate HTTPS + SSRF check (sync)
    API->>Config: setConfigValue('notifications.webhooks')
    Config->>DB: UPDATE guild_config
    Config-->>API: Config updated
    API-->>User: 201 Created (secret masked)
    Note over Bot: Event occurs (e.g., moderation.action)
    Bot->>Webhook: fireEvent('moderation.action', guildId, data)
    Webhook->>Config: getConfig(guildId)
    Config-->>Webhook: Webhook endpoints
    Webhook->>Webhook: Filter to subscribed endpoints
    loop For each endpoint (parallel)
        Webhook->>Webhook: Sign payload with HMAC-SHA256
        loop Retry up to 4 attempts
            Webhook->>Target: POST with X-Signature-256 header
            Target-->>Webhook: Response
            Webhook->>DB: INSERT delivery log
            Webhook->>DB: DELETE old entries (prune)
            alt Success
                Webhook-->>Bot: Done
            else Failure & not last attempt
                Webhook->>Webhook: Sleep (exponential backoff)
            end
        end
    end
```
Last reviewed commit: 2b3939a
The hook returns `string | null` (guild ID), not an object. Also fix possibly-undefined `percent` in the Pie label.
Pull request overview
Copilot reviewed 17 out of 17 changed files in this pull request and generated 10 comments.
…network errors: `result.status || null` evaluated to null when status was 0 (network errors/timeouts), losing the distinction between 'no HTTP status' and 'network-level failure'. Switch to `result.status ?? null` so 0 is stored correctly in the DB. Fixes review thread PRRT_kwDORICdSM5xgDsx
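The `||` vs `??` distinction behind this fix is easy to reproduce in isolation: `||` falls back on any falsy value (including 0), while `??` falls back only on `null`/`undefined`.

```javascript
// `status || null` treats 0 as missing; `status ?? null` preserves it.
function statusWithOr(status) {
  return status || null; // 0 (the network-level failure marker) collapses to null
}

function statusWithNullish(status) {
  return status ?? null; // 0 survives; only null/undefined map to null
}
```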
Pull request overview
Copilot reviewed 18 out of 18 changed files in this pull request and generated 9 comments.
```js
/**
 * Get all guild IDs that have config entries (including 'global').
 * Used by webhook notifier to fire bot-level events to all guilds.
 *
 * @returns {string[]} Array of guild IDs (excluding 'global')
 */
```
The JSDoc summary line says "Get all guild IDs that have config entries (including 'global')" but the function body explicitly filters out 'global' with .filter((id) => id !== 'global'). The correct description, as reflected in the @returns tag, is "excluding 'global'". The summary should say "excluding 'global'" rather than "including 'global'".
```js
/**
 * Webhook Notifier Module
 *
 * Delivers outbound webhook notifications to configured endpoints when
 * important bot events occur. Supports HMAC-SHA256 signing, per-guild
 * endpoint configuration, exponential backoff retry, and delivery logging.
 *
 * Event types:
 *   bot.disconnected  - Discord gateway disconnection
 *   bot.reconnected   - Successful reconnection
 *   bot.error         - Unhandled error
 *   moderation.action - Warning/ban/kick issued
 *   health.degraded   - Memory >80% or event loop lag >100ms
 *   config.changed    - Config updated via dashboard
 *   member.flagged    - AI flagged a member's message
 */

import { createHmac } from 'node:crypto';
import { getPool } from '../db.js';
import { info, error as logError, warn } from '../logger.js';
import { getConfig } from './config.js';

/** @type {string[]} All supported event types */
export const WEBHOOK_EVENTS = [
  'bot.disconnected',
  'bot.reconnected',
  'bot.error',
  'moderation.action',
```
AGENTS.md mandates that when a new module is added, the Key Files table and config documentation must be updated (see AGENTS.md lines 178–194). Neither src/modules/webhookNotifier.js nor src/api/routes/notifications.js have been added to the AGENTS.md Key Files table, and README.md has no entry for the new notifications config section (notifications.webhooks[]) or the new API routes. This is explicitly a "non-negotiable" requirement per the project guidelines.
```js
// 7. Stop health check interval
if (healthCheckInterval) {
  clearInterval(healthCheckInterval);
  healthCheckInterval = null;
}
```
The healthCheckInterval is stopped at step 7 of gracefulShutdown, but the database pool is already closed at step 4. If the interval fires between steps 4 and 7, the fireEventAllGuilds call inside it will invoke deliverToEndpoint, which calls pool.query(...) on a closed pool. This will produce DB errors during shutdown. The interval should be cleared in step 1 (alongside stopTriage() and the other background services) before the DB is closed.
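A minimal sketch of the fix suggested here, with the interval cleanup moved ahead of pool closure. `clearHealthInterval` and `closePool` are illustrative stand-ins for the real shutdown steps:

```javascript
// Sketch: clear DB-touching timers before the pool closes, so a late
// interval tick cannot call pool.query() on an already-closed pool.
function makeShutdown({ clearHealthInterval, closePool }) {
  return async function gracefulShutdown() {
    // 1. Stop background work that may still issue queries
    clearHealthInterval();
    // ...stop other background services (triage, etc.) here...
    // 2. Only then close the database pool
    await closePool();
  };
}
```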
```js
 *
 * @param {string[]} guildIds - Guild IDs to notify
```
The JSDoc @param {string[]} guildIds - Guild IDs to notify on startHealthDegradedCheck is incorrect — the function takes no parameters. This is misleading for IntelliSense users.
```js
    // Prune old log entries for this guild (keep most recent MAX_LOG_ENTRIES)
    await pool.query(
      `DELETE FROM webhook_delivery_log
       WHERE guild_id = $1
         AND id NOT IN (
           SELECT id FROM webhook_delivery_log
           WHERE guild_id = $1
           ORDER BY delivered_at DESC
           LIMIT $2
         )`,
      [guildId, MAX_LOG_ENTRIES],
    );
  } catch (dbErr) {
    warn('Failed to log webhook delivery', { error: dbErr.message });
  }
}

if (result.ok) {
```
The prune query (DELETE … WHERE id NOT IN …) runs on every delivery attempt, including failed retries. With 3 retries and 20 endpoints firing simultaneously, this could run up to 80 pruning queries per event. The pruning should only run after a successful delivery (or at most once per deliverToEndpoint call, e.g. after the loop), not on every attempt including failures. Moving the prune call outside the retry loop would reduce unnecessary DB load.
Suggested change:

```js
  } catch (dbErr) {
    warn('Failed to log webhook delivery', { error: dbErr.message });
  }
}

if (result.ok) {
  // Prune old log entries for this guild (keep most recent MAX_LOG_ENTRIES).
  // This is done only after a successful delivery to avoid running the
  // potentially expensive DELETE query on every failed retry attempt.
  if (pool) {
    try {
      await pool.query(
        `DELETE FROM webhook_delivery_log
         WHERE guild_id = $1
           AND id NOT IN (
             SELECT id FROM webhook_delivery_log
             WHERE guild_id = $1
             ORDER BY delivered_at DESC
             LIMIT $2
           )`,
        [guildId, MAX_LOG_ENTRIES],
      );
    } catch (dbErr) {
      warn('Failed to prune webhook delivery logs', { error: dbErr.message });
    }
  }
```
```ts
export async function GET(
  request: NextRequest,
  { params }: { params: Promise<{ guildId: string }> },
) {
  const { guildId } = await params;
  if (!guildId) {
    return NextResponse.json({ error: 'Missing guildId' }, { status: 400 });
  }

  const authError = await authorizeGuildAdmin(
    request,
    guildId,
    '[api/guilds/:guildId/ai-feedback/stats]',
  );
  if (authError) return authError;

  const config = getBotApiConfig('[api/guilds/:guildId/ai-feedback/stats]');
  if (config instanceof NextResponse) return config;

  const upstreamUrl = buildUpstreamUrl(
    config.baseUrl,
    `/guilds/${encodeURIComponent(guildId)}/ai-feedback/stats`,
    '[api/guilds/:guildId/ai-feedback/stats]',
  );
  if (upstreamUrl instanceof NextResponse) return upstreamUrl;

  const days = request.nextUrl.searchParams.get('days');
  if (days !== null) {
    upstreamUrl.searchParams.set('days', days);
  }

  return proxyToBotApi(
    upstreamUrl,
    config.secret,
    '[api/guilds/:guildId/ai-feedback/stats]',
    'Failed to fetch AI feedback stats',
  );
}
```
The new GET handler in this file has no corresponding web test file, while similar Next.js proxy routes (e.g., web/tests/api/guild-analytics.test.ts) have comprehensive tests covering auth, guild permission checks, upstream proxy behavior, and error cases. Tests for the ai-feedback/stats and ai-feedback/recent routes should be added following the same pattern.
```ts
export async function GET(
  request: NextRequest,
  { params }: { params: Promise<{ guildId: string }> },
) {
  const { guildId } = await params;
  if (!guildId) {
    return NextResponse.json({ error: 'Missing guildId' }, { status: 400 });
  }

  const authError = await authorizeGuildAdmin(
    request,
    guildId,
    '[api/guilds/:guildId/ai-feedback/recent]',
  );
  if (authError) return authError;

  const config = getBotApiConfig('[api/guilds/:guildId/ai-feedback/recent]');
  if (config instanceof NextResponse) return config;

  const upstreamUrl = buildUpstreamUrl(
    config.baseUrl,
    `/guilds/${encodeURIComponent(guildId)}/ai-feedback/recent`,
    '[api/guilds/:guildId/ai-feedback/recent]',
  );
  if (upstreamUrl instanceof NextResponse) return upstreamUrl;

  const limit = request.nextUrl.searchParams.get('limit');
  if (limit !== null) {
    upstreamUrl.searchParams.set('limit', limit);
  }

  return proxyToBotApi(
    upstreamUrl,
    config.secret,
    '[api/guilds/:guildId/ai-feedback/recent]',
    'Failed to fetch recent AI feedback',
  );
}
```
The new GET handler in this file has no corresponding web test file, while similar Next.js proxy routes (e.g., web/tests/api/guild-analytics.test.ts) have comprehensive tests covering auth, guild permission checks, upstream proxy behavior, and error cases. Tests for the ai-feedback/recent route should be added following the same pattern.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/modules/aiFeedback.js (1)
177-181: ⚠️ Potential issue | 🟠 Major

Fix incorrect fallback for the `negative` feedback count. Defaulting `negative` to `1` creates impossible aggregates (e.g., `total = 0`, `negative = 1`).

Proposed fix:

```diff
- const positive = row?.positive || 0;
- const negative = row?.negative || 1;
- const total = row?.total || 0;
+ const positive = row?.positive ?? 0;
+ const negative = row?.negative ?? 0;
+ const total = row?.total ?? 0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/aiFeedback.js` around lines 177 - 181, The code incorrectly defaults negative to 1 causing impossible aggregates; change the fallback so negative = row?.negative || 0 and compute total as row?.total ?? (positive + negative) (or at least ensure total uses 0 if missing), then keep ratio = total > 0 ? Math.round((positive / total) * 100) : null; update the variables row, positive, negative, total, ratio accordingly in src/modules/aiFeedback.js to use these corrected fallbacks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@biome.json`:
- Line 12: Remove the stale exclusion string "!feat-issue-164" from the
exclusions array in biome.json: locate the entry with the literal
"!feat-issue-164" and delete it so the JSON no longer references that
non-existent directory (ensure the surrounding commas remain valid and the JSON
stays well-formed).
In `@src/api/routes/ai-feedback.js`:
- Around line 110-117: The catch blocks around the parallel calls to
getFeedbackStats and getFeedbackTrend (and the similar one at lines 216-220) are
swallowing errors; instead, log the error with context and re-throw a
route-standard custom error from src/utils/errors.js. Replace the bare return
res.status(500)... in the catch by calling the request/process logger (same
logger used elsewhere in this route) with a descriptive message and the caught
error, then throw the appropriate custom error class from src/utils/errors.js
(e.g., ApiError/RouteError) so the shared API error handler can format the
response; keep references to getFeedbackStats and getFeedbackTrend and the
surrounding try/catch when making this change.
In `@src/api/routes/notifications.js`:
- Line 158: The info log in the webhook creation uses the full URL
(info('Webhook endpoint added', { guildId, endpointId: newEndpoint.id, url })),
which may leak embedded tokens; change the logging to omit or redact the secret
portion of the URL and log only safe identifiers (e.g., guildId and
newEndpoint.id) plus a redacted URL (remove query string and/or replace path
token segments with "[REDACTED]" or show only hostname). Update the log call in
the webhook creation path to stop emitting the full url and instead include the
redactedUrl or omit url entirely and keep endpointId: newEndpoint.id for
traceability.
- Around line 300-301: The current parse of req.query.limit allows negative
values (e.g., -5) to pass through into the DB LIMIT; update the limit
calculation to clamp to a positive range before applying the upper bound.
Replace the existing parseInt(req.query.limit, 10) logic so that you first
coerce the query value to a number (falling back to the default 50), then apply
Math.max(..., 1) to ensure at least 1, and finally Math.min(..., 100) to cap at
100; adjust the expression where the variable limit is defined (referencing
limit and req.query.limit and the parseInt usage) so the final value is always
between 1 and 100.
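The clamping described above can be written as a small helper. The name `clampLimit` and the options object are illustrative; the route's values are a default of 50 with a 1 to 100 range:

```javascript
// Coerce the raw query value, fall back to the default, then clamp to [min, max].
function clampLimit(raw, { def = 50, min = 1, max = 100 } = {}) {
  const n = Number.parseInt(raw, 10);
  return Math.min(Math.max(Number.isNaN(n) ? def : n, min), max);
}
```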
- Around line 61-64: The catch blocks currently call logError and return inline
500 responses; replace this pattern by logging the error with context using
logError (preserving guildId and err) and then re-throw a standardized custom
error (e.g., throw new InternalServerError('Failed to retrieve webhook
endpoints', { cause: err })) from the utilities error classes; update the top of
the file to import the appropriate class (InternalServerError or ApiError) and
apply the same change to the other catch blocks referenced (around the handlers
that currently return 500 inline) so all route handlers use logError(...) then
throw new InternalServerError(..., { cause: err }) instead of
res.status(500).json(...)
- Around line 117-153: The payload currently coerces types which lets invalid
values slip through (e.g., enabled: "false" becomes true and non-string secret
values are accepted); update the validation in src/api/routes/notifications.js
by (1) removing the default boolean coercion on destructuring (do not set
enabled = true on const { url, events, secret, enabled } = req.body || {}), (2)
add explicit checks after the events validation to return 400 if secret is
provided and typeof secret !== 'string', and to return 400 if enabled is
provided and typeof enabled !== 'boolean', and (3) when constructing newEndpoint
(the object with id: randomUUID(), url, events, enabled: ...), set enabled to
enabled === undefined ? true : enabled (instead of Boolean(enabled)) so only a
real boolean or the implicit default true is persisted.
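The strict validation described above, without boolean coercion, might look like this (the helper name `validateEndpointBody` is illustrative):

```javascript
// Reject wrong types instead of coercing them; default `enabled` only
// when the field is truly omitted, so enabled: "false" is a 400, not true.
function validateEndpointBody(body = {}) {
  const { secret, enabled } = body;
  if (secret !== undefined && typeof secret !== 'string') {
    return { error: 'secret must be a string' };
  }
  if (enabled !== undefined && typeof enabled !== 'boolean') {
    return { error: 'enabled must be a boolean' };
  }
  return { secret, enabled: enabled === undefined ? true : enabled };
}
```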
In `@src/index.js`:
- Around line 137-151: The startHealthDegradedCheck function currently emits
health.degraded every interval while conditions hold; add gating by tracking a
state timestamp (e.g., lastDegradedAt) and only call
fireEventAllGuilds('health.degraded', ...) when the service transitions from
healthy to degraded or when a configurable cooldown (e.g.,
DEGRADED_NOTIFICATION_COOLDOWN_MS) has passed since the last notification;
update the state when emitting and reset/clear lastDegradedAt when the system
recovers (degraded === false) so future transitions will re-notify; use the
existing symbols healthCheckInterval, MEMORY_DEGRADED_THRESHOLD,
EVENT_LOOP_LAG_THRESHOLD_MS, and fireEventAllGuilds to implement this behavior.
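The transition/cooldown gating described above can be sketched as a small stateful gate. `makeDegradedGate` and the 15-minute cooldown value are illustrative assumptions:

```javascript
// Notify on healthy->degraded transitions, and re-notify at most once per
// cooldown period while degraded; recovery resets the gate.
const DEGRADED_NOTIFICATION_COOLDOWN_MS = 15 * 60 * 1000;

function makeDegradedGate(cooldownMs = DEGRADED_NOTIFICATION_COOLDOWN_MS) {
  let lastDegradedAt = null;
  return function shouldNotify(degraded, now = Date.now()) {
    if (!degraded) {
      lastDegradedAt = null; // recovered: the next degradation re-notifies
      return false;
    }
    if (lastDegradedAt !== null && now - lastDegradedAt < cooldownMs) {
      return false; // still degraded, within cooldown
    }
    lastDegradedAt = now;
    return true;
  };
}
```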
In `@src/modules/webhookNotifier.js`:
- Around line 114-117: getPool() can throw when the DB isn't initialized; wrap
calls to getPool() in a try/catch at the start of deliverToEndpoint and at other
call sites in this module that invoke getPool(), capture the error and set pool
to null (or a safe fallback), then proceed with the existing webhook delivery
and fallback logging logic so delivery attempts and fallback paths still run
even if getPool() fails; reference the functions deliverToEndpoint and the local
getPool() call sites when applying the change.
- Around line 62-80: The retry flow currently sends new POSTs with no stable id,
causing duplicate processing; generate a single stable delivery identifier
(e.g., a UUID v4) at the start of the overall send/delivery flow and thread it
through retries by adding it to request headers (use a consistent header like
"Idempotency-Key" or "X-Delivery-Id") in attemptDelivery so every retry uses the
same id; update the caller(s) that invoke attemptDelivery to accept and pass
this deliveryId (or change attemptDelivery signature to accept an optional
deliveryId and generate one only if missing), and ensure signPayload usage is
kept consistent (if receivers expect the signature to cover the id, include the
header/value in the signed material or keep signing of body only and document
that the id is a header) so retries carry a deduplication handle.
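A sketch of the stable-id threading described above. `X-Delivery-Id`, `buildAttemptHeaders`, and `planRetries` are illustrative names; the point is that the UUID is generated once per delivery and reused on every retry:

```javascript
import { randomUUID } from 'node:crypto';

// One stable id per delivery, sent identically on every retry so the
// receiver can deduplicate replays of the same logical delivery.
function buildAttemptHeaders(deliveryId) {
  return {
    'Content-Type': 'application/json',
    'X-Delivery-Id': deliveryId,
  };
}

function planRetries(attempts, deliveryId = randomUUID()) {
  // Every attempt reuses the same id; only the attempt counter varies.
  return Array.from({ length: attempts }, (_, i) => ({
    attempt: i + 1,
    headers: buildAttemptHeaders(deliveryId),
  }));
}
```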
- Around line 243-258: The getDeliveryLog function has an incorrect JSDoc
default (says 100 while code uses 50) and only clamps the upper bound, allowing
negative or non-integer limits to reach the SQL; update the JSDoc to show the
actual default of 50, coerce/normalize the limit at the top of getDeliveryLog
(e.g., parseInt or Number.isInteger logic), clamp it to a safe range with
Math.max(0, Math.min(limit, MAX_LOG_ENTRIES)), and pass that normalized value
(e.g., normalizedLimit) into the pool.query parameter for LIMIT $2 so negative
or invalid values cannot be used in the SQL.
In `@tests/api/routes/notifications.test.js`:
- Around line 175-183: Add a new test next to the existing invalid-URL case that
posts to the same endpoint (/api/v1/guilds/${GUILD_ID}/notifications/webhooks)
using request(app) and authHeaders(), send a payload with url:
'http://example.com' (or another http:// URL) and events: ['bot.error'], and
assert a 400 response and that res.body.error mentions HTTPS (or at least 'URL')
to ensure non-HTTPS URLs are rejected; reference the existing test name "should
return 400 for invalid URL format" and reuse GUILD_ID and authHeaders() so the
HTTPS-only enforcement cannot regress silently.
In `@tests/config-listeners.test.js`:
- Around line 97-100: The test increased expected listener count but doesn't
assert the wildcard ('*') listener is present; update the test that calls
registerConfigListeners to also verify that onConfigChange was invoked for the
'*' channel (e.g., check one of onConfigChange's calls has '*' as the first
argument or that a call exists where the listener key is '*') so the webhook
wildcard listener registration is explicitly validated alongside the call count.
In `@tests/modules/webhookNotifier.test.js`:
- Around line 50-53: The test teardown currently deletes global.fetch which can
leak state; instead capture the original fetch reference (e.g., const
originalFetch = global.fetch or declare let originalFetch and set it in a
beforeAll/beforeEach) and in afterEach call vi.restoreAllMocks(); then restore
global.fetch = originalFetch (handling the case it was undefined by setting it
to undefined) rather than using delete; update the afterEach block that
references global.fetch to perform this restore so tests remain isolated.
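The capture-and-restore pattern described above, reduced to its essentials. `stubFetch`/`restoreFetch` are illustrative; in the actual test file the capture would live in `beforeEach` and the restore in `afterEach` alongside `vi.restoreAllMocks()`:

```javascript
// Capture the original global.fetch reference and put it back verbatim,
// instead of `delete global.fetch` (which can leak state across tests).
let originalFetch;

function stubFetch(fake) {
  originalFetch = global.fetch;
  global.fetch = fake;
}

function restoreFetch() {
  global.fetch = originalFetch; // if it was undefined, undefined is restored
}
```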
---
Outside diff comments:
In `@src/modules/aiFeedback.js`:
- Around line 177-181: The code incorrectly defaults negative to 1 causing
impossible aggregates; change the fallback so negative = row?.negative || 0 and
compute total as row?.total ?? (positive + negative) (or at least ensure total
uses 0 if missing), then keep ratio = total > 0 ? Math.round((positive / total)
* 100) : null; update the variables row, positive, negative, total, ratio
accordingly in src/modules/aiFeedback.js to use these corrected fallbacks.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (19)
- biome.json
- migrations/005_webhook_notifications.cjs
- src/api/index.js
- src/api/routes/ai-feedback.js
- src/api/routes/notifications.js
- src/config-listeners.js
- src/index.js
- src/modules/aiFeedback.js
- src/modules/config.js
- src/modules/moderation.js
- src/modules/triage-respond.js
- src/modules/webhookNotifier.js
- src/utils/health.js
- tests/api/routes/notifications.test.js
- tests/config-listeners.test.js
- tests/modules/webhookNotifier.test.js
- tests/redis.test.js
- web/src/app/api/guilds/[guildId]/ai-feedback/recent/route.ts
- web/src/app/api/guilds/[guildId]/ai-feedback/stats/route.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Greptile Review
🧰 Additional context used
📓 Path-based instructions (7)
migrations/*.cjs
📄 CodeRabbit inference engine (AGENTS.md)
Database migrations must use CommonJS (.cjs) format and follow the naming convention `NNN_description.cjs`, where NNN is a zero-padded sequence number
Files:
migrations/005_webhook_notifications.cjs
**/*.js
📄 CodeRabbit inference engine (AGENTS.md)
`**/*.js`: Use ESM modules — use `import`/`export`, never `require()`
Always use the `node:` protocol for Node.js builtins (e.g. `import { readFileSync } from 'node:fs'`)
Always use semicolons in JavaScript code
Use single quotes for strings (enforced by Biome)
Use 2-space indentation (enforced by Biome)
Files:
src/modules/aiFeedback.js, src/api/index.js, src/api/routes/ai-feedback.js, tests/api/routes/notifications.test.js, src/index.js, src/modules/config.js, src/api/routes/notifications.js, src/utils/health.js, tests/config-listeners.test.js, src/modules/moderation.js, src/modules/triage-respond.js, src/config-listeners.js, src/modules/webhookNotifier.js, tests/modules/webhookNotifier.test.js, tests/redis.test.js
src/**/*.js
📄 CodeRabbit inference engine (AGENTS.md)
`src/**/*.js`: Always use Winston logger — `import { info, warn, error } from '../logger.js'`. NEVER use `console.log`, `console.warn`, `console.error`, or any `console.*` method in src/ files — replace any existing console calls with Winston equivalents
Pass structured metadata to Winston logs: `info('Message processed', { userId, channelId })`
Files:
src/modules/aiFeedback.js, src/api/index.js, src/api/routes/ai-feedback.js, src/index.js, src/modules/config.js, src/api/routes/notifications.js, src/utils/health.js, src/modules/moderation.js, src/modules/triage-respond.js, src/config-listeners.js, src/modules/webhookNotifier.js
src/modules/*.js
📄 CodeRabbit inference engine (AGENTS.md)
Per-request modules (AI, spam, moderation) should call `getConfig(guildId)` on every invocation for automatic config changes. Stateful resources should use `onConfigChange` listeners for reactive updates
Files:
src/modules/aiFeedback.js, src/modules/config.js, src/modules/moderation.js, src/modules/triage-respond.js, src/modules/webhookNotifier.js
src/api/routes/*.js
📄 CodeRabbit inference engine (AGENTS.md)
Always use custom error classes from `src/utils/errors.js` and log errors with context before re-throwing
Files:
src/api/routes/ai-feedback.js, src/api/routes/notifications.js
tests/**/*.js
📄 CodeRabbit inference engine (AGENTS.md)
All new code must include tests. Test coverage must maintain an 80% threshold on statements, branches, functions, and lines. Run `pnpm test` before every commit
Files:
tests/api/routes/notifications.test.js, tests/config-listeners.test.js, tests/modules/webhookNotifier.test.js, tests/redis.test.js
web/src/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Use TypeScript with type safety. Share contracts between dashboard UI and API responses via `web/src/types/analytics.ts` and similar type definition files
Files:
web/src/app/api/guilds/[guildId]/ai-feedback/stats/route.ts, web/src/app/api/guilds/[guildId]/ai-feedback/recent/route.ts
🧠 Learnings (7)
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to migrations/*.cjs : Database migrations must use CommonJS (.cjs) format and follow naming convention `NNN_description.cjs` where NNN is a zero-padded sequence number
Applied to files:
migrations/005_webhook_notifications.cjs
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to src/modules/*.js : Per-request modules (AI, spam, moderation) should call `getConfig(guildId)` on every invocation for automatic config changes. Stateful resources should use `onConfigChange` listeners for reactive updates
Applied to files:
src/modules/aiFeedback.js, src/index.js, src/modules/config.js, src/config-listeners.js, src/modules/webhookNotifier.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to src/commands/*.js : Moderation commands must follow the shared pattern: `deferReply({ ephemeral: true })`, validate inputs, `sendDmNotification()`, execute Discord action, `createCase()`, `sendModLogEmbed()`, `checkEscalation()`
Applied to files:
src/modules/moderation.js, src/modules/triage-respond.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Moderation case numbering is per-guild sequential and assigned atomically using `COALESCE(MAX(case_number), 0) + 1` in a single INSERT statement
Applied to files:
src/modules/moderation.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to src/**/*.js : Always use Winston logger — `import { info, warn, error } from '../logger.js'`. NEVER use `console.log`, `console.warn`, `console.error`, or any `console.*` method in src/ files — replace any existing console calls with Winston equivalents
Applied to files:
src/config-listeners.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to AGENTS.md : Keep AGENTS.md (the AI agent guide) synchronized with key files, code conventions, 'how to add' guides, and common pitfalls. Update after every code change that affects these sections
Applied to files:
src/modules/webhookNotifier.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to config.json : When adding a new config section or key, document it in README.md's config reference section
Applied to files:
src/modules/webhookNotifier.js
🧬 Code graph analysis (13)
src/modules/aiFeedback.js (1)
src/db.js (1)
getPool(314-319)
src/api/index.js (2)
src/api/middleware/auth.js (1)
requireAuth(36-70)
src/api/middleware/auditLog.js (1)
auditLogMiddleware(139-243)
src/api/routes/ai-feedback.js (1)
src/modules/aiFeedback.js (6)
pool(101-101), pool(135-135), getFeedbackStats(162-188), getFeedbackTrend(196-223), getRecentFeedback(231-256), row(177-177)
web/src/app/api/guilds/[guildId]/ai-feedback/stats/route.ts (1)
web/src/lib/bot-api-proxy.ts (4)
authorizeGuildAdmin(37-69), getBotApiConfig(82-92), buildUpstreamUrl(100-113), proxyToBotApi(135-176)
src/index.js (2)
src/utils/health.js (5)
measureEventLoopLag(178-183), MEMORY_DEGRADED_THRESHOLD(165-165), EVENT_LOOP_LAG_THRESHOLD_MS(170-170)
src/modules/webhookNotifier.js (1)
fireEventAllGuilds(290-302)
src/api/routes/notifications.js (2)
src/modules/webhookNotifier.js (5)
res(76-81), cfg(208-208), result(119-119), testEndpoint(270-279), getDeliveryLog(247-261)
src/modules/config.js (4)
getConfig(282-313), err(94-94), setConfigValue(496-606), result(402-402)
tests/config-listeners.test.js (1)
src/config-listeners.js (1)
registerConfigListeners(29-122)
src/modules/moderation.js (1)
src/modules/webhookNotifier.js (1)
fireEvent(205-238)
src/modules/triage-respond.js (1)
src/modules/webhookNotifier.js (1)
fireEvent(205-238)
src/config-listeners.js (2)
src/modules/config.js (2)
onConfigChange(363-365), path(441-441)
src/modules/webhookNotifier.js (1)
fireEvent(205-238)
src/modules/webhookNotifier.js (3)
src/modules/config.js (9)
err(94-94), pool(127-127), pool(146-146), pool(524-524), pool(621-621), pool(679-679), result(402-402), getConfig(282-313), getAllGuildIds(359-361)
src/db.js (1)
getPool(314-319)
src/logger.js (2)
warn(238-240), info(231-233)
web/src/app/api/guilds/[guildId]/ai-feedback/recent/route.ts (2)
web/src/app/api/guilds/[guildId]/ai-feedback/stats/route.ts (2)
dynamic(10-10), GET(12-49)
web/src/lib/bot-api-proxy.ts (4)
authorizeGuildAdmin(37-69), getBotApiConfig(82-92), buildUpstreamUrl(100-113), proxyToBotApi(135-176)
tests/modules/webhookNotifier.test.js (1)
src/modules/webhookNotifier.js (12)
signPayload(50-52), body(115-115), body(277-277), WEBHOOK_EVENTS(24-32), payload(227-232), payload(271-276), result(119-119), deliverToEndpoint(114-193), fireEvent(205-238), getDeliveryLog(247-261), testEndpoint(270-279)
🪛 GitHub Check: CodeQL
src/api/index.js
[failure] 63-63: Missing rate limiting
This route handler performs authorization, but is not rate-limited.
🔇 Additional comments (18)
tests/redis.test.js (1)
51-55: LGTM! The class-based mock correctly simulates ioredis instantiation behavior. Using Object.assign(this, mockClient) ensures the mock instance receives the spy methods, allowing proper verification of mockClient.on calls.
web/src/app/api/guilds/[guildId]/ai-feedback/recent/route.ts (1)
1-49: LGTM! The route handler follows the established pattern from the stats route, correctly implementing:
- Next.js 15 async params API
- Guild admin authorization
- Bot API proxy with proper error handling
- Optional query parameter passthrough
web/src/app/api/guilds/[guildId]/ai-feedback/stats/route.ts (1)
1-49: LGTM! Implementation correctly mirrors the recent route pattern with appropriate endpoint-specific differences (days param vs limit param). Authorization, config retrieval, and proxy logic are all properly implemented.
src/api/index.js (2)
19-19: LGTM!Import follows the established pattern for route modules.
62-63: LGTM! The notifications router is correctly mounted with requireAuth() and auditLogMiddleware(), following the same protection pattern as other guild-scoped routes. The static analysis hint about missing rate limiting is a false positive — as noted in prior review, the global rate limiter in src/api/server.js applies to all endpoints.
src/modules/moderation.js (2)
14-14: LGTM!Import follows the ESM pattern used throughout the codebase.
151-160: LGTM! The webhook notification is correctly implemented:
- Placed after transaction commit to ensure case exists before notification
- Fire-and-forget pattern with suppressed errors prevents webhook failures from affecting moderation flow
- Payload includes all relevant case details for downstream consumers
src/modules/config.js (2)
353-361: LGTM! The getAllGuildIds() function correctly returns only guilds with config entries. The JSDoc accurately describes its purpose for webhook notification broadcasting.
399-400: LGTM! The wildcard listener support (path === '*') is cleanly integrated into the existing matching logic, enabling the global config.changed webhook handler without disrupting exact-match or prefix-wildcard listeners.
src/modules/triage-respond.js (1)
249-257: Webhook emission in moderation flow looks correct. This is a clean non-blocking integration and the payload size guard on reasoning is a good safety measure.
src/config-listeners.js (1)
115-121: Config-change webhook hook is safely scoped. Filtering logging.*/notifications.* and requiring a concrete guild ID is a good guardrail for this listener.
src/utils/health.js (1)
165-183: Health threshold exports and lag helper look good. Clear constants + a focused helper keep the integration points straightforward.
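For context, a lag helper like the one reviewed here is commonly implemented by timing how much later than requested a scheduled timer fires. The following is a minimal sketch under that assumption — it is illustrative, not the repo's actual `measureEventLoopLag`:

```javascript
// Measure event-loop lag: schedule a timer for `sampleMs` and report how
// much later it actually fired. Work that blocks the loop inflates the delta.
function measureEventLoopLag(sampleMs = 50) {
  return new Promise((resolve) => {
    const start = process.hrtime.bigint();
    setTimeout(() => {
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      // Lag is the overshoot past the requested delay, floored at zero.
      resolve(Math.max(0, elapsedMs - sampleMs));
    }, sampleMs);
  });
}
```

A reading consistently above a threshold like `EVENT_LOOP_LAG_THRESHOLD_MS` would then mark the process degraded.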
migrations/005_webhook_notifications.cjs (1)
1-42: Migration structure and rollback ordering are solid. This is well-formed and reversible.
As per coding guidelines
migrations/*.cjs: "Database migrations must use CommonJS (.cjs) format and follow naming convention NNN_description.cjs where NNN is a zero-padded sequence number".
386-399: Lifecycle webhook hooks are wired cleanly. Emitting bot.error, bot.disconnected, and bot.reconnected from client/shard events aligns well with the new outbound notification objective.
src/api/routes/notifications.js (1)
52-309: Guild authorization is properly enforced via mount-level middleware. The notificationsRouter is mounted at src/api/index.js:63 with requireAuth() and auditLogMiddleware(), which runs before any route handler executes. All routes are protected by authentication; there is no privilege-escalation risk.
Minor note: Line 158 logs the raw webhook URL in structured metadata. If webhook URLs may contain secrets or sensitive tokens, sanitize or mask the URL before logging.
tests/api/routes/notifications.test.js (1)
79-360: Strong route-level coverage. Auth, validation, CRUD, test ping, and delivery-log behaviors are all exercised with good depth.
tests/modules/webhookNotifier.test.js (1)
95-465: Great behavioral test depth for notifier flows. The suite covers success/failure/retry/filtering/logging paths thoroughly and gives strong regression protection.
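For reference, the retry path these tests exercise follows the standard exponential-backoff shape (the PR description mentions 3 retries at 1s/3s/9s, i.e. a ×3 factor). A generic sketch of that pattern — not the repo's actual `deliverToEndpoint` — with an injectable `sleep` so tests don't wait on real timers:

```javascript
// Generic retry with exponential backoff: delays of base, base*factor, base*factor^2, ...
// `sleep` is injectable so tests can record delays instead of waiting.
async function withRetry(fn, { attempts = 4, baseDelayMs = 1000, factor = 3, sleep } = {}) {
  const wait = sleep ?? ((ms) => new Promise((resolve) => setTimeout(resolve, ms)));
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn(i + 1); // pass 1-based attempt number
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) await wait(baseDelayMs * factor ** i);
    }
  }
  throw lastErr;
}
```

With `attempts: 4` (1 initial + 3 retries) the waits between attempts come out to 1000 ms, 3000 ms, and 9000 ms, matching the schedule described in the PR.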
src/modules/webhookNotifier.js (1)
50-52: Signing implementation is clean and deterministic. The HMAC-SHA256 format and output shape are straightforward and correct.
| "!logs", | ||
| "!data" | ||
| "!data", | ||
| "!feat-issue-164" |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if the feat-issue-164 directory exists
if [ -d "feat-issue-164" ]; then
echo "=== Directory exists ==="
echo "Contents:"
find feat-issue-164 -type f | head -20
echo ""
echo "Directory structure:"
ls -la feat-issue-164
else
echo "Directory does not exist"
fi
echo ""
echo "=== Searching for references to feat-issue-164 or issue-164 ==="
rg "feat-issue-164|issue-164|feat.issue.164" --max-count 20
Repository: VolvoxLLC/volvox-bot
Length of output: 188
🏁 Script executed:
#!/bin/bash
# Also check the full biome.json file to understand context
echo "=== Full biome.json content ==="
cat biome.json
Repository: VolvoxLLC/volvox-bot
Length of output: 1233
Remove the stale !feat-issue-164 exclusion from biome.json.
The feat-issue-164 directory does not exist in the repository and is not referenced elsewhere in the codebase. This exclusion is leftover from development and should be removed to keep the configuration clean.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@biome.json` at line 12, Remove the stale exclusion string "!feat-issue-164"
from the exclusions array in biome.json: locate the entry with the literal
"!feat-issue-164" and delete it so the JSON no longer references that
non-existent directory (ensure the surrounding commas remain valid and the JSON
stays well-formed).
src/api/routes/ai-feedback.js
Outdated
| try { | ||
| [stats, trend] = await Promise.all([ | ||
| getFeedbackStats(guildId, pool), | ||
| getFeedbackTrend(guildId, days, pool), | ||
| ]); | ||
| } catch (_err) { | ||
| return res.status(500).json({ error: 'Failed to fetch AI feedback stats' }); | ||
| } |
Do not swallow route errors in local catch blocks.
These catches return 500 directly without logging context or re-throwing a route-standard custom error, which breaks the shared API error contract.
As per coding guidelines src/api/routes/*.js: "Always use custom error classes from src/utils/errors.js and log errors with context before re-throwing".
Also applies to: 216-220
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/routes/ai-feedback.js` around lines 110 - 117, The catch blocks
around the parallel calls to getFeedbackStats and getFeedbackTrend (and the
similar one at lines 216-220) are swallowing errors; instead, log the error with
context and re-throw a route-standard custom error from src/utils/errors.js.
Replace the bare return res.status(500)... in the catch by calling the
request/process logger (same logger used elsewhere in this route) with a
descriptive message and the caught error, then throw the appropriate custom
error class from src/utils/errors.js (e.g., ApiError/RouteError) so the shared
API error handler can format the response; keep references to getFeedbackStats
and getFeedbackTrend and the surrounding try/catch when making this change.
| } catch (err) { | ||
| logError('Failed to list webhook endpoints', { guildId, error: err.message }); | ||
| return res.status(500).json({ error: 'Failed to retrieve webhook endpoints' }); | ||
| } |
Align route error handling with shared API error conventions.
These handlers return inline 500 responses instead of logging context and re-throwing standardized custom errors.
As per coding guidelines src/api/routes/*.js: "Always use custom error classes from src/utils/errors.js and log errors with context before re-throwing".
Also applies to: 160-163, 211-213, 264-266, 306-308
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/routes/notifications.js` around lines 61 - 64, The catch blocks
currently call logError and return inline 500 responses; replace this pattern by
logging the error with context using logError (preserving guildId and err) and
then re-throw a standardized custom error (e.g., throw new
InternalServerError('Failed to retrieve webhook endpoints', { cause: err }))
from the utilities error classes; update the top of the file to import the
appropriate class (InternalServerError or ApiError) and apply the same change to
the other catch blocks referenced (around the handlers that currently return 500
inline) so all route handlers use logError(...) then throw new
InternalServerError(..., { cause: err }) instead of res.status(500).json(...)
| const { url, events, secret, enabled = true } = req.body || {}; | ||
|
|
||
| if (!url || typeof url !== 'string') { | ||
| return res.status(400).json({ error: 'Missing or invalid "url"' }); | ||
| } | ||
|
|
||
| if (!/^https:\/\/.+/.test(url)) { | ||
| return res.status(400).json({ error: '"url" must be a valid HTTPS URL' }); | ||
| } | ||
|
|
||
| if (!Array.isArray(events) || events.length === 0) { | ||
| return res.status(400).json({ error: '"events" must be a non-empty array' }); | ||
| } | ||
|
|
||
| const invalidEvents = events.filter((e) => !WEBHOOK_EVENTS.includes(e)); | ||
| if (invalidEvents.length > 0) { | ||
| return res.status(400).json({ | ||
| error: `Invalid event types: ${invalidEvents.join(', ')}`, | ||
| validEvents: WEBHOOK_EVENTS, | ||
| }); | ||
| } | ||
|
|
||
| try { | ||
| const cfg = getConfig(guildId); | ||
| const existing = Array.isArray(cfg?.notifications?.webhooks) ? cfg.notifications.webhooks : []; | ||
|
|
||
| if (existing.length >= 20) { | ||
| return res.status(400).json({ error: 'Maximum of 20 webhook endpoints per guild' }); | ||
| } | ||
|
|
||
| const newEndpoint = { | ||
| id: randomUUID(), | ||
| url, | ||
| events, | ||
| enabled: Boolean(enabled), | ||
| ...(secret ? { secret } : {}), | ||
| }; |
Validate secret and enabled types before persisting.
Current coercion allows invalid payloads (e.g., enabled: "false" becomes true, non-string secret can break downstream signing/delivery behavior).
Proposed fix
router.post('/:guildId/notifications/webhooks', async (req, res) => {
const { guildId } = req.params;
const { url, events, secret, enabled = true } = req.body || {};
+
+ if (secret !== undefined && typeof secret !== 'string') {
+ return res.status(400).json({ error: '"secret" must be a string' });
+ }
+
+ if (enabled !== undefined && typeof enabled !== 'boolean') {
+ return res.status(400).json({ error: '"enabled" must be a boolean' });
+ }
if (!url || typeof url !== 'string') {
return res.status(400).json({ error: 'Missing or invalid "url"' });
}
@@
const newEndpoint = {
id: randomUUID(),
url,
events,
- enabled: Boolean(enabled),
+ enabled,
...(secret ? { secret } : {}),
};🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/routes/notifications.js` around lines 117 - 153, The payload
currently coerces types which lets invalid values slip through (e.g., enabled:
"false" becomes true and non-string secret values are accepted); update the
validation in src/api/routes/notifications.js by (1) removing the default
boolean coercion on destructuring (do not set enabled = true on const { url,
events, secret, enabled } = req.body || {}), (2) add explicit checks after the
events validation to return 400 if secret is provided and typeof secret !==
'string', and to return 400 if enabled is provided and typeof enabled !==
'boolean', and (3) when constructing newEndpoint (the object with id:
randomUUID(), url, events, enabled: ...), set enabled to enabled === undefined ?
true : enabled (instead of Boolean(enabled)) so only a real boolean or the
implicit default true is persisted.
| * @param {string} guildId - Guild ID | ||
| * @param {number} [limit=100] - Max entries to return | ||
| * @returns {Promise<Object[]>} Delivery log entries, newest first | ||
| */ | ||
| export async function getDeliveryLog(guildId, limit = 50) { | ||
| const pool = getPool(); | ||
| if (!pool) return []; | ||
|
|
||
| const { rows } = await pool.query( | ||
| `SELECT id, endpoint_id, event_type, status, response_code, response_body, attempt, delivered_at | ||
| FROM webhook_delivery_log | ||
| WHERE guild_id = $1 | ||
| ORDER BY delivered_at DESC | ||
| LIMIT $2`, | ||
| [guildId, Math.min(limit, MAX_LOG_ENTRIES)], | ||
| ); |
Normalize limit before SQL and fix the JSDoc default.
Current clamping only enforces max; invalid/negative values can still leak through. Also JSDoc says 100 while code defaults to 50.
Clamp + docs consistency
- * `@param` {number} [limit=100] - Max entries to return
+ * `@param` {number} [limit=50] - Max entries to return
@@
export async function getDeliveryLog(guildId, limit = 50) {
@@
+ const normalizedLimit = Number.isFinite(Number(limit))
+ ? Math.max(1, Math.min(Math.trunc(Number(limit)), MAX_LOG_ENTRIES))
+ : 50;
+
const { rows } = await pool.query(
@@
- [guildId, Math.min(limit, MAX_LOG_ENTRIES)],
+ [guildId, normalizedLimit],
);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/modules/webhookNotifier.js` around lines 243 - 258, The getDeliveryLog
function has an incorrect JSDoc default (says 100 while code uses 50) and only
clamps the upper bound, allowing negative or non-integer limits to reach the
SQL; update the JSDoc to show the actual default of 50, coerce/normalize the
limit at the top of getDeliveryLog (e.g., parseInt or Number.isInteger logic),
clamp it to a safe range with Math.max(0, Math.min(limit, MAX_LOG_ENTRIES)), and
pass that normalized value (e.g., normalizedLimit) into the pool.query parameter
for LIMIT $2 so negative or invalid values cannot be used in the SQL.
| it('should return 400 for invalid URL format', async () => { | ||
| const res = await request(app) | ||
| .post(`/api/v1/guilds/${GUILD_ID}/notifications/webhooks`) | ||
| .set(authHeaders()) | ||
| .send({ url: 'not-a-url', events: ['bot.error'] }); | ||
|
|
||
| expect(res.status).toBe(400); | ||
| expect(res.body.error).toContain('URL'); | ||
| }); |
🧹 Nitpick | 🔵 Trivial
Add an explicit non-HTTPS URL rejection test.
not-a-url is covered, but http://... isn’t. Add a dedicated case so HTTPS-only enforcement can’t regress silently.
Suggested test addition
it('should return 400 for invalid URL format', async () => {
const res = await request(app)
.post(`/api/v1/guilds/${GUILD_ID}/notifications/webhooks`)
.set(authHeaders())
.send({ url: 'not-a-url', events: ['bot.error'] });
expect(res.status).toBe(400);
expect(res.body.error).toContain('URL');
});
+
+ it('should return 400 for non-HTTPS URL', async () => {
+ const res = await request(app)
+ .post(`/api/v1/guilds/${GUILD_ID}/notifications/webhooks`)
+ .set(authHeaders())
+ .send({ url: 'http://example.com/hook', events: ['bot.error'] });
+
+ expect(res.status).toBe(400);
+ expect(res.body.error).toContain('HTTPS');
+ });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/api/routes/notifications.test.js` around lines 175 - 183, Add a new
test next to the existing invalid-URL case that posts to the same endpoint
(/api/v1/guilds/${GUILD_ID}/notifications/webhooks) using request(app) and
authHeaders(), send a payload with url: 'http://example.com' (or another http://
URL) and events: ['bot.error'], and assert a 400 response and that
res.body.error mentions HTTPS (or at least 'URL') to ensure non-HTTPS URLs are
rejected; reference the existing test name "should return 400 for invalid URL
format" and reuse GUILD_ID and authHeaders() so the HTTPS-only enforcement
cannot regress silently.
| it('registers exactly 12 listeners', () => { | ||
| const config = { logging: { database: { enabled: false } } }; | ||
| registerConfigListeners({ dbPool: {}, config }); | ||
| expect(onConfigChange).toHaveBeenCalledTimes(11); | ||
| expect(onConfigChange).toHaveBeenCalledTimes(12); |
🧹 Nitpick | 🔵 Trivial
Consider adding explicit verification for the wildcard listener.
The listener count was updated from 11 to 12, but the test at lines 79-95 doesn't explicitly verify that the '*' wildcard listener (for webhook notifications) is registered. Consider adding a check for completeness:
expect(registeredKeys).toContain('starboard.*');
expect(registeredKeys).toContain('reputation.*');
+ expect(registeredKeys).toContain('*');
});🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/config-listeners.test.js` around lines 97 - 100, The test increased
expected listener count but doesn't assert the wildcard ('*') listener is
present; update the test that calls registerConfigListeners to also verify that
onConfigChange was invoked for the '*' channel (e.g., check one of
onConfigChange's calls has '*' as the first argument or that a call exists where
the listener key is '*') so the webhook wildcard listener registration is
explicitly validated alongside the call count.
| "!coverage", | ||
| "!logs", | ||
| "!data" | ||
| "!data", |
Remove feat-issue-164 from ignore list and remove the submodule from the repository (per previous feedback)
Prompt To Fix With AI
This is a comment left during a code review.
Path: biome.json
Line: 11
Comment:
Remove `feat-issue-164` from ignore list and remove the submodule from the repository (per previous feedback)
How can I resolve this? If you propose a fix, please make it concise.

The previous URL validation regex `/^https?:\/\/.+/` was too permissive and allowed requests to internal/private network addresses, including:
51fd8c4 to 1fe44d7 (Compare)
Pull request overview
Copilot reviewed 21 out of 21 changed files in this pull request and generated 3 comments.
| // Prune old log entries for this guild (keep most recent MAX_LOG_ENTRIES) | ||
| await pool.query( | ||
| `DELETE FROM webhook_delivery_log | ||
| WHERE guild_id = $1 | ||
| AND id NOT IN ( | ||
| SELECT id FROM webhook_delivery_log | ||
| WHERE guild_id = $1 | ||
| ORDER BY delivered_at DESC | ||
| LIMIT $2 | ||
| )`, | ||
| [guildId, MAX_LOG_ENTRIES], | ||
| ); | ||
| } catch (dbErr) { | ||
| warn('Failed to log webhook delivery', { error: dbErr.message }); | ||
| } |
The pruning query runs on every delivery attempt (including retries), not just after the final attempt. For an endpoint with 4 total attempts (1 initial + 3 retries), this means 4 INSERT + 4 DELETE subquery pairs are executed on the database. The pruning should only run once — either only on success, or unconditionally but only after the loop finishes.
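One way to address this — sketched here against hypothetical deliver/log/prune helpers rather than the actual module code — is to keep per-attempt logging inside the retry loop but run the prune exactly once, after the loop finishes:

```javascript
// Hypothetical restructuring: log every attempt, but prune exactly once per
// delivery (after the final attempt), instead of on each retry.
async function deliverWithSinglePrune(attemptFns, logAttempt, pruneOldEntries) {
  let success = false;
  try {
    for (let attempt = 1; attempt <= attemptFns.length; attempt++) {
      const ok = await attemptFns[attempt - 1]();
      await logAttempt(attempt, ok); // one INSERT per attempt, as today
      if (ok) {
        success = true;
        break;
      }
    }
  } finally {
    // Single DELETE per delivery, regardless of success or failure.
    await pruneOldEntries();
  }
  return success;
}
```

This keeps the delivery log complete while cutting the DELETE-with-subquery from once per attempt to once per delivery.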
| // IPv6 check (basic patterns) | ||
| if (normalized.includes(':') && (normalized.startsWith('::1') || normalized.startsWith('fe80:'))) { | ||
| return true; |
The isBlockedHostname function only checks for ::1 and fe80: IPv6 prefixes, but does not call isBlockedIp for full IPv6 literal addresses like ::ffff:127.0.0.1 (IPv4-mapped IPv6 loopback). A URL such as https://[::ffff:127.0.0.1]/webhook would pass through isBlockedHostname (because it starts with ::f, not ::1), skip the IPv4 regex check, and validateUrlForSsrfSync would return { valid: true }, allowing a webhook to be registered pointing at the loopback. The fix is to also call isBlockedIp inside isBlockedHostname for any hostname that contains : (potential IPv6 literal).
Suggested change:
- // IPv6 check (basic patterns)
- if (normalized.includes(':') && (normalized.startsWith('::1') || normalized.startsWith('fe80:'))) {
-   return true;
+ // IPv6 literal check
+ // Any hostname containing ':' is treated as a potential IPv6 literal (including IPv4-mapped IPv6).
+ // Delegate the blocking decision to isBlockedIp to ensure consistent handling with resolved addresses.
+ if (normalized.includes(':')) {
+   return isBlockedIp(normalized);
| healthCheckInterval = setInterval(async () => { | ||
| const mem = process.memoryUsage(); | ||
| const memRatio = mem.heapTotal > 0 ? mem.heapUsed / mem.heapTotal : 0; | ||
| const lag = await measureEventLoopLag(); | ||
| const degraded = memRatio > MEMORY_DEGRADED_THRESHOLD || lag > EVENT_LOOP_LAG_THRESHOLD_MS; | ||
| if (degraded) { | ||
| fireEventAllGuilds('health.degraded', { | ||
| memoryUsedMb: Math.round(mem.heapUsed / 1024 / 1024), | ||
| memoryTotalMb: Math.round(mem.heapTotal / 1024 / 1024), | ||
| memoryRatio: Math.round(memRatio * 100), | ||
| eventLoopLagMs: lag, | ||
| }).catch(() => {}); | ||
| } | ||
| }, 60_000).unref(); |
Health check fires health.degraded webhook every 60 seconds while system remains degraded, potentially spamming endpoints. Track degradation state and only fire when transitioning from healthy→degraded, or implement a cooldown.
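A small state machine is enough to implement the healthy→degraded edge trigger suggested above. This sketch is illustrative only and uses a hypothetical `fire` callback rather than the actual `fireEventAllGuilds`:

```javascript
// Fire only on the healthy -> degraded transition; stay silent while the
// system remains degraded, and re-arm once it recovers.
function createDegradedNotifier(fire) {
  let wasDegraded = false;
  return function check(isDegraded, payload) {
    const shouldFire = isDegraded && !wasDegraded;
    wasDegraded = isDegraded;
    if (shouldFire) fire(payload);
    return shouldFire;
  };
}
```

The 60-second interval would call `check(degraded, payload)` each tick; endpoints then receive one notification per degradation episode instead of one per minute.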
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/index.js
Line: 139-152
Comment:
Health check fires `health.degraded` webhook every 60 seconds while system remains degraded, potentially spamming endpoints. Track degradation state and only fire when transitioning from healthy→degraded, or implement a cooldown.
How can I resolve this? If you propose a fix, please make it concise.

- Removed inner catch blocks that swallowed errors with catch (_err)
- Now errors bubble up to outer catch block and use next(err)
- Follows API error handling conventions
- Use next(err) instead of inline 500 responses in all routes
- Add redactUrl() helper to avoid leaking secrets in logs
- Validate secret and enabled types before persisting
- Clamp limit to positive range (1-100) to prevent DB errors
- Add safeGetPool() wrapper to handle getPool() exceptions
- Use safeGetPool() in both deliverToEndpoint and getDeliveryLog
- Prevents crashes when DB pool is unavailable
- Save original fetch in beforeEach and restore in afterEach
- Properly restores global.fetch to avoid polluting global state
- Improves test isolation
Actionable comments posted: 5
♻️ Duplicate comments (4)
src/modules/webhookNotifier.js (1)
260-275: ⚠️ Potential issue | 🟠 Major
Normalize limit before SQL and fix the JSDoc default.
LIMIT $2 currently receives unchecked values (e.g., negative/NaN), and the JSDoc default (100) is inconsistent with code (50).
✅ Suggested fix
- * `@param` {number} [limit=100] - Max entries to return
+ * `@param` {number} [limit=50] - Max entries to return
@@
  export async function getDeliveryLog(guildId, limit = 50) {
@@
+ const normalizedLimit = Number.isFinite(Number(limit))
+   ? Math.max(1, Math.min(Math.trunc(Number(limit)), MAX_LOG_ENTRIES))
+   : 50;
+
  const { rows } = await pool.query(
@@
-   [guildId, Math.min(limit, MAX_LOG_ENTRIES)],
+   [guildId, normalizedLimit],
  );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/webhookNotifier.js` around lines 260-275, update getDeliveryLog: normalize and validate the limit before passing it to the SQL query, and correct the JSDoc default to match the function's default of 50. Coerce limit to a number, sanitize it (fallback to 50 if NaN, clamp between 1 and MAX_LOG_ENTRIES using Math.min/Math.max), then pass the sanitized value as the second query parameter so the SQL never receives negative/NaN values.
tests/api/routes/notifications.test.js (1)
175-183: ⚠️ Potential issue | 🟡 Minor
Add an explicit non-HTTPS rejection test.
Please add a dedicated http://... case so HTTPS-only enforcement can’t regress silently.
➕ Suggested test
  it('should return 400 for invalid URL format', async () => {
@@
    expect(res.body.error).toContain('URL');
  });
+
+ it('should return 400 for non-HTTPS URL', async () => {
+   const res = await request(app)
+     .post(`/api/v1/guilds/${GUILD_ID}/notifications/webhooks`)
+     .set(authHeaders())
+     .send({ url: 'http://example.com/hook', events: ['bot.error'] });
+
+   expect(res.status).toBe(400);
+   expect(res.body.error).toContain('HTTPS');
+ });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/api/routes/notifications.test.js` around lines 175-183, add a new test case alongside the existing "should return 400 for invalid URL format" that posts a webhook URL starting with "http://" to `/api/v1/guilds/${GUILD_ID}/notifications/webhooks` using request(app) and authHeaders(), and assert a 400 response whose error mentions HTTPS, so HTTPS-only enforcement can't regress silently.
src/api/routes/notifications.js (2)
83-85: ⚠️ Potential issue | 🟠 Major
Route handlers should log context and re-throw standardized API errors.
These catch blocks call next(err) directly without contextual route logging or wrapping into shared API error classes.
🧩 Example pattern to apply consistently
+import { InternalServerError } from '../../utils/errors.js';
+import { info, error as logError } from '../../logger.js';
@@
  } catch (err) {
-   next(err);
+   logError('Failed to list webhook endpoints', { guildId, error: err?.message });
+   next(new InternalServerError('Failed to list webhook endpoints', { cause: err }));
  }
As per coding guidelines src/api/routes/*.js: "Always use custom error classes from src/utils/errors.js and log errors with context before re-throwing".
Also applies to: 203-205, 252-254, 304-306, 348-350
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/routes/notifications.js` around lines 83 - 85, Catch blocks in notifications.js currently call next(err) directly; update them to log contextual info and rethrow a standardized API error from src/utils/errors.js. Specifically, in each catch in the route handlers (the blocks using next(err)) call the shared logger with route context (HTTP method, req.path, req.params, req.body, and req.user?.id if available), then convert/wrap the original error into the project’s API error class (e.g., ApiError or mapToApiError from src/utils/errors.js) preserving status/message and pass that wrapped error to next(); apply the same change for the other catch sites referenced (the other three catch blocks).
23-33: ⚠️ Potential issue | 🟠 Major
Redaction still leaks tokenized webhook paths into logs.
Many webhook providers embed secrets in path segments. Returning parsed.toString() still logs that sensitive path.
🔐 Safer redaction approach
  function redactUrl(url) {
    try {
      const parsed = new URL(url);
-     // Redact query string (may contain secrets)
-     parsed.search = '?[REDACTED]';
-     // Redact password in userinfo if present
-     if (parsed.password) {
-       parsed.password = '[REDACTED]';
-     }
-     return parsed.toString();
+     parsed.username = '';
+     parsed.password = '';
+     return `${parsed.origin}/[REDACTED]`;
    } catch {
      // If URL parsing fails, return a safe placeholder
      return '[INVALID URL]';
    }
  }
Also applies to: 197-201
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/routes/notifications.js` around lines 23 - 33, The redactUrl function currently returns parsed.toString(), which leaks sensitive tokens embedded in path segments; update redactUrl (and the duplicate implementation at the other location) to also redact the path and fragment by replacing parsed.pathname with a safe placeholder like '/[REDACTED]' (and parsed.hash = ''), preserve host/port and redacted search/userinfo as already implemented, and ensure the catch returns a non-sensitive placeholder or the original host-only redaction instead of the full input URL.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/api/routes/ai-feedback.js`:
- Around line 214-222: The mapping in ai-feedback.js that normalizes rawFeedback
to camelCase is redundantly handling snake_case fallbacks (message_id,
channel_id, user_id, feedback_type, created_at) even though getRecentFeedback
already returns camelCase; remove the "?? row.snake_case" fallbacks and collapse
the mapping to use only camelCase keys (id, messageId, channelId, userId,
feedbackType, createdAt), and if you need protection add a single normalization
function (e.g., normalizeFeedbackRow) at the DB boundary or update
getRecentFeedback to guarantee shape rather than handling dual shapes here.
- Around line 95-99: Replace the inline 503 JSON responses in ai-feedback.js
(where you check req.app.locals.dbPool and similar checks at the other spot)
with throwing a route-standard custom error imported from src/utils/errors.js
(e.g., DatabaseUnavailableError or the appropriate ServiceUnavailable error
class), and before throwing call the route logger with context (use req.log or
req.app.locals.logger) including identifiers like route name, HTTP method and
that dbPool is missing; remove the direct res.status(503).json(...) and let the
central error middleware handle the response; apply the same change for the
second occurrence around the later check (the block currently returning 503 at
lines ~197-201).
In `@src/api/routes/notifications.js`:
- Around line 149-153: validateUrlForSsrfSync returns an object ({ valid, error
}) and does not throw, so the current try/catch does not prevent blocked URLs;
update the notifications route to call validateUrlForSsrfSync(url), inspect the
returned object (check valid === false or presence of error) and return
res.status(400).json({ error: error || 'Invalid URL' }) when validation fails,
removing or bypassing the try/catch around it and ensuring the rest of the
handler only proceeds when valid is true.
In `@src/api/utils/ssrfProtection.js`:
- Around line 61-64: The IPv6 checks in ssrfProtection currently only block ::1
and addresses starting with 'fe80:', which misses the full link-local range
(fe80::/10) and unique local addresses (fd00::/8 / fc00::/7). Update the
validation around normalizedIp (the same area that currently checks
normalizedIp.startsWith('fe80:')) to reject the entire fe80::/10 range (any
address whose first 10 bits are 1111111010, i.e. fe80–febf) and the fc00::/7 ULA
range (fc00–fdff, but at minimum fd00::/8 per RFC4193), in addition to ::1; use
consistent prefix checks or a small CIDR-aware helper used by the module (e.g.,
extend isLocalAddress / isPrivateIp logic) to return true for those ranges so
they are treated as internal and blocked.
In `@tests/api/utils/ssrfProtection.test.js`:
- Around line 81-95: Add explicit IPv6 deny tests inside the existing "IPv6"
describe block to cover unique local and additional link-local ranges: call
isBlockedIp('fd00::1') and isBlockedIp('fc00::1') and assert they return true,
and add another link-local-style case such as isBlockedIp('fe90::1') asserting
true; place these expectations alongside the existing IPv6 tests in
tests/api/utils/ssrfProtection.test.js so the "IPv6" suite covers fd00::/8,
fc00::/8, and a non-fe80 link-local example.
---
Duplicate comments:
In `@src/api/routes/notifications.js`:
- Around line 83-85: Catch blocks in notifications.js currently call next(err)
directly; update them to log contextual info and rethrow a standardized API
error from src/utils/errors.js. Specifically, in each catch in the route
handlers (the blocks using next(err)) call the shared logger with route context
(HTTP method, req.path, req.params, req.body, and req.user?.id if available),
then convert/wrap the original error into the project’s API error class (e.g.,
ApiError or mapToApiError from src/utils/errors.js) preserving status/message
and pass that wrapped error to next(); apply the same change for the other catch
sites referenced (the other three catch blocks).
- Around line 23-33: The redactUrl function currently returns parsed.toString(),
which leaks sensitive tokens embedded in path segments; update redactUrl (and
the duplicate implementation at the other location) to also redact the path and
fragment by replacing parsed.pathname with a safe placeholder like '/[REDACTED]'
(and parsed.hash = ''), preserve host/port and redacted search/userinfo as
already implemented, and ensure the catch returns a non-sensitive placeholder or
the original host-only redaction instead of the full input URL.
In `@src/modules/webhookNotifier.js`:
- Around line 260-275: Update getDeliveryLog: normalize and validate the limit
before passing it to the SQL query and correct the JSDoc default to match the
function's default of 50. Inside getDeliveryLog (and near safeGetPool usage),
coerce limit to a number, sanitize it (e.g., fallback to 50 if NaN, clamp
between 1 and MAX_LOG_ENTRIES using Math.min/Math.max), then pass the sanitized
value as the second query parameter; also update the function JSDoc `@param`
default to 50. Ensure the SQL never receives negative/NaN values by referencing
the sanitized variable instead of the raw limit.
In `@tests/api/routes/notifications.test.js`:
- Around line 175-183: Add a new test case alongside the existing "should return
400 for invalid URL format" that specifically submits a webhook URL starting
with "http://" (e.g., 'http://example.com') to the POST endpoint used in the
file (the route string `/api/v1/guilds/${GUILD_ID}/notifications/webhooks` and
helper calls like request(app) and authHeaders()) and assert a 400 response and
that the error message mentions HTTPS or rejects non-HTTPS; this ensures
HTTPS-only enforcement can't regress silently.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (7)
- src/api/routes/ai-feedback.js
- src/api/routes/notifications.js
- src/api/utils/ssrfProtection.js
- src/modules/webhookNotifier.js
- tests/api/routes/notifications.test.js
- tests/api/utils/ssrfProtection.test.js
- tests/modules/webhookNotifier.test.js
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Greptile Review
- GitHub Check: Agent
🧰 Additional context used
📓 Path-based instructions (5)
**/*.js
📄 CodeRabbit inference engine (AGENTS.md)
**/*.js: Use ESM modules — use `import`/`export`, never `require()`
Always use `node:` protocol for Node.js builtins (e.g. `import { readFileSync } from 'node:fs'`)
Always use semicolons in JavaScript code
Use single quotes for strings (enforced by Biome)
Use 2-space indentation (enforced by Biome)
Files:
tests/api/utils/ssrfProtection.test.js, src/api/utils/ssrfProtection.js, tests/api/routes/notifications.test.js, src/modules/webhookNotifier.js, src/api/routes/ai-feedback.js, tests/modules/webhookNotifier.test.js, src/api/routes/notifications.js
tests/**/*.js
📄 CodeRabbit inference engine (AGENTS.md)
All new code must include tests. Test coverage must maintain an 80% threshold on statements, branches, functions, and lines. Run `pnpm test` before every commit
Files:
tests/api/utils/ssrfProtection.test.js, tests/api/routes/notifications.test.js, tests/modules/webhookNotifier.test.js
src/**/*.js
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.js: Always use Winston logger — `import { info, warn, error } from '../logger.js'`. NEVER use `console.log`, `console.warn`, `console.error`, or any `console.*` method in src/ files — replace any existing console calls with Winston equivalents
Pass structured metadata to Winston logs: `info('Message processed', { userId, channelId })`
Files:
src/api/utils/ssrfProtection.js, src/modules/webhookNotifier.js, src/api/routes/ai-feedback.js, src/api/routes/notifications.js
src/modules/*.js
📄 CodeRabbit inference engine (AGENTS.md)
Per-request modules (AI, spam, moderation) should call `getConfig(guildId)` on every invocation for automatic config changes. Stateful resources should use `onConfigChange` listeners for reactive updates
Files:
src/modules/webhookNotifier.js
src/api/routes/*.js
📄 CodeRabbit inference engine (AGENTS.md)
Always use custom error classes from `src/utils/errors.js` and log errors with context before re-throwing
Files:
src/api/routes/ai-feedback.js, src/api/routes/notifications.js
🧠 Learnings (6)
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to AGENTS.md : Keep AGENTS.md (the AI agent guide) synchronized with key files, code conventions, 'how to add' guides, and common pitfalls. Update after every code change that affects these sections
Applied to files:
src/modules/webhookNotifier.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to config.json : When adding a new config section or key, document it in README.md's config reference section
Applied to files:
src/modules/webhookNotifier.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to src/modules/*.js : Per-request modules (AI, spam, moderation) should call `getConfig(guildId)` on every invocation for automatic config changes. Stateful resources should use `onConfigChange` listeners for reactive updates
Applied to files:
src/modules/webhookNotifier.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to src/api/routes/*.js : Always use custom error classes from `src/utils/errors.js` and log errors with context before re-throwing
Applied to files:
src/api/routes/ai-feedback.js, src/api/routes/notifications.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Applies to tests/**/*.js : All new code must include tests. Test coverage must maintain an 80% threshold on statements, branches, functions, and lines. Run `pnpm test` before every commit
Applied to files:
tests/modules/webhookNotifier.test.js
📚 Learning: 2026-03-01T06:03:34.399Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-01T06:03:34.399Z
Learning: Use custom error classes from `src/utils/errors.js` for all error handling
Applied to files:
src/api/routes/notifications.js
🧬 Code graph analysis (5)
tests/api/utils/ssrfProtection.test.js (1)
- src/api/utils/ssrfProtection.js (3)
  - isBlockedIp (52-122), validateUrlForSsrfSync (251-294), validateUrlForSsrf (187-240)

tests/api/routes/notifications.test.js (4)
- src/modules/config.js (7)
  - client (161-161), client (533-533), client (537-540), client (722-722), getConfig (282-313), setConfigValue (496-606), i (824-824)
- src/api/middleware/verifyJwt.js (1)
  - _resetSecretCache (19-21)
- src/modules/webhookNotifier.js (2)
  - res (90-95), testEndpoint (287-297)
- src/api/routes/notifications.js (3)
  - existing (179-179), existing (242-242), existing (291-291)

src/modules/webhookNotifier.js (3)
- src/db.js (1)
  - getPool (314-319)
- src/modules/config.js (9)
  - err (94-94), pool (127-127), pool (146-146), pool (524-524), pool (621-621), pool (679-679), result (402-402), getConfig (282-313), getAllGuildIds (359-361)
- src/logger.js (2)
  - warn (238-240), info (231-233)

src/api/routes/ai-feedback.js (1)
- src/modules/aiFeedback.js (6)
  - pool (101-101), pool (135-135), getFeedbackStats (162-188), getFeedbackTrend (196-223), getRecentFeedback (231-256), row (177-177)

tests/modules/webhookNotifier.test.js (1)
- src/modules/webhookNotifier.js (12)
  - signPayload (62-64), body (129-129), body (294-294), WEBHOOK_EVENTS (24-32), WEBHOOK_EVENTS (24-32), payload (243-248), payload (288-293), result (135-135), deliverToEndpoint (128-209), fireEvent (221-254), getDeliveryLog (263-278), testEndpoint (287-297)
🔇 Additional comments (1)
src/api/routes/ai-feedback.js (1)
109-113: Good change: fetch errors are no longer swallowed.

Letting failures from `getFeedbackStats`/`getFeedbackTrend`/`getRecentFeedback` propagate keeps centralized error handling intact.

Also applies to: 211-213
```js
const pool = req.app.locals.dbPool;

if (!pool) {
  return res.status(503).json({ error: 'Database unavailable' });
}
```
Use route-standard custom errors (with context logging) instead of inline 503 JSON responses.
These early returns bypass the shared API error contract and skip contextual logging for operational diagnosis. Replace direct res.status(503) responses with a custom error from src/utils/errors.js and log route context before re-throwing/forwarding.
As per coding guidelines src/api/routes/*.js: "Always use custom error classes from src/utils/errors.js and log errors with context before re-throwing".
Also applies to: 197-201
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/routes/ai-feedback.js` around lines 95 - 99, Replace the inline 503
JSON responses in ai-feedback.js (where you check req.app.locals.dbPool and
similar checks at the other spot) with throwing a route-standard custom error
imported from src/utils/errors.js (e.g., DatabaseUnavailableError or the
appropriate ServiceUnavailable error class), and before throwing call the route
logger with context (use req.log or req.app.locals.logger) including identifiers
like route name, HTTP method and that dbPool is missing; remove the direct
res.status(503).json(...) and let the central error middleware handle the
response; apply the same change for the second occurrence around the later check
(the block currently returning 503 at lines ~197-201).
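A minimal sketch of the replacement the prompt describes, assuming a 503-style class exists in `src/utils/errors.js`. The `ServiceUnavailableError` name and shape here are assumptions; use whatever that module actually exports.

```javascript
// Hypothetical stand-in for the error class in src/utils/errors.js.
class ServiceUnavailableError extends Error {
  constructor(message) {
    super(message);
    this.name = 'ServiceUnavailableError';
    this.status = 503;
  }
}

// Guard used at the top of a handler: throw instead of writing the
// response inline, so the central error middleware owns the wire format.
function requirePool(pool) {
  if (!pool) {
    throw new ServiceUnavailableError('Database unavailable');
  }
  return pool;
}
```

The route would also log the missing-pool condition with request context before throwing, per the guideline quoted above.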
```js
// Normalize DB row keys to camelCase (handles both raw SQL and aliased results)
const feedback = rawFeedback.map((row) => ({
  id: row.id,
  messageId: row.messageId ?? row.message_id,
  channelId: row.channelId ?? row.channel_id,
  userId: row.userId ?? row.user_id,
  feedbackType: row.feedbackType ?? row.feedback_type,
  createdAt: row.createdAt ?? row.created_at,
}));
```
🧹 Nitpick | 🔵 Trivial
Consider removing redundant snake_case fallbacks in response mapping.
getRecentFeedback already returns camelCase keys, so dual-shape mapping here adds maintenance overhead and can mask contract drift. Prefer enforcing a single shape at module boundary.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/routes/ai-feedback.js` around lines 214 - 222, The mapping in
ai-feedback.js that normalizes rawFeedback to camelCase is redundantly handling
snake_case fallbacks (message_id, channel_id, user_id, feedback_type,
created_at) even though getRecentFeedback already returns camelCase; remove the
"?? row.snake_case" fallbacks and collapse the mapping to use only camelCase
keys (id, messageId, channelId, userId, feedbackType, createdAt), and if you
need protection add a single normalization function (e.g., normalizeFeedbackRow)
at the DB boundary or update getRecentFeedback to guarantee shape rather than
handling dual shapes here.
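The single boundary normalizer the prompt suggests could look like the sketch below. `normalizeFeedbackRow` is a hypothetical name, and placing it at the DB boundary is the reviewer's recommendation rather than existing code.

```javascript
// One normalization point at the DB boundary, so route handlers can
// rely on camelCase keys without dual-shape fallbacks.
function normalizeFeedbackRow(row) {
  return {
    id: row.id,
    messageId: row.messageId ?? row.message_id,
    channelId: row.channelId ?? row.channel_id,
    userId: row.userId ?? row.user_id,
    feedbackType: row.feedbackType ?? row.feedback_type,
    createdAt: row.createdAt ?? row.created_at,
  };
}
```

With this applied inside `getRecentFeedback`, the route mapping collapses to camelCase-only access.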
```js
// IPv6 link-local (fe80::/10)
if (normalizedIp.startsWith('fe80:')) {
  return true;
}
```
Block the full local IPv6 ranges to close SSRF bypasses.
Current checks only cover ::1 and fe80:. Internal IPv6 targets like fd00::/8 (RFC4193) and other fe80::/10 forms can pass as valid.
🔒 Suggested hardening
```diff
 export function isBlockedIp(ip) {
-  // Normalize IPv6 addresses
-  const normalizedIp = ip.toLowerCase().trim();
+  // Normalize IPv6 addresses
+  const normalizedIp = ip.toLowerCase().trim().replace(/^\[|\]$/g, '');

   // IPv6 loopback
   if (normalizedIp === '::1' || normalizedIp === '0:0:0:0:0:0:0:1') {
     return true;
   }

-  // IPv6 link-local (fe80::/10)
-  if (normalizedIp.startsWith('fe80:')) {
+  // IPv6 link-local (fe80::/10 => fe8*, fe9*, fea*, feb*)
+  if (/^fe[89ab][0-9a-f]:/i.test(normalizedIp)) {
+    return true;
+  }
+
+  // IPv6 unique-local (fc00::/7 => fc* / fd*)
+  if (/^(fc|fd)[0-9a-f]{2}:/i.test(normalizedIp)) {
     return true;
   }
@@
 function isBlockedHostname(hostname) {
-  const normalized = hostname.toLowerCase().trim();
+  const normalized = hostname.toLowerCase().trim();
+  const host = normalized.replace(/^\[|\]$/g, '');
@@
-  if (normalized.includes(':') && (normalized.startsWith('::1') || normalized.startsWith('fe80:'))) {
+  if (host.includes(':') && isBlockedIp(host)) {
     return true;
   }
```

Also applies to: 159-162
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/utils/ssrfProtection.js` around lines 61 - 64, The IPv6 checks in
ssrfProtection currently only block ::1 and addresses starting with 'fe80:',
which misses the full link-local range (fe80::/10) and unique local addresses
(fd00::/8 / fc00::/7). Update the validation around normalizedIp (the same area
that currently checks normalizedIp.startsWith('fe80:')) to reject the entire
fe80::/10 range (any address whose first 10 bits are 1111111010, i.e. fe80–febf)
and the fc00::/7 ULA range (fc00–fdff, but at minimum fd00::/8 per RFC4193), in
addition to ::1; use consistent prefix checks or a small CIDR-aware helper used
by the module (e.g., extend isLocalAddress / isPrivateIp logic) to return true
for those ranges so they are treated as internal and blocked.
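To make the requested ranges concrete, here is a self-contained sketch of the IPv6 portion of the check. This is a hypothetical helper for illustration, not the repo's `isBlockedIp`.

```javascript
// Blocks IPv6 loopback, the full link-local range fe80::/10
// (fe80 through febf), and unique-local fc00::/7 (fc00 through fdff).
function isBlockedIpv6(ip) {
  const n = ip.toLowerCase().trim().replace(/^\[|\]$/g, '');
  if (n === '::1' || n === '0:0:0:0:0:0:0:1') return true;
  if (/^fe[89ab][0-9a-f]:/.test(n)) return true; // fe80::/10
  if (/^f[cd][0-9a-f]{2}:/.test(n)) return true; // fc00::/7
  return false;
}
```

Note the prefix regexes operate on the uncompressed first hextet, so compressed forms like `fe80::1` still match.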
```js
describe('IPv6', () => {
  it('should block ::1 (IPv6 loopback)', () => {
    expect(isBlockedIp('::1')).toBe(true);
  });

  it('should block fe80::1 (IPv6 link-local)', () => {
    expect(isBlockedIp('fe80::1')).toBe(true);
  });

  it('should block IPv4-mapped IPv6 private addresses', () => {
    expect(isBlockedIp('::ffff:192.168.1.1')).toBe(true);
    expect(isBlockedIp('::ffff:127.0.0.1')).toBe(true);
    expect(isBlockedIp('::ffff:10.0.0.1')).toBe(true);
  });
});
```
🧹 Nitpick | 🔵 Trivial
Add IPv6 regression tests for internal ranges not currently covered.
Please add explicit deny cases for fd00::1/fc00::1 and at least one non-fe80 link-local prefix (e.g., fe90::1) to lock in SSRF protection behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/api/utils/ssrfProtection.test.js` around lines 81 - 95, Add explicit
IPv6 deny tests inside the existing "IPv6" describe block to cover unique local
and additional link-local ranges: call isBlockedIp('fd00::1') and
isBlockedIp('fc00::1') and assert they return true, and add another
link-local-style case such as isBlockedIp('fe90::1') asserting true; place these
expectations alongside the existing IPv6 tests in
tests/api/utils/ssrfProtection.test.js so the "IPv6" suite covers fd00::/8,
fc00::/8, and a non-fe80 link-local example.
Pull request overview
Copilot reviewed 21 out of 21 changed files in this pull request and generated 8 comments.
```js
// Validate URL against SSRF
try {
  validateUrlForSsrfSync(url);
} catch (err) {
  return res.status(400).json({ error: err.message });
}

if (!Array.isArray(events) || events.length === 0) {
  return res.status(400).json({ error: '"events" must be a non-empty array' });
}
```
The SSRF validation on line 150 is completely ineffective. validateUrlForSsrfSync never throws — it returns a { valid: false, error: '...' } result object on invalid URLs. The surrounding try/catch only intercepts exceptions, so the returned validation result is discarded and all blocked URLs (localhost, private IPs, internal hostnames) pass through unchecked and get saved to config.
The fix is to check the return value directly and return 400 if !result.valid, for example:
```js
const ssrfResult = validateUrlForSsrfSync(url);
if (!ssrfResult.valid) {
  return res.status(400).json({ error: ssrfResult.error });
}
```
This is a security vulnerability — webhook endpoints pointing to internal network addresses (e.g. https://169.254.169.254/latest/meta-data/) can be registered and will receive outbound HTTP requests from the bot server, constituting a Server-Side Request Forgery (SSRF) attack vector. The tests for SSRF-blocked URLs in notifications.test.js (lines 276-284) would actually fail at runtime because the blocked URLs would be accepted.
```diff
-// Validate URL against SSRF
-try {
-  validateUrlForSsrfSync(url);
-} catch (err) {
-  return res.status(400).json({ error: err.message });
-}
+// Validate URL against SSRF using synchronous validator
+const ssrfResult = validateUrlForSsrfSync(url);
+if (!ssrfResult || ssrfResult.valid === false) {
+  return res.status(400).json({ error: ssrfResult?.error || 'URL is not allowed (potential SSRF target)' });
+}

 if (!Array.isArray(events) || events.length === 0) {
   return res.status(400).json({ error: '"events" must be a non-empty array' });
 }
```
```js
 * Get the delivery log for a guild.
 *
 * @param {string} guildId - Guild ID
 * @param {number} [limit=100] - Max entries to return
```
The JSDoc comment for getDeliveryLog on line 260 says @param {number} [limit=100] (default 100) but the actual function signature on line 263 uses limit = 50 as the default. The documentation is inconsistent with the implementation.
```diff
- * @param {number} [limit=100] - Max entries to return
+ * @param {number} [limit=50] - Max entries to return
```
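Beyond the JSDoc fix, the earlier prompt also asks for sanitizing `limit` before it reaches SQL. A sketch, where the `MAX_LOG_ENTRIES` value is illustrative rather than the module's actual constant:

```javascript
// Assumed cap for illustration; the module defines its own MAX_LOG_ENTRIES.
const MAX_LOG_ENTRIES = 500;

// Coerce, fall back, and clamp before the value ever reaches SQL.
function sanitizeLimit(limit, fallback = 50) {
  const n = Number(limit);
  if (!Number.isFinite(n)) return fallback;
  return Math.min(Math.max(Math.trunc(n), 1), MAX_LOG_ENTRIES);
}
```

`getDeliveryLog` would pass `sanitizeLimit(limit)` as the query parameter, so the SQL never sees NaN or negative values.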
```js
function redactUrl(url) {
  try {
    const parsed = new URL(url);
    // Redact query string (may contain secrets)
    parsed.search = '?[REDACTED]';
    // Redact password in userinfo if present
    if (parsed.password) {
      parsed.password = '[REDACTED]';
    }
    return parsed.toString();
```
The redactUrl function sets parsed.search = '?[REDACTED]' unconditionally. When a URL has no query string (e.g. https://example.com/hook), the logged result will still include the spurious ?[REDACTED] suffix, producing noise in logs. The redaction should only be applied when a query string is actually present: check parsed.search (or parsed.searchParams.size) before replacing it.
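An origin-only redaction sidesteps both reported problems at once: no spurious `?[REDACTED]` suffix on query-less URLs, and no tokenized path segments in logs. A sketch, assuming only the origin is safe to log:

```javascript
function redactUrl(url) {
  try {
    const parsed = new URL(url);
    // Keep only the origin; path, query, fragment, and userinfo
    // may all embed provider secrets.
    return `${parsed.origin}/[REDACTED]`;
  } catch {
    // Never echo unparseable input back into logs.
    return '[INVALID URL]';
  }
}
```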
```js
// Prune old log entries for this guild (keep most recent MAX_LOG_ENTRIES)
await pool.query(
  `DELETE FROM webhook_delivery_log
   WHERE guild_id = $1
     AND id NOT IN (
       SELECT id FROM webhook_delivery_log
       WHERE guild_id = $1
       ORDER BY delivered_at DESC
       LIMIT $2
     )`,
  [guildId, MAX_LOG_ENTRIES],
);
} catch (dbErr) {
  warn('Failed to log webhook delivery', { error: dbErr.message });
}
```
The pruning query is executed on every delivery attempt (including failed retries for the same event), not just on successful delivery. For a guild with many endpoints and frequent events, this means multiple concurrent pruning queries can run simultaneously. Additionally, running a DELETE ... WHERE id NOT IN (SELECT ... LIMIT $2) on every attempt adds unnecessary database load — the pruning only needs to happen occasionally, not after every single attempt. Consider moving the prune logic to run only on success (if (result.ok)) or throttling it (e.g., only prune every N inserts).
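A counter-based throttle is one way to implement the suggestion: prune only every Nth insert rather than on every attempt. A sketch under the assumption that per-process state is acceptable (the interval value is illustrative):

```javascript
// Run the DELETE only once every PRUNE_EVERY inserts.
const PRUNE_EVERY = 25;
let insertsSincePrune = 0;

function shouldPrune() {
  insertsSincePrune += 1;
  if (insertsSincePrune >= PRUNE_EVERY) {
    insertsSincePrune = 0;
    return true;
  }
  return false;
}
```

The delivery logger would then wrap the pruning query in something like `if (result.ok && shouldPrune()) { ... }`, combining both mitigations the comment proposes.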
```js
/**
 * Get all guild IDs that have config entries (including 'global').
 * Used by webhook notifier to fire bot-level events to all guilds.
 *
 * @returns {string[]} Array of guild IDs (excluding 'global')
 */
export function getAllGuildIds() {
  return [...configCache.keys()].filter((id) => id !== 'global');
}
```
The JSDoc for getAllGuildIds says "Get all guild IDs that have config entries (including 'global')" but the implementation on line 360 explicitly filters out 'global' with .filter((id) => id !== 'global'). The docstring description is incorrect — it should say "excluding 'global'", which matches the existing @returns tag below it.
```js
 *
 * @param {string[]} guildIds - Guild IDs to notify
```
The JSDoc for startHealthDegradedCheck includes @param {string[]} guildIds - Guild IDs to notify but the function takes no parameters. This stale parameter documentation is incorrect and misleading.
```diff
  *
- * @param {string[]} guildIds - Guild IDs to notify
+ * Notifies all guilds via the webhook notifier.
```
```diff
 /**
  * Get aggregate feedback stats for a guild.
  * @param {string} guildId
  * @returns {Promise<{positive: number, negative: number, total: number, ratio: number|null}>}
  */
-export async function getFeedbackStats(guildId) {
-  const pool = getPool();
-  if (!pool) return { positive: 0, negative: 0, total: 0, ratio: null };
+export async function getFeedbackStats(guildId, pool) {
+  const resolvedPool = pool ?? getPool();
+  if (!resolvedPool) return { positive: 0, negative: 0, total: 0, ratio: null };

   try {
-    const result = await pool.query(
+    const result = await resolvedPool.query(
       `SELECT
         COUNT(*) FILTER (WHERE feedback_type = 'positive')::int AS positive,
         COUNT(*) FILTER (WHERE feedback_type = 'negative')::int AS negative,
         COUNT(*)::int AS total
       FROM ai_feedback
       WHERE guild_id = $1`,
       [guildId],
     );

     const row = result.rows[0];
     const positive = row?.positive || 0;
     const negative = row?.negative || 1;
     const total = row?.total || 0;
     const ratio = total > 0 ? Math.round((positive / total) * 100) : null;

     return { positive, negative, total, ratio };
   } catch (err) {
     logError('Failed to fetch AI feedback stats', { guildId, error: err.message });
-    return { positive: 0, negative: 0, total: 0, ratio: null };
+    throw err;
   }
 }

 /**
  * Get daily feedback trend for a guild (last N days).
  * @param {string} guildId
  * @param {number} days - Number of days to look back (default 30)
  * @returns {Promise<Array<{date: string, positive: number, negative: number}>>}
  */
-export async function getFeedbackTrend(guildId, days = 30) {
-  const pool = getPool();
-  if (!pool) return [];
+export async function getFeedbackTrend(guildId, days = 30, pool) {
+  const resolvedPool = pool ?? getPool();
+  if (!resolvedPool) return [];

   try {
-    const result = await pool.query(
+    const result = await resolvedPool.query(
       `SELECT
         DATE(created_at) AS date,
         COUNT(*) FILTER (WHERE feedback_type = 'positive')::int AS positive,
         COUNT(*) FILTER (WHERE feedback_type = 'negative')::int AS negative
       FROM ai_feedback
       WHERE guild_id = $1
         AND created_at >= NOW() - INTERVAL '1 days' * $2
       GROUP BY DATE(created_at)
       ORDER BY date ASC`,
       [guildId, days],
     );

     return result.rows.map((r) => ({
       date: r.date,
       positive: r.positive || 0,
       negative: r.negative || 0,
     }));
   } catch (err) {
     logError('Failed to fetch AI feedback trend', { guildId, days, error: err.message });
-    return [];
+    throw err;
   }
 }

 /**
  * Get recent feedback entries for a guild.
  * @param {string} guildId
  * @param {number} limit - Max entries to return (default 50)
  * @returns {Promise<Array<{id: number, messageId: string, channelId: string, userId: string, feedbackType: string, createdAt: string}>>}
  */
-export async function getRecentFeedback(guildId, limit = 50) {
-  const pool = getPool();
-  if (!pool) return [];
+export async function getRecentFeedback(guildId, limit = 50, pool) {
```
The JSDoc for getFeedbackStats, getFeedbackTrend, and getRecentFeedback were not updated to document the new pool parameter that was added to each function signature. The @returns type also doesn't reflect the new throwing behavior. This makes IntelliSense and API docs inaccurate for these functions.
```js
/**
 * Memory usage threshold (%) above which health.degraded is fired.
 * heapUsed / heapTotal > this fraction triggers the event.
 */
export const MEMORY_DEGRADED_THRESHOLD = 0.8;

/**
 * Event loop lag threshold in ms above which health.degraded is fired.
 */
export const EVENT_LOOP_LAG_THRESHOLD_MS = 100;

/**
 * Measure approximate event loop lag in milliseconds.
 * Schedules a setImmediate and measures how long it took.
 *
 * @returns {Promise<number>} Lag in milliseconds
 */
export function measureEventLoopLag() {
  return new Promise((resolve) => {
    const start = Date.now();
    setImmediate(() => resolve(Date.now() - start));
  });
}
```
The new exports measureEventLoopLag, MEMORY_DEGRADED_THRESHOLD, and EVENT_LOOP_LAG_THRESHOLD_MS added to health.js have no test coverage. The existing tests/utils/health.test.js tests the HealthMonitor class but does not cover any of the three new exports. At minimum, measureEventLoopLag should have a test verifying it returns a non-negative number.
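The minimum check suggested above could assert that the function resolves to a non-negative number. The function is reproduced below so the sketch is self-contained; in the repo the test would import it from the health module instead.

```javascript
function measureEventLoopLag() {
  return new Promise((resolve) => {
    const start = Date.now();
    // Lag is how long the event loop takes to reach the next tick.
    setImmediate(() => resolve(Date.now() - start));
  });
}
```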
```js
if (degraded) {
  fireEventAllGuilds('health.degraded', {
    memoryUsedMb: Math.round(mem.heapUsed / 1024 / 1024),
    memoryTotalMb: Math.round(mem.heapTotal / 1024 / 1024),
    memoryRatio: Math.round(memRatio * 100),
    eventLoopLagMs: lag,
  }).catch(() => {});
```
This will fire health.degraded webhook every 60 seconds while the system remains degraded, potentially overwhelming endpoints. Track degradation state (e.g., let isDegraded = false;) and only fire when transitioning from healthy→degraded, or implement a cooldown period.
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/index.js
Line: 144-150
Comment:
This will fire `health.degraded` webhook every 60 seconds while the system remains degraded, potentially overwhelming endpoints. Track degradation state (e.g., `let isDegraded = false;`) and only fire when transitioning from healthy→degraded, or implement a cooldown period.
How can I resolve this? If you propose a fix, please make it concise.

```js
const ssrfResult = validateUrlForSsrfSync(url);
if (!ssrfResult.valid) {
  return res.status(400).json({ error: ssrfResult.error });
```
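The transition tracking the comment asks for reduces to remembering the previous state and firing only on the healthy-to-degraded edge. A sketch (names are illustrative, not the repo's actual code):

```javascript
let wasDegraded = false;

// Returns true only on the healthy -> degraded transition, so the
// webhook fires once per episode instead of every 60-second tick.
function shouldFireDegraded(degraded) {
  const fire = degraded && !wasDegraded;
  wasDegraded = degraded;
  return fire;
}
```

The periodic check would call `if (shouldFireDegraded(degraded)) { fireEventAllGuilds(...) }`, and recovery resets the latch automatically.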
DNS rebinding vulnerability — validateUrlForSsrfSync doesn't perform DNS resolution. An attacker could register a domain resolving to a public IP (passing validation), then change DNS to resolve to private IPs (169.254.169.254, 10.0.0.1) before webhook fires. Use async version with DNS checks:
Current:

```js
const ssrfResult = validateUrlForSsrfSync(url);
if (!ssrfResult.valid) {
  return res.status(400).json({ error: ssrfResult.error });
```

Suggested:

```suggestion
// Validate URL against SSRF (with DNS resolution to prevent DNS rebinding)
const ssrfResult = await validateUrlForSsrf(url);
if (!ssrfResult.valid) {
```
(Review comment on src/api/routes/notifications.js, lines 149-151.)

```js
const res = await fetch(url, {
  method: 'POST',
  headers,
  body,
  signal: controller.signal,
});
```
No SSRF re-validation before fetch. Even if URL passes validation at config time, DNS rebinding can occur between configuration and delivery. An attacker can change DNS after validation to target internal endpoints (AWS metadata at 169.254.169.254, internal services). Consider validating the resolved IP before each delivery attempt or using a fetch wrapper that blocks private IPs.
(Review comment on src/modules/webhookNotifier.js, lines 90-95.)
Summary
Implements issue #130 — outbound webhook notifications for important bot events.
What's New
Core Webhook Engine (`src/modules/webhookNotifier.js`)
Event Types
- `bot.disconnected`
- `bot.reconnected`
- `bot.error`
- `moderation.action`
- `health.degraded`
- `config.changed`
- `member.flagged`

Config Schema
```json
{
  "notifications": {
    "webhooks": [
      {
        "id": "uuid-generated",
        "url": "https://your-service.com/webhook",
        "events": ["moderation.action", "member.flagged"],
        "secret": "hmac-secret",
        "enabled": true
      }
    ]
  }
}
```

API Routes (under `/api/v1/guilds/:id/notifications/`)

- `/webhooks`
- `/webhooks`
- `/webhooks/:endpointId`
- `/webhooks/:endpointId/test`
- `/deliveries`

Database Migration
`005_webhook_notifications.cjs` — Adds `webhook_delivery_log` table with guild/endpoint indexes

Tests
`tests/modules/webhookNotifier.test.js` and `tests/api/routes/notifications.test.js`

Closes #130