
feat: server config backup and restore #209

Merged
BillChirico merged 5 commits into main from feat/issue-124 on Mar 2, 2026
Conversation

@BillChirico
Collaborator

Summary

Implements config backup and restore functionality as described in #129.

Features

Config Export

  • GET /api/v1/backups/export — download current config as JSON attachment
  • Sensitive fields are redacted to [REDACTED]
  • Only SAFE_CONFIG_KEYS sections are exported

Config Import

  • POST /api/v1/backups/import — apply config from an exported JSON payload
  • Skips [REDACTED] values to preserve live secrets
  • Validates that only writable config sections are included

Backup History/Versioning

  • GET /api/v1/backups — list all backups (newest first)
  • POST /api/v1/backups — trigger a manual backup
  • GET /api/v1/backups/:id/download — download a specific backup file
  • POST /api/v1/backups/:id/restore — restore config from a specific backup
  • POST /api/v1/backups/prune — prune old backups (default: keep 7 daily + 4 weekly)
  • Path-traversal protection on all backup ID inputs
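
The path-traversal protection above can be sketched roughly as follows. This is a hedged illustration only: `BACKUP_ID_PATTERN` and `resolveBackupPath` are assumed names based on this PR's description, not the module's exact code.

```javascript
import path from 'node:path';

// Strict whitelist of the backup ID format (timestamp + 4-digit sequence).
const BACKUP_ID_PATTERN = /^backup-\d{4}-\d{2}-\d{2}T\d{2}-\d{2}-\d{2}-\d{3}-\d{4}$/;

function resolveBackupPath(dir, id) {
  if (!BACKUP_ID_PATTERN.test(id)) {
    throw new Error('Invalid backup ID');
  }
  const safeDir = path.resolve(dir);
  const filePath = path.resolve(safeDir, `${id}.json`);
  // Defense in depth: the resolved path must remain inside the backup directory.
  if (!filePath.startsWith(safeDir + path.sep)) {
    throw new Error('Invalid backup ID');
  }
  return filePath;
}
```

The regex alone rejects `..`, slashes, and null bytes; the containment check guards against any future loosening of the ID format.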

Scheduled Automatic Backups

  • Starts with bot startup alongside other schedulers
  • Runs every 24h by default, prunes old backups after each run
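
The scheduler loop described above might look like this minimal sketch; `createBackup` and `pruneBackups` are stand-ins for the module's real exports, not its exact API.

```javascript
// Hedged sketch of a 24h backup scheduler with post-run retention pruning.
const DAY_MS = 24 * 60 * 60 * 1000;

function startScheduledBackups(createBackup, pruneBackups, intervalMs = DAY_MS) {
  const timer = setInterval(() => {
    try {
      createBackup();
      pruneBackups(); // retention policy, e.g. keep 7 daily + 4 weekly
    } catch (err) {
      console.error('scheduled backup failed', err);
    }
  }, intervalMs);
  timer.unref?.(); // do not keep the process alive just for this timer
  return () => clearInterval(timer); // stop handle for graceful shutdown
}
```

Returning a stop function lets the bot's shutdown path clear the timer alongside its other schedulers.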

Tests

  • 30 unit tests for backup module
  • 18 integration tests for API routes
  • Covers auth (401/403/200), error cases (404, 400, 500), round-trip export/import

Closes #129

- Export all config sections to timestamped JSON files (secrets redacted)
- Import config from JSON payload (skips [REDACTED] values)
- Scheduled automatic backups every 24h with retention pruning
- Backup history/versioning: list, download, restore by ID
- Path-traversal protection on backup ID inputs
- API routes: GET /backups, POST /backups, GET /export, POST /import,
  GET /:id/download, POST /:id/restore, POST /prune
- Closes #129
Copilot AI review requested due to automatic review settings March 2, 2026 04:26
@coderabbitai
Contributor

coderabbitai bot commented Mar 2, 2026

Warning

Rate limit exceeded

@BillChirico has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 21 minutes and 59 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.


📥 Commits

Reviewing files that changed from the base of the PR and between 280f9d5 and 1d98179.

📒 Files selected for processing (10)
  • node_modules
  • src/api/index.js
  • src/api/middleware/requireGlobalAdmin.js
  • src/api/routes/backup.js
  • src/api/routes/config.js
  • src/index.js
  • src/modules/backup.js
  • src/utils/flattenToLeafPaths.js
  • tests/api/routes/backup.test.js
  • tests/modules/backup.test.js



Copilot AI left a comment


Pull request overview

Adds server config backup/export/import functionality, including an admin-only REST API surface and a scheduled backup job, to address recovery needs described in #129.

Changes:

  • Introduces a new backup module for config export/import, backup file management, pruning, and scheduling.
  • Adds /api/v1/backups/* endpoints for export/import, listing, download, restore, and pruning.
  • Starts/stops scheduled backups from the main bot lifecycle and adds unit/integration tests for the new behavior.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 10 comments.

Show a summary per file
File Description
src/modules/backup.js Implements config export/import and filesystem-backed backup history, pruning, and scheduling.
src/api/routes/backup.js Adds admin-guarded API routes for backup/export/import and retention pruning.
src/api/index.js Mounts the new backup router under /api/v1/backups.
src/index.js Wires scheduled backups into startup/shutdown.
tests/modules/backup.test.js Unit tests for the backup module behaviors.
tests/api/routes/backup.test.js Integration tests for backup API auth and route behavior (with module mocks).


@greptile-apps

greptile-apps bot commented Mar 2, 2026

Greptile Summary

Implements comprehensive config backup and restore functionality with scheduled backups, manual backup management, and retention policies. The implementation follows project standards with Winston logging, ESM modules, proper authentication/authorization, and comprehensive test coverage (30 unit + 18 integration tests).

Key Features:

  • Export/import with sensitive field redaction ([REDACTED] placeholder)
  • Scheduled automatic backups (24h interval with 7 daily + 4 weekly retention)
  • Manual backup creation, listing, download, and restore via API
  • Path traversal protection with strict BACKUP_ID_PATTERN validation
  • Proper security with requireAuth() + requireGlobalAdmin middleware
  • Audit logging for all backup operations

Code Quality:

  • Follows ESM conventions with node: prefix for built-ins
  • Uses Winston logger throughout (no console.*)
  • Includes prototype pollution protection (DANGEROUS_KEYS)
  • Comprehensive error handling with try/catch
  • Well-documented with JSDoc and OpenAPI specs

Notable Design Decisions:

  • Uses synchronous fs operations (acknowledged with TODO comment for future async migration)
  • Extracts flattenToLeafPaths and requireGlobalAdmin to shared utilities for reusability
  • requireGlobalAdmin supports both direct middleware usage and wrapped calls with custom resource names

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk
  • High confidence due to comprehensive test coverage (48 tests total), proper security controls (authentication, authorization, path traversal protection, sensitive data redaction), adherence to project standards (ESM, Winston logging, error handling), and well-structured implementation. The code includes both unit and integration tests covering happy paths and error cases. The use of synchronous fs operations is acknowledged with a TODO but poses no immediate risk.
  • No files require special attention

Important Files Changed

Filename Overview
src/modules/backup.js Core backup module with export/import, scheduling, and retention - uses synchronous fs operations (acknowledged in TODO)
src/api/routes/backup.js API routes for backup operations - properly secured with requireGlobalAdmin middleware
src/api/middleware/requireGlobalAdmin.js Authorization middleware restricting access to API-secret or bot-owner users - supports both 3 and 4 argument calling styles
src/utils/flattenToLeafPaths.js Utility to flatten nested objects to dot-paths - includes prototype pollution protection

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Bot Startup] --> B[startScheduledBackups]
    B --> C{Every 24h}
    C --> D[createBackup]
    D --> E[exportConfig]
    E --> F[Get config from getConfig]
    F --> G[Filter SAFE_CONFIG_KEYS]
    G --> H[sanitizeConfig - redact secrets]
    H --> I[Write to data/backups/*.json]
    I --> J[pruneBackups]
    J --> K{Apply retention policy}
    K --> L[Keep 7 daily + 4 weekly]
    L --> M[Delete old backups]
    
    N[API: GET /backups/export] --> E
    O[API: POST /backups/import] --> P[validateImportPayload]
    P --> Q[importConfig]
    Q --> R{For each config path}
    R --> S{Is REDACTED?}
    S -->|Yes| T[Skip - preserve live secret]
    S -->|No| U[setConfigValue]
    
    V[API: POST /backups/:id/restore] --> W[readBackup - validate ID pattern]
    W --> X{Path traversal check}
    X -->|Valid| Q
    X -->|Invalid| Y[Error: Invalid backup ID]
    
    Z[All API routes] --> AA[requireAuth]
    AA --> AB[requireGlobalAdmin]
    AB --> AC{authMethod?}
    AC -->|api-secret| AD[Allow]
    AC -->|oauth + bot-owner| AD
    AC -->|Other| AE[403 Forbidden]

Last reviewed commit: 1d98179

coderabbitai bot previously approved these changes Mar 2, 2026
const filename = `${id}.json`;
const filePath = path.join(dir, filename);

if (!existsSync(filePath)) {

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix (AI, 12 days ago)

To fix the issue in a robust, generally recommended way, the path derived from user input should be normalized and checked to ensure it stays within a designated root directory. In this context, the root is the backup directory returned by getBackupDir(backupDir). Even though id is already strongly validated, we can still apply the standard pattern: resolve filePath relative to dir, then verify that the resolved path starts with dir (after both are normalized/absolute).

The best minimal fix without changing existing behavior is:

  1. In readBackup (in src/modules/backup.js), keep the existing strict BACKUP_ID_PATTERN validation.
  2. After constructing filePath = path.join(dir, filename), resolve it to an absolute path, and ensure it is inside dir:
    • const resolvedDir = path.resolve(dir);
    • const resolvedFilePath = path.resolve(resolvedDir, filename);
    • If !resolvedFilePath.startsWith(resolvedDir + path.sep) and resolvedFilePath !== resolvedDir, throw an error (or treat it as "not found").
  3. Use resolvedFilePath for existsSync and readFileSync instead of filePath.

This adds a strong defense-in-depth containment check and will address all the path-based variants CodeQL is flagging, while keeping the external behavior (input format, errors, etc.) effectively the same.


Suggested changeset 1
src/modules/backup.js

Autofix patch. Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/src/modules/backup.js b/src/modules/backup.js
--- a/src/modules/backup.js
+++ b/src/modules/backup.js
@@ -290,8 +290,14 @@
 
   const dir = getBackupDir(backupDir);
   const filename = `${id}.json`;
-  const filePath = path.join(dir, filename);
+  const resolvedDir = path.resolve(dir);
+  const filePath = path.resolve(resolvedDir, filename);
 
+  // Ensure the resolved file path is contained within the backup directory
+  if (filePath !== resolvedDir && !filePath.startsWith(resolvedDir + path.sep)) {
+    throw new Error('Invalid backup path');
+  }
+
   if (!existsSync(filePath)) {
     throw new Error(`Backup not found: ${id}`);
   }
EOF
throw new Error(`Backup not found: ${id}`);
}

const raw = readFileSync(filePath, 'utf8');

Check failure

Code scanning / CodeQL

Uncontrolled data used in path expression (High)

This path depends on a user-provided value.

Copilot Autofix (AI, 12 days ago)

In general, to fix uncontrolled path usage you either (a) strictly constrain the allowed filenames/IDs with a whitelist, or (b) normalize and then verify that the resulting path remains under a trusted root directory. This code already does (a) via a strict regex for id. To both satisfy CodeQL and further harden the logic, we can additionally enforce (b): after constructing the path from the validated ID, resolve it and ensure it is still under the backup directory before reading.

Concretely, in src/modules/backup.js inside readBackup:

  1. Keep the existing strict BACKUP_ID_PATTERN check, since it is good defense-in-depth.
  2. After computing const dir = getBackupDir(backupDir); and const filename = `${id}.json`;, compute a normalized absolute path, and ensure it starts with the absolute backup directory path.
  3. Use that safe, normalized path when checking existence and when reading.
  4. Use only existing, already-imported modules (path and fs); no new dependencies are required.

This means modifying lines 291–299 of readBackup to resolve dir and filePath and add a containment check. No changes are needed to src/api/routes/backup.js because all sanitization happens in the module.


Suggested changeset 1
src/modules/backup.js

Autofix patch. Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/src/modules/backup.js b/src/modules/backup.js
--- a/src/modules/backup.js
+++ b/src/modules/backup.js
@@ -289,9 +289,15 @@
   }
 
   const dir = getBackupDir(backupDir);
+  const safeDir = path.resolve(dir);
   const filename = `${id}.json`;
-  const filePath = path.join(dir, filename);
+  const filePath = path.resolve(safeDir, filename);
 
+  // Ensure the resolved file path is within the backup directory
+  if (!filePath.startsWith(safeDir + path.sep)) {
+    throw new Error('Invalid backup ID');
+  }
+
   if (!existsSync(filePath)) {
     throw new Error(`Backup not found: ${id}`);
   }
EOF
Copilot AI review requested due to automatic review settings March 2, 2026 11:52
@BillChirico BillChirico merged commit b1bf421 into main Mar 2, 2026
9 of 12 checks passed
@BillChirico BillChirico deleted the feat/issue-124 branch March 2, 2026 11:52

Copilot AI left a comment


Pull request overview

Copilot reviewed 10 out of 10 changed files in this pull request and generated 12 comments.



const DEFAULT_BACKUP_DIR = path.join(__dirname, '..', '..', 'data', 'backups');

/** Backup file naming pattern */
const BACKUP_FILENAME_PATTERN = /^backup-(\d{4}-\d{2}-\d{2}T\d{2}-\d{2}-\d{2}-\d{3})-\d{4}\.json$/;
Copilot AI Mar 2, 2026

backupSeq grows monotonically and, once it reaches 10000, generated filenames/IDs will no longer match BACKUP_FILENAME_PATTERN / BACKUP_ID_PATTERN (both require exactly 4 digits). At that point backups become unlistable and unreadable. Consider changing the patterns to allow variable-length sequence digits (e.g., \d+) or switching from a global counter to a bounded scheme (e.g., per-timestamp counter) or a random/UUID suffix, and keep parsing/sorting consistent with the new format.

Suggested change
const BACKUP_FILENAME_PATTERN = /^backup-(\d{4}-\d{2}-\d{2}T\d{2}-\d{2}-\d{2}-\d{3})-\d{4}\.json$/;
const BACKUP_FILENAME_PATTERN = /^backup-(\d{4}-\d{2}-\d{2}T\d{2}-\d{2}-\d{2}-\d{3})-\d+\.json$/;
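
A quick sanity check of why the relaxed pattern matters; the `name` helper is a hypothetical stand-in mirroring the generator's `padStart(4, '0')` behavior.

```javascript
// Rollover problem: once the counter reaches 10000, the padded sequence is
// 5 digits and no longer matches a fixed \d{4} pattern, but \d+ still accepts it.
const FIXED = /^backup-(\d{4}-\d{2}-\d{2}T\d{2}-\d{2}-\d{2}-\d{3})-\d{4}\.json$/;
const RELAXED = /^backup-(\d{4}-\d{2}-\d{2}T\d{2}-\d{2}-\d{2}-\d{3})-\d+\.json$/;

// Hypothetical filename builder matching the scheme described in this PR.
const name = (seq) =>
  `backup-2026-03-02T04-26-00-000-${String(seq).padStart(4, '0')}.json`;
```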

// Include milliseconds for precision; append an incrementing sequence to guarantee uniqueness
// within the same millisecond (e.g. rapid test runs or burst backup triggers).
const iso = date.toISOString().replace(/:/g, '-').replace(/Z$/, '').replace(/\./, '-');
const seq = String(backupSeq++).padStart(4, '0');
Copilot AI Mar 2, 2026

backupSeq grows monotonically and, once it reaches 10000, generated filenames/IDs will no longer match BACKUP_FILENAME_PATTERN / BACKUP_ID_PATTERN (both require exactly 4 digits). At that point backups become unlistable and unreadable. Consider changing the patterns to allow variable-length sequence digits (e.g., \d+) or switching from a global counter to a bounded scheme (e.g., per-timestamp counter) or a random/UUID suffix, and keep parsing/sorting consistent with the new format.

Suggested change
const seq = String(backupSeq++).padStart(4, '0');
// Bound the sequence to 0–9999 so it always fits the 4-digit filename/ID pattern.
const seq = String(backupSeq++ % 10000).padStart(4, '0');

Comment on lines +228 to +249
function parseBackupMeta(filename, dir) {
const match = BACKUP_FILENAME_PATTERN.exec(filename);
if (!match) return null;

const filePath = path.join(dir, filename);
let size = 0;
try {
size = statSync(filePath).size;
} catch {
return null;
}

// Convert "2026-03-01T12-00-00-000" → "2026-03-01T12:00:00.000Z"
const isoStr = match[1].replace(/T(\d{2})-(\d{2})-(\d{2})-(\d{3})$/, 'T$1:$2:$3.$4Z');

return {
id: filename.replace('.json', ''),
filename,
createdAt: isoStr,
size,
};
}
Copilot AI Mar 2, 2026

The sort key only uses createdAt derived from the timestamp portion of the filename and ignores the sequence suffix. If multiple backups are created in the same millisecond, they will receive identical createdAt, and ordering becomes unstable—retention pruning may delete a “newer” backup arbitrarily. Consider including the sequence in metadata (capture it in the filename regex) and sorting by (createdAt, seq) (or simply sort by filename descending, since it embeds both timestamp+seq).

const backups = files
.map((filename) => parseBackupMeta(filename, dir))
.filter(Boolean)
.sort((a, b) => new Date(b.createdAt) - new Date(a.createdAt));
Copilot AI Mar 2, 2026

The sort key only uses createdAt derived from the timestamp portion of the filename and ignores the sequence suffix. If multiple backups are created in the same millisecond, they will receive identical createdAt, and ordering becomes unstable—retention pruning may delete a “newer” backup arbitrarily. Consider including the sequence in metadata (capture it in the filename regex) and sorting by (createdAt, seq) (or simply sort by filename descending, since it embeds both timestamp+seq).

Suggested change
.sort((a, b) => new Date(b.createdAt) - new Date(a.createdAt));
.sort((a, b) => b.filename.localeCompare(a.filename));
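
The suggested tie-break can be illustrated with a small sketch; the sample data below is invented for the example.

```javascript
// Two of these backups share the same millisecond timestamp; createdAt alone
// cannot order them, but the filename embeds both the timestamp and the sequence.
const backups = [
  { filename: 'backup-2026-03-02T04-26-00-000-0002.json', createdAt: '2026-03-02T04:26:00.000Z' },
  { filename: 'backup-2026-03-02T04-26-00-000-0003.json', createdAt: '2026-03-02T04:26:00.000Z' },
  { filename: 'backup-2026-03-02T04-26-00-000-0001.json', createdAt: '2026-03-02T04:26:00.000Z' },
];

// Descending lexicographic sort on filename yields newest-first deterministically.
backups.sort((a, b) => b.filename.localeCompare(a.filename));
```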

Comment on lines +10 to +22
export function flattenToLeafPaths(obj, prefix) {
const results = [];

for (const [key, value] of Object.entries(obj)) {
if (DANGEROUS_KEYS.has(key)) continue;
const path = `${prefix}.${key}`;

if (typeof value === 'object' && value !== null && !Array.isArray(value)) {
results.push(...flattenToLeafPaths(value, path));
} else {
results.push([path, value]);
}
}
Copilot AI Mar 2, 2026

The JSDoc says arrays are treated as leaves, but if obj itself is an array, Object.entries(obj) will iterate indices and flatten elements into prefix.0, prefix.1, etc. This also conflicts with validateImportPayload allowing section values to be arrays. Either (a) treat non-plain objects (including arrays) as leaf at the start of the function (return [[prefix, obj]]), or (b) tighten validation to disallow arrays at the section root so imports don’t produce index-based dot paths.
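
Option (a) from the comment above could look like this sketch; it is illustrative, not the PR's exact utility.

```javascript
// Keys that would enable prototype pollution if ever written back via dot paths.
const DANGEROUS_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function flattenToLeafPaths(obj, prefix) {
  // Option (a): treat non-plain values, including arrays, as leaves, even at the root.
  if (typeof obj !== 'object' || obj === null || Array.isArray(obj)) {
    return [[prefix, obj]];
  }
  const results = [];
  for (const [key, value] of Object.entries(obj)) {
    if (DANGEROUS_KEYS.has(key)) continue;
    results.push(...flattenToLeafPaths(value, `${prefix}.${key}`));
  }
  return results;
}
```

With the early return, a root-level array produces a single `[prefix, array]` leaf instead of index-based dot paths like `prefix.0`.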

Comment on lines +258 to +260
const badFile = join(tmpDir, 'backup-2020-01-01T00-00-00.json');
writeFileSync(badFile, JSON.stringify({ bad: 'payload' }), 'utf8');
expect(restoreBackup('backup-2020-01-01T00-00-00', tmpDir)).rejects.toThrow(
Copilot AI Mar 2, 2026

These tests use backup IDs/filenames that don’t match the module’s strict ID format (backup-YYYY-MM-DDTHH-mm-ss-SSS-NNNN). As written, readBackup/restoreBackup should throw Invalid backup ID before it can reach “not found”, “corrupted”, or “Invalid backup format” paths. Update the test IDs/filenames to include milliseconds + sequence, or relax the production ID validation if older formats are intended to be supported.

Suggested change
const badFile = join(tmpDir, 'backup-2020-01-01T00-00-00.json');
writeFileSync(badFile, JSON.stringify({ bad: 'payload' }), 'utf8');
expect(restoreBackup('backup-2020-01-01T00-00-00', tmpDir)).rejects.toThrow(
const badFile = join(tmpDir, 'backup-2020-01-01T00-00-00-000-0001.json');
writeFileSync(badFile, JSON.stringify({ bad: 'payload' }), 'utf8');
expect(restoreBackup('backup-2020-01-01T00-00-00-000-0001', tmpDir)).rejects.toThrow(

Comment on lines +8 to +18
export function requireGlobalAdmin(forResource, req, res, next) {
// Support both requireGlobalAdmin(req, res, next) and requireGlobalAdmin('Resource', req, res, next)
if (arguments.length === 3) {
// Called as requireGlobalAdmin(req, res, next)
// Parameters are shifted: forResource=req, req=res, res=next, next=undefined
next = res; // res parameter is actually the next function
res = req; // req parameter is actually the res object
req = forResource; // forResource is the actual req object
forResource = 'Global admin access';
} else {
forResource = forResource || 'Global admin access';
Copilot AI Mar 2, 2026

The “argument shifting” overload makes the middleware hard to reason about and easy to misuse (especially if future callers pass unexpected arity). A clearer pattern is to export a standard (req, res, next) middleware and optionally provide a small factory/wrapper (e.g., requireGlobalAdminFor('Backup access')) that returns (req,res,next)=>..., avoiding reliance on arguments.length and parameter reassignment.

Suggested change
export function requireGlobalAdmin(forResource, req, res, next) {
// Support both requireGlobalAdmin(req, res, next) and requireGlobalAdmin('Resource', req, res, next)
if (arguments.length === 3) {
// Called as requireGlobalAdmin(req, res, next)
// Parameters are shifted: forResource=req, req=res, res=next, next=undefined
next = res; // res parameter is actually the next function
res = req; // req parameter is actually the res object
req = forResource; // forResource is the actual req object
forResource = 'Global admin access';
} else {
forResource = forResource || 'Global admin access';
export function requireGlobalAdmin(forResourceOrReq, resOrReq, nextOrRes, maybeNext) {
// Support both requireGlobalAdmin(req, res, next) and requireGlobalAdmin('Resource', req, res, next)
let forResource = 'Global admin access';
let req;
let res;
let next;
if (typeof forResourceOrReq === 'string') {
// Called as requireGlobalAdmin('Resource', req, res, next)
forResource = forResourceOrReq || 'Global admin access';
req = resOrReq;
res = nextOrRes;
next = maybeNext;
} else {
// Called as requireGlobalAdmin(req, res, next)
req = forResourceOrReq;
res = resOrReq;
next = nextOrRes;

Comment on lines +187 to +189
(_req, res) => {
const backups = listBackups();
res.json(backups);
Copilot AI Mar 2, 2026

These route handlers call synchronous filesystem-based operations (listBackups, createBackup, readBackup), which will block the Node.js event loop while reading/writing files—especially noticeable on slower disks or larger backup payloads. Consider moving the backup module to fs.promises + async APIs and awaiting them in the handlers so concurrent requests remain responsive.

Suggested change
(_req, res) => {
const backups = listBackups();
res.json(backups);
async (_req, res) => {
try {
const backups = await listBackups();
return res.json(backups);
} catch (err) {
return res.status(500).json({ error: 'Failed to list backups', details: err.message });
}

Comment on lines +233 to +239
try {
const meta = createBackup();
return res.status(201).json({ id: meta.id, size: meta.size, createdAt: meta.createdAt });
} catch (err) {
return res.status(500).json({ error: 'Failed to create backup', details: err.message });
}
},
Copilot AI Mar 2, 2026

These route handlers call synchronous filesystem-based operations (listBackups, createBackup, readBackup), which will block the Node.js event loop while reading/writing files—especially noticeable on slower disks or larger backup payloads. Consider moving the backup module to fs.promises + async APIs and awaiting them in the handlers so concurrent requests remain responsive.

Comment on lines +281 to +290
(req, res) => {
const { id } = req.params;

try {
const payload = readBackup(id);
const filename = `${id}.json`;
res.setHeader('Content-Disposition', `attachment; filename="${filename}"`);
res.setHeader('Content-Type', 'application/json');
return res.json(payload);
} catch (err) {
Copilot AI Mar 2, 2026

These route handlers call synchronous filesystem-based operations (listBackups, createBackup, readBackup), which will block the Node.js event loop while reading/writing files—especially noticeable on slower disks or larger backup payloads. Consider moving the backup module to fs.promises + async APIs and awaiting them in the handlers so concurrent requests remain responsive.



Development

Successfully merging this pull request may close these issues.

feat: database backup/restore and config export/import
