
Conversation

@mfts mfts commented Aug 14, 2025

Summary by CodeRabbit

  • Bug Fixes
    • Bulk link downloads are less likely to fail due to timeouts. The allowed processing window for bulk exports has been extended, so larger or slower batches complete more reliably under variable network or provider conditions. Users should see fewer retry prompts and improved success rates for large downloads; small downloads are unaffected.

vercel bot commented Aug 14, 2025

The latest updates on your projects.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| papermark | Ready | Preview | Comment | Aug 14, 2025 6:46pm |


coderabbitai bot commented Aug 14, 2025

Walkthrough

Increased the API route maxDuration from 180 to 300 seconds in pages/api/links/download/bulk.ts; memory remains 2048. No logic or control flow changes.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **API route config**<br>`pages/api/links/download/bulk.ts` | Updated exported config: maxDuration 180 → 300; memory unchanged at 2048. |
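The route's exported config after this change (as described in the walkthrough) would read:

```typescript
// Exported route segment config for pages/api/links/download/bulk.ts
// after this PR. Vercel reads maxDuration/memory from this export.
export const config = {
  maxDuration: 300, // raised from 180 so large bulk exports can finish
  memory: 2048,     // unchanged
};
```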

Sequence Diagram(s)

(omitted — change is a simple config update with no control-flow modifications)

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these settings in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 904be9a and ebec744.

📒 Files selected for processing (1)
  • pages/api/links/download/bulk.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • pages/api/links/download/bulk.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (javascript-typescript)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
pages/api/links/download/bulk.ts (2)

11-14: Consider making the Lambda call asynchronous to avoid long-held HTTP connections.

Keeping the request open for up to 15 minutes hurts UX, ties up concurrency, and is brittle across proxies. Prefer:

  • Fire-and-forget Lambda (InvocationType.Event) or a queue/Step Function
  • Return 202 + operationId immediately
  • Poll a status endpoint or use callbacks/webhooks to deliver the download URL when ready

This reduces timeouts and improves resilience for large bulk downloads.
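A minimal sketch of the suggested 202 + operationId flow (the in-memory job store and helper names here are hypothetical, not from the Papermark codebase; a real deployment would fire the Lambda with `InvocationType: "Event"` or enqueue to SQS, and persist status durably rather than in process memory):

```typescript
type JobStatus = "pending" | "ready" | "failed";

// Hypothetical in-memory status store; production code would use a
// database or cache shared across serverless invocations.
const jobs = new Map<string, { status: JobStatus; downloadUrl?: string }>();

// Enqueue the bulk export and return an operation id immediately (202 Accepted).
function startBulkDownload(linkIds: string[]): string {
  const operationId = `op_${Math.random().toString(36).slice(2, 10)}`;
  jobs.set(operationId, { status: "pending" });
  // Fire-and-forget: in production this would be an async Lambda invoke
  // or a queue message, not an inline call.
  void runExport(operationId, linkIds);
  return operationId;
}

// Stand-in for the long-running zip/upload work.
async function runExport(operationId: string, _linkIds: string[]): Promise<void> {
  jobs.set(operationId, {
    status: "ready",
    downloadUrl: `https://files.example.com/${operationId}.zip`, // placeholder URL
  });
}

// Status endpoint the client polls until the download URL is ready.
function getDownloadStatus(operationId: string): { status: JobStatus; downloadUrl?: string } {
  return jobs.get(operationId) ?? { status: "failed" };
}
```

The client then polls `getDownloadStatus` (or receives a webhook) instead of holding one HTTP connection open for the full export.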


11-14: Document the rationale for the 900s bump.

Future readers will benefit from knowing this aligns with Lambda’s 15-min sync ceiling and large dataroom zips.

Apply this small comment for context:

 export const config = {
+  // Allow up to 15 minutes for bulk downloads to complete (aligns with AWS Lambda's 15-min sync limit).
   maxDuration: 900,
   memory: 2048,
 };
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between fa385d3 and 904be9a.

📒 Files selected for processing (1)
  • pages/api/links/download/bulk.ts (1 hunks)

Comment on lines 11 to 14

 export const config = {
-  maxDuration: 180,
+  maxDuration: 900,
   memory: 2048,
 };

💡 Verification agent

🧩 Analysis chain

Verify 900s maxDuration is actually honored for pages/api routes and your hosting plan.

Bumping to 900s is reasonable for large zips, but many platforms cap synchronous Serverless HTTP at lower limits (often 60–300s), or only allow 900s for background jobs. If unsupported, clients will still see timeouts/504s long before Lambda returns.

Run these to cross-check consistency across routes, then confirm provider support and limits:


🏁 Script executed:

#!/bin/bash
# 1) Where else do we set maxDuration? Ensure consistency across API routes.
rg -n -A 2 -B 2 'maxDuration\s*:' --no-ignore

# 2) Check if any routes are set to edge runtime (AWS SDK won't work there).
rg -n -A 1 -B 1 'export const runtime\s*=' --no-ignore

Length of output: 4625


🌐 Web query:

For Next.js pages/api deployed on Vercel, does `export const config = { maxDuration }` apply, and is 900 seconds supported for synchronous Serverless Functions? If not, what are the supported max durations and the correct way to run up to 15-minute jobs?

💡 Result:

The export const config = { maxDuration } directive does apply to Next.js API routes (pages/api) deployed on Vercel, but the maximum value supported for synchronous Serverless Functions varies by plan and whether “fluid compute” is enabled. Running synchronous functions for 900 seconds (15 minutes) is generally only possible on Enterprise plans without fluid compute enabled, and is not supported for synchronous functions on most plans or configurations[1][8].

Key duration limits for Next.js API routes on Vercel:

| Plan | Fluid Compute | Max Duration (seconds) |
| --- | --- | --- |
| Hobby | Enabled | 300 |
| Pro | Enabled | 800 |
| Enterprise | Enabled | 800 |
| Hobby | Disabled | 60 |
| Pro | Disabled | 300 |
| Enterprise | Disabled | 900 |
  • Fluid compute is enabled by default for most projects[1][8].
  • With fluid compute enabled, Enterprise and Pro both max out at 800 seconds (13 minutes 20 seconds).
  • Only Enterprise plans with fluid compute disabled can reach 900 seconds (15 minutes)[8][1].

How to set function duration:

  • Set maxDuration in the API route file as part of the exported config:

    export const config = { maxDuration: 800 };
  • Alternatively, configure maxDuration per-route or with glob patterns in your vercel.json[1].
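For reference, the per-route `vercel.json` form mentioned above might look like the following (the path is this repo's bulk-download route; 800 is the Pro/fluid-compute ceiling from the table, not a value taken from the PR):

```json
{
  "functions": {
    "pages/api/links/download/bulk.ts": {
      "maxDuration": 800,
      "memory": 2048
    }
  }
}
```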

Running synchronous jobs up to 15 minutes:

  • Not supported for synchronous serverless functions unless you are on Enterprise, with fluid compute disabled. For all other scenarios, you are limited to 800 seconds, or less depending on plan/configuration[1][8].
  • For background tasks longer than your duration limit, use asynchronous techniques: queue the job to a persistent background worker service (for example, using queues like AWS SQS, or third-party background services), and return an initial response to the client rather than blocking for completion[9].
  • If you need to perform additional work after the response, investigate the use of waitUntil or Next.js's experimental after API in conjunction with fluid compute. This allows background processing, but is only suitable for non-blocking, fire-and-forget tasks and may still be duration-constrained[9].

Summary of best practices for long-running jobs:

  • For jobs up to 13:20 minutes: use maxDuration: 800 and Pro/Enterprise plan with fluid compute[1][8].
  • For jobs up to 15 minutes: Only use synchronous API routes on Enterprise plans with fluid compute explicitly disabled[8].
  • For any jobs requiring reliable execution over 800 seconds, or if not on Enterprise, run the task asynchronously outside the Vercel serverless execution environment.

In conclusion:
900 seconds (15 minutes) is only supported for synchronous serverless functions on Enterprise plans with fluid compute disabled[8][1]. For most use cases on Vercel, the max achievable duration for synchronous serverless (API) routes is 800 seconds on Pro/Enterprise with fluid compute enabled. To run jobs up to 15 minutes or longer, delegate work to a background worker/service so that your API route can respond within platform-mandated limits.


References:
[1]: Vercel Docs – Maximum Duration Configuration
[8]: Vercel Docs – Vercel Functions Limits
[9]: Vercel Docs – Avoiding Timeout, Fluid Compute
[10]: Vercel Docs – Configuring Functions


Action: don’t assume 900s is available — confirm your hosting plan supports 15 minutes or reduce/offload the work.

Short summary: I ran the grep and a docs check. pages/api/links/download/bulk.ts exports maxDuration: 900; no pages/api routes use edge runtime (edge runtimes are in app/*). Vercel only allows 900s for synchronous functions on Enterprise plans with fluid compute explicitly disabled — most setups (Pro/Enterprise with fluid compute enabled) max out at ~800s, and lower limits apply on Hobby. If you’re not on Enterprise+fluid-compute-disabled, clients will hit platform timeouts before Lambda returns.

Files needing attention:

  • pages/api/links/download/bulk.ts (lines ~11–14) — currently maxDuration: 900. Action: either confirm your hosting plan supports 900s, or change to 800 (safer for Vercel Pro/Enterprise) or move the long-running work to a background job/queue and return immediately.
  • lib/trigger/export-visits.ts (line ~58) — also uses maxDuration: 900; verify how this trigger runs (background/worker vs. synchronous API). If it’s a true background worker, 900 may be acceptable.
  • Note: other API routes use 180/120/60 — keep them as-is unless you need longer.

Suggested quick update (if you can’t guarantee 900s):

export const config = {
  maxDuration: 800, // use 800s unless you confirm Enterprise + fluid compute disabled
  memory: 2048,
};

If you plan to keep 900s, confirm with your infra/hosting owner that the project is on an Enterprise plan with fluid compute disabled.

🤖 Prompt for AI Agents
In pages/api/links/download/bulk.ts around lines 11–14 the exported config sets
maxDuration: 900 which may exceed the platform timeout for non-Enterprise (or
Enterprise with fluid compute enabled) Vercel plans; either confirm with your
infra/hosting owner that this project runs on Enterprise with fluid compute
disabled before keeping 900, or change the value to a safer 800 seconds and keep
memory 2048, or refactor the long-running work into a background job/queue and
return immediately; also check lib/trigger/export-visits.ts (around line 58) for
another maxDuration: 900 and apply the same decision (reduce to 800 or move to
background) as appropriate.

@mfts mfts merged commit d24064d into main Aug 14, 2025
9 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Aug 14, 2025
