fix: Replace batch requests with regular ones for Token Transfers#854

Merged
Woody4618 merged 6 commits into solana-foundation:master from hoodieshq:fix-batching-requests-limitations
Feb 27, 2026

Conversation

@C0mberry (Contributor) commented Feb 21, 2026

Description

  • Replace batch requests with one request per token to avoid RPC batching limitations

Type of change

  • Bug fix

Screenshots

[Screenshot: 2026-02-21 at 14:13:43]

Testing

  1. Open http://localhost:3000/address/2zMMhcVQEXDtdE6vsFS7S7D5oUodfJHE8vd1gnBouauv/transfers
  2. Check the console (no 429 errors)
  3. Check the Network tab (token requests are issued one by one)

Related Issues

https://hoodies-hq.slack.com/archives/C094FS7GGBZ/p1771606003845339

Checklist

  • My code follows the project's style guidelines
  • All tests pass locally and in CI
  • CI/CD checks pass
  • I have included screenshots for protocol screens (if applicable)

Additional Notes

If possible (follow-up):

  • move the RPC calls to the backend
  • add test coverage

@vercel

vercel bot commented Feb 21, 2026

@C0mberry is attempting to deploy a commit to the Solana Foundation Team on Vercel.

A member of the Team first needs to authorize it.

@greptile-apps
Contributor

greptile-apps bot commented Feb 21, 2026

Greptile Summary

Replaced batched transaction fetching (getParsedTransactions) with sequential individual requests (getParsedTransaction) to prevent RPC rate limiting (429 errors).

Key changes:

  • Removed MAX_TRANSACTION_BATCH_SIZE constant (was 10)
  • Added TRANSACTION_FETCH_DELAY_MS constant (100ms delay between requests)
  • Implemented sequential fetching with delays instead of batching
  • Added per-transaction error handling with try-catch and warning logs
  • Individual transaction failures no longer block fetching remaining transactions

Trade-offs:

  • Slower overall fetch time (~2.5s for 25 transactions vs ~200ms with batching)
  • Better resilience to rate limits and individual transaction errors
  • Improved error visibility with per-transaction warnings

Confidence Score: 3/5

  • This PR is safe to merge but introduces performance trade-offs that should be monitored
  • The change successfully addresses rate limiting issues with a straightforward solution, and includes better error handling. However, it introduces significant performance degradation (10x+ slower) which could impact user experience. The approach is a workaround rather than a proper solution - the PR author notes making calls from the backend would be better. The implementation is correct and won't cause functional issues, but the performance impact and lack of backend implementation lower confidence.
  • No files require special attention - the single changed file has clear logic and proper error handling

Important Files Changed

Filename Overview
app/providers/accounts/history.tsx Replaced batch transaction fetching with sequential individual requests to avoid rate limiting, added error handling per transaction, introduced 100ms delay between requests

Last reviewed commit: 4c391bf

@greptile-apps greptile-apps bot left a comment

1 file reviewed, 2 comments


Comment on lines 97 to 115
// Fetch transactions individually to avoid batch request limitations
for (let i = 0; i < transactionSignatures.length; i++) {
    const signature = transactionSignatures[i];

    if (i > 0) {
        await new Promise(resolve => setTimeout(resolve, TRANSACTION_FETCH_DELAY_MS));
    }

    try {
        const transaction = await connection.getParsedTransaction(signature, {
            maxSupportedTransactionVersion: 0,
        });
        if (transaction !== null) {
            transactionMap.set(signature, transaction);
        }
    } catch (error) {
        console.warn(`Failed to fetch transaction ${signature}:`, error);
    }
}
Contributor (greptile-apps bot):

Fetching 25 transactions sequentially with 100 ms delays takes ~2.5 seconds, vs ~200 ms for the previous batched approach (3 batches of 10). Consider implementing exponential backoff or making the delay configurable.
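The exponential-backoff suggestion above could be sketched roughly like this. This is a hypothetical helper, not code from the PR; `fetchWithBackoff`, the retry count, and the base delay are all assumptions for illustration.

```typescript
// Hypothetical sketch: retry a single fetch with exponential backoff instead
// of pacing every request with a fixed 100 ms delay. On failure (e.g. a 429),
// the wait doubles each attempt: 100 ms, 200 ms, 400 ms, ...
async function fetchWithBackoff<T>(
    fn: () => Promise<T>,
    maxRetries = 3,
    baseDelayMs = 100,
): Promise<T> {
    for (let attempt = 0; ; attempt++) {
        try {
            return await fn();
        } catch (error) {
            if (attempt >= maxRetries) throw error;
            // Back off before the next attempt
            await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
        }
    }
}
```

A call site could then wrap each transaction fetch, e.g. `fetchWithBackoff(() => connection.getParsedTransaction(signature, { maxSupportedTransactionVersion: 0 }))`, so only failing requests pay the delay cost.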

Contributor Author (@C0mberry):

I wanted to avoid batch requests because we have RPS (requests-per-second) limitations.

Contributor:
Addressed

    import React from 'react';

    const MAX_TRANSACTION_BATCH_SIZE = 10;
    const TRANSACTION_FETCH_DELAY_MS = 100; // Delay between individual transaction fetches to avoid rate limiting
Contributor:

issue: I'm still getting 429 errors on both localhost and Vercel. It doesn’t work at all on localhost 😔 Please test it more. We need to figure out what’s going on and why. Maybe we should enhance the UX so users know there are some errors, instead of leaving them with a broken/empty page.



@rogaldh

rogaldh commented Feb 25, 2026

I'd suggest rewriting the implementation. The current for approach just delays the entire dataset, and 429s might still occur. We could try combining p-limit, which is already in the project and allows concurrency, with Promise.allSettled to replace the batch request.

The next step would be to use Triton's RPCs with x-ratelimit headers, but we do not use a custom fetcher. That is a critical change.

We could improve that inside a separate PR btw
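The p-limit + Promise.allSettled idea above might look roughly like this. A minimal inline limiter stands in for the real p-limit package, and `fetchOne` is a placeholder for `connection.getParsedTransaction`; both names are assumptions, not the PR's actual API.

```typescript
// Minimal stand-in for p-limit: caps how many tasks run concurrently,
// queueing the rest until a slot frees up.
function createLimit(concurrency: number) {
    let active = 0;
    const queue: (() => void)[] = [];
    const next = () => {
        active--;
        queue.shift()?.();
    };
    return function limit<T>(fn: () => Promise<T>): Promise<T> {
        return new Promise<T>((resolve, reject) => {
            const run = () => {
                active++;
                fn().then(resolve, reject).finally(next);
            };
            if (active < concurrency) run();
            else queue.push(run);
        });
    };
}

// Fetch every signature with at most `concurrency` requests in flight;
// allSettled means one failed transaction doesn't block the rest.
async function fetchAllTransactions(
    signatures: string[],
    fetchOne: (sig: string) => Promise<unknown>,
    concurrency = 2,
) {
    const limit = createLimit(concurrency);
    return Promise.allSettled(signatures.map(sig => limit(() => fetchOne(sig))));
}
```

Unlike the sequential loop, this keeps a small, steady number of requests in flight rather than delaying the entire dataset, and each rejection is reported per signature in the settled results.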

@rogaldh

rogaldh commented Feb 26, 2026

The implementation has been improved: the batch RPC method getParsedTransactions is replaced with individual calls to getParsedTransaction to avoid Solana RPC rate limits and payload-size constraints. Concurrency is managed by a new fetchAll utility (defaulting to 2 concurrent requests), and duplicate in-flight fetches are prevented by a fetchOnce guard backed by a shared InFlightContext.

@Woody4618 Woody4618 merged commit d5ee71e into solana-foundation:master Feb 27, 2026
5 of 6 checks passed
@rogaldh rogaldh deleted the fix-batching-requests-limitations branch February 27, 2026 17:05