fix(token): add batch keys endpoint to prevent 429 rate limit on bulk copy (#4084)
Conversation
When users batch-copy 50+ token keys, the frontend previously fired N individual `POST /api/token/:id/key` requests simultaneously. This quickly exhausted the `CriticalRateLimit` (20 req/20min per IP), causing most requests to fail with HTTP 429.

Solution: add a single `POST /api/token/batch/keys` endpoint that accepts an array of token IDs and returns all keys in one request.

Backend:
- model: add `GetTokensByIdsAndUserId` for batch query
- controller: add `BatchGetTokenKeys` handler (max 100 IDs)
- router: register `POST /api/token/batch/keys` with `CriticalRateLimit`

Frontend:
- `helpers/token.js`: add `fetchTokenKeysBatch` calling the batch endpoint
- `useTokensData.jsx`: rewrite `batchCopyTokens` to use a single batch request
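As a rough illustration of the client side described above, a `fetchTokenKeysBatch`-style helper could look like the sketch below. The endpoint path and response shape come from this PR; the `postJson` wrapper and error handling are assumptions, not the actual code in `web/src/helpers/token.js`.

```javascript
// Sketch of a batch key fetcher. `postJson` is a stand-in for the
// project's HTTP wrapper (injected here to keep the sketch testable).
async function fetchTokenKeysBatch(ids, postJson) {
  if (!Array.isArray(ids) || ids.length === 0) return {};
  const res = await postJson('/api/token/batch/keys', { ids });
  if (!res || !res.success) {
    throw new Error(res && res.message ? res.message : 'batch key fetch failed');
  }
  // Response shape per the sequence diagram:
  // { success, data: { keys: { id: key, ... } } }
  return (res.data && res.data.keys) || {};
}
```

Returning an empty map for an empty selection lets callers skip the network round trip entirely.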
**Walkthrough**

A new batch token key fetching feature is introduced across the stack. The backend adds a batch token query, a `BatchGetTokenKeys` handler, and a new route; the frontend adds a batch fetch helper and rewires `batchCopyTokens` to use it.
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant Client as Browser Client
    participant Helper as Token Helper
    participant API as API Controller
    participant DB as Database
    participant Hook as useTokensData Hook

    Hook->>Helper: fetchTokenKeysBatch([id1, id2, ...])
    Helper->>API: POST /api/token/batch/keys<br/>{ ids: [...] }
    API->>API: Validate ids (non-empty, ≤100)
    API->>DB: Query tokens WHERE<br/>id IN (...)<br/>AND user_id = ?
    DB-->>API: [Token, Token, ...]
    API-->>Helper: { success, data: {<br/>keys: { id1: key1, ... } } }
    Helper-->>Hook: { [id1]: key1, [id2]: key2, ... }
    Hook->>Hook: Merge into resolvedTokenKeys
    Hook->>Hook: Generate copy content<br/>from keysMap[token.id]
    Hook-->>Client: Copy-to-clipboard
```
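The diagram's final hook-side steps (merge the returned keys, then generate the copy content) amount to a pure function over the selection and the id→key map. A minimal sketch, mirroring the loop in `batchCopyTokens` (the standalone function name is illustrative):

```javascript
// Builds the clipboard payload from selected tokens and the id→key map.
// Tokens whose key is missing from the map are skipped rather than
// failing the whole copy.
function buildCopyContent(selectedKeys, keysMap, copyType) {
  let content = '';
  for (const token of selectedKeys) {
    const fullKey = keysMap[token.id];
    if (!fullKey) continue;
    if (copyType === 'name+key') {
      content += `${token.name} sk-${fullKey}\n`;
    } else {
      content += `sk-${fullKey}\n`;
    }
  }
  return content;
}
```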
**Estimated code review effort**: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 **Pre-merge checks**: ✅ 2 passed | ❌ 1 failed (1 warning)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
web/src/hooks/tokens/useTokensData.jsx (1)
**Lines 412-425**: ⚠️ Potential issue | 🟠 Major — Handle selections larger than 100 to avoid hard failure.
`batchCopyTokens` currently sends all selected IDs in one request, but the backend rejects payloads with more than 100 IDs. A large multi-select copy will fail entirely.

💡 Proposed fix (chunk requests by the backend limit):

```diff
-      const ids = selectedKeys.map((token) => token.id);
-      const keysMap = await fetchTokenKeysBatch(ids);
-      setResolvedTokenKeys((prev) => ({ ...prev, ...keysMap }));
+      const ids = selectedKeys.map((token) => token.id);
+      const BATCH_SIZE = 100;
+      const keysMap = {};
+      for (let i = 0; i < ids.length; i += BATCH_SIZE) {
+        const chunkIds = ids.slice(i, i + BATCH_SIZE);
+        const partial = await fetchTokenKeysBatch(chunkIds);
+        Object.assign(keysMap, partial);
+      }
+      setResolvedTokenKeys((prev) => ({ ...prev, ...keysMap }));
       let content = '';
       for (const token of selectedKeys) {
         const fullKey = keysMap[token.id];
         if (!fullKey) continue;
         if (copyType === 'name+key') {
           content += `${token.name} sk-${fullKey}\n`;
         } else {
           content += `sk-${fullKey}\n`;
         }
       }
+      if (!content) {
+        showError(t('未找到可复制的令牌密钥'));
+        return;
+      }
       await copyText(content);
```

🤖 **Prompt for AI Agents**
```
Verify each finding against the current code and only fix it if needed. In
`@web/src/hooks/tokens/useTokensData.jsx` around lines 412-425: the current
logic sends all selectedKeys' ids to fetchTokenKeysBatch at once, which fails
for >100 ids. Modify the code to chunk ids into batches of <=100, call
fetchTokenKeysBatch for each chunk (sequentially or with Promise.allSettled),
merge each returned keysMap into a single aggregate (then call
setResolvedTokenKeys(prev => ({...prev, ...aggregate}))), and build the content
from the aggregate before calling copyText. Use the existing symbols
selectedKeys, fetchTokenKeysBatch, setResolvedTokenKeys, copyType, and
copyText, and ensure errors in individual chunks are handled/ignored so the
overall copy doesn't hard-fail.
```
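The chunking the reviewer proposes can also be factored into a standalone helper. A sketch under the assumptions above (the 100-ID limit comes from the backend; `fetchChunk` is a stand-in for `fetchTokenKeysBatch`, and the function name is illustrative):

```javascript
// Splits ids into chunks of at most `size` and merges per-chunk key maps.
// Failed chunks are skipped via Promise.allSettled, so one 429 or timeout
// doesn't hard-fail the whole copy, matching the reviewer's suggestion.
async function fetchKeysChunked(ids, fetchChunk, size = 100) {
  const chunks = [];
  for (let i = 0; i < ids.length; i += size) {
    chunks.push(ids.slice(i, i + size));
  }
  const results = await Promise.allSettled(chunks.map((c) => fetchChunk(c)));
  const merged = {};
  for (const r of results) {
    if (r.status === 'fulfilled') Object.assign(merged, r.value);
  }
  return merged;
}
```

Note that `Promise.allSettled` fires all chunk requests concurrently; if that risks tripping the rate limiter again, the loop can instead `await` each chunk sequentially.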
ℹ️ **Review info** (⚙️ Run configuration)
- Configuration used: Organization UI
- Review profile: CHILL
- Plan: Pro
- Run ID: 4314b5c4-ec16-4f32-a7f3-5a439e948685
📒 **Files selected for processing (5)**
- controller/token.go
- model/token.go
- router/api-router.go
- web/src/helpers/token.js
- web/src/hooks/tokens/useTokensData.jsx
**Problem**

When users batch-copy 50+ token keys from the token management page, the frontend fires N individual `POST /api/token/:id/key` requests simultaneously via `Promise.all`. The `CriticalRateLimit` (20 req/20min per IP) rejects most of them with HTTP 429, making the batch copy feature completely broken.

**Solution**

Add a single `POST /api/token/batch/keys` endpoint that accepts an array of token IDs and returns all keys in one request, eliminating the N-request problem entirely.

**Changes (5 files, +63 / -6)**

- `model/token.go`: `GetTokensByIdsAndUserId` for batch DB query
- `controller/token.go`: `BatchGetTokenKeys` handler (max 100 IDs)
- `router/api-router.go`: `POST /api/token/batch/keys` route
- `web/src/helpers/token.js`: `fetchTokenKeysBatch` calling the batch endpoint
- `web/src/hooks/tokens/useTokensData.jsx`: `batchCopyTokens` rewritten to use a single batch request

**Security**

- `CriticalRateLimit` + `UserAuth` middleware on the new endpoint
- Batch query scoped to the requesting user (`WHERE user_id = ?`)

**Before vs After**

- Before: N × `POST /api/token/:id/key`
- After: 1 × `POST /api/token/batch/keys`
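The handler's input validation described above (non-empty list, at most 100 IDs) is simple to state. For illustration, here is that rule as a pure JavaScript function; the real handler is the Go `BatchGetTokenKeys`, so the names and error messages below are assumptions:

```javascript
// Validates a batch-keys request body per the PR's stated contract:
// `ids` must be a non-empty array with at most 100 entries.
const MAX_BATCH_IDS = 100;

function validateBatchIds(body) {
  const ids = body && body.ids;
  if (!Array.isArray(ids) || ids.length === 0) {
    return { ok: false, error: 'ids must be a non-empty array' };
  }
  if (ids.length > MAX_BATCH_IDS) {
    return { ok: false, error: `at most ${MAX_BATCH_IDS} ids per request` };
  }
  return { ok: true, ids };
}
```

Rejecting oversized payloads server-side is what makes the client-side chunking in the review comment necessary for selections above 100.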