perf(count-routes): parallelize sequential count queries with Promise.all#2805
dougrathbone wants to merge 2 commits into seerr-team:develop
Conversation
Replace sequential await chains in GET /issue/count and GET /request/count with Promise.all() so all independent count queries execute concurrently, reducing N sequential DB round trips to a single parallel batch. Add integration test suites for both endpoints.
📝 Walkthrough: Added integration tests for the count endpoints.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Suggested reviewers
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
server/routes/request.ts (1)
365-386: Consider extracting the duplicated approved+media-status count query shape. Both `processing` and `available` counts repeat the same join/base filter and only differ by operator; a tiny helper will reduce drift risk later.
♻️ Optional refactor sketch
```diff
+const countApprovedByAvailability = (isAvailable: boolean) =>
+  requestRepository
+    .createQueryBuilder('request')
+    .innerJoin('request.media', 'media')
+    .where('request.status = :status', {
+      status: MediaRequestStatus.APPROVED,
+    })
+    .andWhere(
+      isAvailable
+        ? '(request.is4k = false AND media.status = :available) OR (request.is4k = true AND media.status4k = :available)'
+        : '(request.is4k = false AND media.status != :available) OR (request.is4k = true AND media.status4k != :available)',
+      { available: MediaStatus.AVAILABLE }
+    )
+    .getCount();
 ...
-  requestRepository
-    .createQueryBuilder('request')
-    ...
-    .getCount(),
-  requestRepository
-    .createQueryBuilder('request')
-    ...
-    .getCount(),
+  countApprovedByAvailability(false),
+  countApprovedByAvailability(true),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/routes/request.ts` around lines 365 - 386, The two count queries duplicate the same base join/filter (requestRepository.createQueryBuilder('request') with .innerJoin('request.media','media') and .where('request.status = :status', { status: MediaRequestStatus.APPROVED }) ) and only differ in the media-status predicate (the AND clause checking request.is4k vs media.status/media.status4k against MediaStatus.AVAILABLE); extract a small helper (e.g., buildApprovedMediaStatusQuery or approvedMediaStatusCountQuery) that returns a QueryBuilder or accepts a boolean/operator to apply either the "!= :available" or "= :available" predicate and then call .getCount() on its result to replace both occurrences, keeping parameter name MediaStatus.AVAILABLE and the request.is4k/media.status4k checks intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/routes/request.count.test.ts`:
- Around line 83-99: The tests around createMediaRequest and the /request/count
suite don’t assert non-zero counts or verify 4K-specific logic; update the tests
that call createMediaRequest (and the ones in the 142-181 block) to create
explicit requests for both is4k=true and is4k=false with differing
MediaRequestStatus values (e.g., PROCESSING, AVAILABLE), then call the
/request/count endpoint and assert that the returned processing and available
counts for both regular and 4K buckets are > 0 (and that the 4K counts reflect
requests where MediaRequest.is4k was set), ensuring the path that produces
status4k/status for 4K media is exercised and verified.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c1e74b2d-6d3c-445c-90d9-0d7b4329e1d8
📒 Files selected for processing (4)
- server/routes/issue.test.ts
- server/routes/issue.ts
- server/routes/request.count.test.ts
- server/routes/request.ts
- Add is4k parameter to createMediaRequest helper (defaults false)
- Add test 'counts processing and available correctly for HD and 4K
requests' -- seeds APPROVED requests against media with PROCESSING
and AVAILABLE status on both the standard and 4K status columns,
then asserts the /request/count endpoint returns the correct split
- Extract approvedMediaStatusCount(isAvailable) helper in the route
handler to eliminate the duplicated QueryBuilder setup for the
processing/available counts
- Restore settings.main.localLogin in a finally block in authenticatedAgent
- Explicitly set cookie: { secure: false } on the test session
Fixed in the latest commit:
🧹 Nitpick comments (1)
server/routes/request.count.test.ts (1)
1-2: Use a non-literal session secret in tests to reduce secret-scanner noise. Hard-coded literal secrets in tests invite security scanners to flag them and risk accidental copy/paste into production code. This pattern appears in multiple test files; consider using an environment variable or random value.
♻️ Suggested change
```diff
+import { randomBytes } from 'node:crypto';
 import assert from 'node:assert/strict';
 import { before, describe, it } from 'node:test';

 describe('GET /request/count', () => {
   let app: Express;

   before(async () => {
     app.use(
       session({
-        secret: 'test-secret',
+        secret:
+          process.env.TEST_SESSION_SECRET ?? randomBytes(32).toString('hex'),
         resave: false,
         saveUninitialized: false,
         cookie: { secure: false },
       })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/routes/request.count.test.ts` around lines 1 - 2, The test is using a hard-coded session secret literal which triggers secret scanners; update the test to use a non-literal secret by setting process.env.TEST_SESSION_SECRET (or similar) in the before() hook and falling back to a generated value (e.g., use crypto.randomBytes(...).toString('hex')) for the session secret used by the app/session middleware; ensure you import node:crypto if generating and replace any hard-coded string occurrences (sessionSecret, SESSION_SECRET, or where you pass the secret into your session setup) to read from process.env.TEST_SESSION_SECRET so the test no longer contains a literal secret.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3701d403-2d98-4b66-901f-09f7e825dcc1
📒 Files selected for processing (2)
- server/routes/request.count.test.ts
- server/routes/request.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- server/routes/request.ts
```ts
async function authenticatedAgent(email: string, password: string) {
  const agent = request.agent(app);
  const settings = getSettings();
  settings.main.localLogin = true;

  const res = await agent.post('/auth/local').send({ email, password });
  assert.strictEqual(res.status, 200);
  return agent;
}
```
same issue here as i flagged in your other PR.
```diff
 try {
-  const query = requestRepository
+const approvedMediaStatusCount = (isAvailable: boolean) =>
```
This helper builds a QueryBuilder but lives outside the try block. I would recommend moving it inside so any unexpected throw is caught by the shared error handler.
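A hedged illustration of the point, with names and control flow simplified stand-ins for the actual route handler:

```typescript
// If the helper is created and awaited inside the try, a throw anywhere in
// its construction or execution reaches the shared catch; defined outside,
// a synchronous throw during setup would escape the handler entirely.
async function handler(
  next: (err: Error) => void
): Promise<{ processing: number } | undefined> {
  try {
    const approvedMediaStatusCount = (
      isAvailable: boolean
    ): Promise<number> => {
      if (isAvailable) {
        throw new Error('simulated query failure'); // stand-in for a DB error
      }
      return Promise.resolve(0);
    };
    const processing = await approvedMediaStatusCount(true);
    return { processing };
  } catch (e) {
    next(e as Error); // the shared error handler sees the failure
    return undefined;
  }
}
```

Calling `handler(next)` here routes the simulated failure into `next` instead of leaving it as an unhandled rejection.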
This should be request.test.ts; the /count route is part of the request route, and splitting test files per endpoint will get unwieldy.
Description
Both `GET /issue/count` and `GET /request/count` shared the same performance problem: they created a single mutable TypeORM `QueryBuilder` instance and called `.where(...).getCount()` on it in sequence. Each call awaited the previous one before starting, meaning 7 and 9 sequential database round trips respectively for what are entirely independent queries. The issue count handler ran these one after another:
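The sequential shape can be sketched with plain async stand-ins; the statuses and timing below are illustrative, not the real query list:

```typescript
// Stand-in for one independent count query: one simulated DB round trip.
const countWhere = async (status: string): Promise<number> => {
  await new Promise((resolve) => setTimeout(resolve, 10));
  return status.length; // placeholder result
};

// Each await blocks the next query from starting, so latency adds up
// linearly with the number of counts.
export async function sequentialIssueCounts() {
  const open = await countWhere('OPEN'); // round trip 1
  const resolved = await countWhere('RESOLVED'); // round trip 2
  const video = await countWhere('VIDEO'); // round trip 3
  return { open, resolved, video, total: open + resolved + video };
}
```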
The request count handler had the same pattern across 9 counts, with an unnecessary `innerJoinAndSelect` on the base query (loading the full `media` entity) even for counts that only filter on `request.*` columns.

Fix: Replace both handlers with `Promise.all()` over independent `repository.count({ where: ... })` calls. The two request counts that genuinely need a join (`processing` and `available`, which filter on `media.status`/`media.status4k`) each use a shared `approvedMediaStatusCount(isAvailable)` helper to eliminate the duplicated QueryBuilder setup.

AI Disclosure: I used Claude Code to help write the tests and validate the approach. I reviewed and understood all generated code before submitting.
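A rough sketch of that fixed shape, using plain async stand-ins rather than the real TypeORM repository; the helper name follows the PR description, everything else is illustrative:

```typescript
// Stand-in for repository.count({ where: ... }): one simulated round trip.
const countWhere = async (status: string): Promise<number> => {
  await new Promise((resolve) => setTimeout(resolve, 10));
  return status.length; // placeholder result
};

// Stand-in for the shared join-based helper described in the PR.
const approvedMediaStatusCount = (isAvailable: boolean): Promise<number> =>
  countWhere(isAvailable ? 'AVAILABLE' : 'PROCESSING');

// All independent counts start together; total latency is roughly the
// slowest single query instead of the sum of all of them.
export async function parallelRequestCounts() {
  const [pending, approved, processing, available] = await Promise.all([
    countWhere('PENDING'),
    countWhere('APPROVED'),
    approvedMediaStatusCount(false),
    approvedMediaStatusCount(true),
  ]);
  return { pending, approved, processing, available };
}
```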
Performance improvement
`GET /issue/count`, `GET /request/count`: the gain is most visible on PostgreSQL, where each query has non-trivial round-trip latency. On SQLite (single connection, serialised writes) the queries still execute in order, but the code is clearer and the unnecessary JOIN is eliminated regardless.
Both endpoints are called on page load for the admin dashboard and issue management pages, so the improvement is user-visible on every navigation to those pages.
How Has This Been Tested?
Integration tests using the real SQLite test database via `setupTestDb()`:
- `server/routes/issue.test.ts` (3 tests)
- `server/routes/request.count.test.ts` (4 tests, covering both `is4k = false` and `is4k = true`)

To run:
Screenshots / Logs (if applicable)
N/A -- backend-only change with no UI impact.
Checklist:
- pnpm build
- pnpm i18n:extract (no new UI strings added)

Summary by CodeRabbit
Tests
Refactor