
Conversation


@gaby gaby commented Sep 28, 2025

Summary

  • Add support for using sync.Pool across the code base
  • Validate that objects put into the pools are properly released
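
The pooling changes follow the standard sync.Pool acquire/release idiom. A minimal sketch of that pattern (pool and helper names here are illustrative, not the ones added in this PR):

```go
import "sync"

// bufPool is a hypothetical pool of reusable byte slices.
// Storing a pointer avoids an extra allocation on every Put.
var bufPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, 256)
		return &b
	},
}

func acquireBuf() *[]byte {
	return bufPool.Get().(*[]byte)
}

func releaseBuf(b *[]byte) {
	*b = (*b)[:0] // reset length, keep capacity for reuse
	bufPool.Put(b)
}
```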


coderabbitai bot commented Sep 28, 2025

Important

Review skipped

Draft detected.


Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.


@gaby gaby added this to v3 Sep 28, 2025
@gaby gaby added this to the v3 milestone Sep 28, 2025

gaby commented Sep 28, 2025

/gemini review

@gaby gaby changed the title from "Pool service state key slices" to "⚡ perf: Improve usage of sync.Pool" Sep 28, 2025

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces sync.Pool across a wide range of the codebase to reduce allocations and improve performance. The changes are well-implemented, with pooling added for various data structures like maps, slices, and request/response-related objects. The acquire and release patterns are correctly applied, including logic to handle oversized objects to prevent memory bloat in the pools. Several areas have been refactored to be more allocation-friendly, such as string building and header processing. Additionally, some existing bugs, like a logic error in sorting Accept headers and potential resource leaks in CookieJar, have been fixed. I've found a minor redundancy in one of the new pooling implementations, which I've commented on. Overall, this is a solid performance improvement.

Comment on lines +132 to +134
if used > 0 {
	clear(m)
}

medium

The map is cleared here before being put back into the pool, but it's also cleared in acquireStringSliceMap when retrieved from the pool. While this doesn't cause any harm, it's redundant. To improve clarity and have a single point of responsibility for resetting the object's state, it's conventional to clear/reset objects on acquisition from a pool.
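
For illustration, clearing only on acquisition would look roughly like this (the names mirror the helpers discussed in this thread, but the bodies are a sketch, not the PR's code):

```go
import "sync"

const stringSliceMapDefaultCap = 8 // illustrative value

var stringSliceMapPool = sync.Pool{
	New: func() any { return make(map[string][]string, stringSliceMapDefaultCap) },
}

func acquireStringSliceMap() map[string][]string {
	m := stringSliceMapPool.Get().(map[string][]string)
	clear(m) // single point of responsibility: reset state on acquire
	return m
}

func releaseStringSliceMap(m map[string][]string) {
	stringSliceMapPool.Put(m) // no clear needed here
}
```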

Comment on lines +160 to +162
if used > 0 {
	clear(m)
}

medium

Similar to releaseStringSliceMap, the map is cleared here before being put back into the pool, but it's also cleared in acquireFileHeaderSliceMap. This is redundant. It's better to only clear the map on acquisition.

Comment on lines +597 to +599

*bufPtr = buf
return buf, bufPtr, nil

medium

The logic for handling oversized buffers in the pool is correct, but it could be slightly simplified. Instead of creating a new copied slice, you can return buf directly since it's a new allocation and not part of the pool. The bufPtr is correctly released.

Suggested change
-	*bufPtr = buf
-	return buf, bufPtr, nil
+	return buf, nil, nil
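
The surrounding pattern is an acquire function that falls back to a fresh allocation when the requested size exceeds what the pooled buffer can hold; the fresh buffer comes back with a nil pool handle, so the caller's release is a no-op for it and large buffers never enter the pool. A sketch with hypothetical names (the PR's actual helper differs in its details):

```go
import "sync"

var bufferPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, 4096) // illustrative default capacity
		return &b
	},
}

// acquireBuffer returns a buffer of at least size bytes plus the pool handle
// to release later. Oversized requests get a one-off allocation and a nil handle.
func acquireBuffer(size int) ([]byte, *[]byte, error) {
	bufPtr := bufferPool.Get().(*[]byte)
	if cap(*bufPtr) < size {
		bufferPool.Put(bufPtr) // hand the small pooled buffer straight back
		return make([]byte, size), nil, nil
	}
	return (*bufPtr)[:size], bufPtr, nil
}
```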

Comment on lines +781 to +787
for i := range r.files {
	if f := r.files[i]; f != nil {
		ReleaseFile(f)
		r.files[i] = nil
	}
}
r.files = r.files[:0]

medium

The previous implementation for resetting r.files was inefficient as it caused re-slicing in a loop. This new implementation is much more efficient by iterating over the slice, releasing each file, and then resetting the slice length to zero. This is a great improvement.
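
(Worth noting: nilling each element before truncating also avoids leaving stale pointers to the released files in the slice's backing array, which would otherwise remain reachable after those objects are handed back to their pool.)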

Comment on lines +166 to 176
for i := range r.cookie {
	if c := r.cookie[i]; c != nil {
		fasthttp.ReleaseCookie(c)
		r.cookie[i] = nil
	}
}
if cap(r.cookie) > responseCookieSliceMaxCap {
	r.cookie = make([]*fasthttp.Cookie, 0, responseCookieSliceDefaultCap)
} else {
	r.cookie = r.cookie[:0]
}

medium

The previous implementation for resetting r.cookie was inefficient. This new implementation is much better. It correctly releases each cookie, nils the pointers, and then resizes the slice or creates a new smaller one if the capacity grew too large. This is a good memory optimization.
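
(The capacity cap is the usual guard against pool bloat: without it, a single response that accumulated an unusually large cookie slice would pin that oversized backing array for as long as the pooled object lives; slices that grew past responseCookieSliceMaxCap are therefore dropped and replaced with a small default-capacity slice.)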

Comment on lines 159 to 213
 TagReqHeaders: func(output Buffer, c fiber.Ctx, _ *Data, _ string) (int, error) {
-	out := make(map[string][]string, 0)
-	if err := c.Bind().Header(&out); err != nil {
+	headerMapPtr := acquireReqHeaderMap()
+	headers := *headerMapPtr
+	defer func() {
+		*headerMapPtr = headers
+		releaseReqHeaderMap(headerMapPtr)
+	}()
+
+	if err := c.Bind().Header(&headers); err != nil {
 		return 0, err
 	}
 
-	reqHeaders := make([]string, 0)
-	for k, v := range out {
-		reqHeaders = append(reqHeaders, k+"="+strings.Join(v, ","))
+	var (
+		written   int
+		firstPair = true
+	)
+
+	for key, values := range headers {
+		if !firstPair {
+			if err := output.WriteByte('&'); err != nil {
+				return written, err
+			}
+			written++
+		}
+		firstPair = false
+
+		n, err := output.WriteString(key)
+		written += n
+		if err != nil {
+			return written, err
+		}
+
+		if err := output.WriteByte('='); err != nil {
+			return written, err
+		}
+		written++
+
+		for i, value := range values {
+			if i > 0 {
+				if err := output.WriteByte(','); err != nil {
+					return written, err
+				}
+				written++
+			}
+
+			n, err = output.WriteString(value)
+			written += n
+			if err != nil {
+				return written, err
+			}
+		}
 	}
-	return output.Write([]byte(strings.Join(reqHeaders, "&")))
+
+	return written, nil
 },

medium

The refactoring of TagReqHeaders is a significant performance improvement. The previous implementation involved multiple allocations for a map and intermediate string slices. The new version uses a pooled map and writes directly to the output buffer, which is much more efficient.
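
acquireReqHeaderMap and releaseReqHeaderMap themselves are not shown in this hunk. A plausible shape, assuming a package-level pool of *map[string][]string (a sketch, not the PR's code):

```go
import "sync"

var reqHeaderMapPool = sync.Pool{
	New: func() any {
		m := make(map[string][]string, 16) // illustrative initial size
		return &m
	},
}

func acquireReqHeaderMap() *map[string][]string {
	mp := reqHeaderMapPool.Get().(*map[string][]string)
	clear(*mp) // hand out an empty map
	return mp
}

func releaseReqHeaderMap(mp *map[string][]string) {
	reqHeaderMapPool.Put(mp)
}
```

The deferred `*headerMapPtr = headers` write-back before release then keeps the pooled pointer in sync with the map that was actually used, even if a later refactor reassigns the local headers variable.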


codecov bot commented Sep 28, 2025

Codecov Report

❌ Patch coverage is 72.54237% with 405 lines in your changes missing coverage. Please review.
✅ Project coverage is 89.43%. Comparing base (b25339d) to head (98f436a).
⚠️ Report is 2 commits behind head on main.

Files with missing lines Patch % Lines
helpers.go 69.84% 50 Missing and 26 partials ⚠️
middleware/logger/tags.go 57.31% 25 Missing and 10 partials ⚠️
binder/binder.go 46.15% 18 Missing and 10 partials ⚠️
redirect.go 67.53% 15 Missing and 10 partials ⚠️
res.go 72.82% 14 Missing and 11 partials ⚠️
client/cookiejar.go 77.01% 13 Missing and 7 partials ⚠️
client/request.go 80.95% 11 Missing and 9 partials ⚠️
state.go 65.21% 9 Missing and 7 partials ⚠️
req.go 85.71% 9 Missing and 6 partials ⚠️
middleware/idempotency/response.go 67.44% 9 Missing and 5 partials ⚠️
... and 17 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3767      +/-   ##
==========================================
- Coverage   91.70%   89.43%   -2.27%     
==========================================
  Files         113      114       +1     
  Lines       11940    13177    +1237     
==========================================
+ Hits        10949    11785     +836     
- Misses        728      976     +248     
- Partials      263      416     +153     
Flag Coverage Δ
unittests 89.43% <72.54%> (-2.27%) ⬇️

Flags with carried forward coverage won't be shown.



@github-actions github-actions bot left a comment

⚠️ Performance Alert ⚠️

A possible performance regression was detected for this benchmark.
The benchmark result of this commit is worse than the previous result by more than the alert threshold of 1.50.

Benchmark suite: Benchmark_Ctx_GetRespHeaders (Current: 98f436a, Previous: a3c54bb)
  Current:  994.9 ns/op, 835 B/op, 11 allocs/op
  Previous: 372.1 ns/op, 448 B/op, 5 allocs/op
  Ratio:    2.67 (ns/op), 1.86 (B/op), 2.20 (allocs/op)

This comment was automatically generated by workflow using github-action-benchmark.

@gaby gaby requested a review from Copilot October 5, 2025 13:09

gaby commented Oct 5, 2025

/gemini review


Copilot AI left a comment

Pull Request Overview

This PR implements extensive sync.Pool usage improvements across the codebase to reduce allocations and improve memory management. The changes introduce object pooling for various frequently allocated data structures such as slices, maps, and buffers throughout the application.

Key changes include:

  • Addition of sync.Pool implementations for commonly allocated data structures like string slices, header maps, and temporary buffers
  • Enhanced pool management with capacity limits and proper cleanup procedures
  • Validation of pool object releases to ensure proper state reset

Reviewed Changes

Copilot reviewed 49 out of 49 changed files in this pull request and generated 3 comments.

File Description
state.go Adds sync.Pool for service key slices with proper acquire/release functions
state_test.go Updates tests to use pooled slice management for service keys
services.go Implements sync.Pool for error slices in service operations
router.go Adds pooling for removed route sets in route deletion operations
res.go Introduces complex pooling system for response headers with scratch management
req.go Implements pooling for IP slices and encoding order slices
redirect.go Adds pools for redirect message buffers and input maps
path.go Implements pooling for route segments and parser components
helpers.go Extensive pooling additions for accepted types, header values, and language tags
middleware/*/*.go Various middleware-specific pooling implementations for their data structures
client/*.go Client-side pooling for requests, responses, cookies, and core objects
binder/*.go Pooling for binding operations using string slice and file header maps

return tagsPtr, tags
}

func releaseLanguageTagSlice(tagsPtr *([]string)) {

Copilot AI Oct 5, 2025

Invalid syntax for pointer to slice type. Should be *[]string instead of *([]string).

Suggested change
func releaseLanguageTagSlice(tagsPtr *([]string)) {
func releaseLanguageTagSlice(tagsPtr *[]string) {

Comment on lines +265 to +268
select {
case <-ch:
default:
}

Copilot AI Oct 5, 2025

This select statement attempts to drain the channel but may cause a deadlock if the channel is unbuffered and no goroutine is sending. Consider using a non-blocking receive or ensure the channel is buffered.

return
}

clear(buf)

Copilot AI Oct 5, 2025

Using clear() on a byte slice sets all elements to zero, but the slice length remains unchanged. This should be buf = buf[:0] to reset the length properly, or handle the clearing in releaseEncryptCookieBuffer.

Suggested change
-	clear(buf)
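
For context on the distinction between the two resets (a standalone illustration, not the middleware's code):

```go
package main

import "fmt"

func main() {
	buf := []byte("secret")
	clear(buf)            // zeroes the bytes; len(buf) is still 6
	fmt.Println(len(buf)) // 6
	buf = buf[:0]         // resets the length; capacity is kept for reuse
	fmt.Println(len(buf)) // 0
}
```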


@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces significant performance improvements by leveraging sync.Pool across various parts of the codebase to reduce allocations. The changes are extensive and well-implemented, with new pooling mechanisms for maps, slices, and various structs. The addition of corresponding tests for the new pooling logic is also a great practice.

I've found one critical issue in client/cookiejar.go that could lead to a panic, and a couple of minor code quality suggestions in binder/binder.go and client/request.go.

Overall, this is a great contribution to improving the framework's performance.

Comment on lines +186 to +191
var matches []*fasthttp.Cookie
if matchesPtr != nil {
	matches = *matchesPtr
}

matches = matches[:0]

critical

This logic can cause a panic. If collectCookiesForRequest is called with matchesPtr == nil (which getByHostAndPath does), matches will be nil. Then, matches = matches[:0] on line 191 will cause a panic.

You should ensure matches is not nil before slicing, or change the logic to handle the nil case. The suggested change correctly resets the slice from the pool if available, or starts with a nil slice that append can handle safely.

Suggested change
-	var matches []*fasthttp.Cookie
-	if matchesPtr != nil {
-		matches = *matchesPtr
-	}
-	matches = matches[:0]
+	var matches []*fasthttp.Cookie
+	if matchesPtr != nil {
+		matches = (*matchesPtr)[:0]
+	}

Comment on lines +118 to +120
if m == nil {
	return make(map[string][]string, stringSliceMapDefaultCap)
}

medium

This if m == nil check appears to be redundant. The stringSliceMapPool is initialized with a New function that always returns a non-nil map[string][]string. Therefore, stringSliceMapPool.Get() will not return nil, and this code block is unreachable.
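
(Background: sync.Pool.Get only yields nil when the pool has no New function and nothing has been Put; with a New that always returns a freshly made map, Get never returns nil, so this branch is indeed unreachable.)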

Comment on lines +146 to +148
if m == nil {
	return make(map[string][]*multipart.FileHeader, fileHeaderSliceMapDefaultCap)
}

medium

Similar to acquireStringSliceMap, this if m == nil check is redundant because fileHeaderSliceMapPool.New always returns a non-nil map, making this block unreachable.

keysPtr, keys := acquireHeaderKeySlice(len(peekKeys))
if len(peekKeys) > 0 {
	keys = keys[:len(peekKeys)]
	copy(keys, peekKeys) // It is necessary to have immutable byte slice.

medium

The comment // It is necessary to have immutable byte slice. is correct, but the implementation copy(keys, peekKeys) does not achieve this. It only performs a shallow copy of the slice headers. The underlying byte arrays of peekKeys are not copied, so the keys slice still contains references to fasthttp's internal buffers, which can be modified. This makes subsequent use of utils.UnsafeString unsafe if the buffers are modified.

While this might be an intentional performance trade-off, the comment is misleading as it suggests the data is being made immutable. Consider either performing a deep copy for safety or updating the comment to reflect the unsafe nature of this operation.
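
A deep copy, for comparison, would have to duplicate each key's bytes rather than just the slice headers. A sketch of such a helper (illustrative, not part of the PR):

```go
// deepCopyKeys copies every key's bytes so the result no longer aliases
// fasthttp's internal buffers and stays valid if those buffers are reused.
func deepCopyKeys(dst, src [][]byte) [][]byte {
	dst = dst[:0]
	for _, k := range src {
		dst = append(dst, append([]byte(nil), k...))
	}
	return dst
}
```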
