
Fix audio buffer memory leak: use in-place del instead of slice-copy#228

Open
aliev wants to merge 2 commits into main from fix/audio-track-buffer-leak

Conversation

@aliev (Member) commented Mar 24, 2026

Why

pymalloc allocates memory for small objects in large blocks (1MB arenas) to reduce syscalls. An arena can only be returned to the OS when all objects inside it are dead.

AudioStreamTrack.recv() runs every 20ms (50×/sec) and slices the buffer:

```python
self._buffer = self._buffer[self._bytes_per_frame:]
```

1. A new bytearray is allocated (new memory from the arena)
2. The remaining data is copied into it
3. The old bytearray's refcount drops to 0 and it is freed

But "freed" means freed inside pymalloc, not returned to the OS. The old object lived in an arena alongside other objects. If any of those neighbors are still alive, the entire arena (1MB) stays reserved.

At 50 allocations/sec, arenas become fragmented: each one has a mix of live and dead blocks, so none of them can be fully released. RSS grows with every call and never comes back.

```
Arena (1MB)
├── Pool 1: [free][free][free][free]     <- all free
├── Pool 2: [free][LIVE][free][free]     <- 1 live object
├── Pool 3: [free][free][free][free]     <- all free
└── Pool 4: [free][free][LIVE][free]     <- 1 live object
```

Result: 64 bytes of live data, 1MB stuck in RSS.
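The allocation churn is easy to reproduce outside the library; a minimal standalone sketch (the buffer sizes are illustrative, not taken from the codebase):

```python
# Simulate the old recv() pattern: slicing a bytearray builds a brand-new
# object each call, so every 20ms frame costs an allocation plus a copy.
buf = bytearray(96_000)        # ~1s of 48kHz mono s16 audio
bytes_per_frame = 1920         # 20ms of 48kHz mono s16

for _ in range(10):
    # slice-copy: allocates a new bytearray and copies the tail into it
    buf = buf[bytes_per_frame:]

print(len(buf))  # 96_000 - 10 * 1920 = 76_800
```

At 50 calls per second, each of those throwaway bytearrays lands somewhere in a pymalloc arena, which is what drives the fragmentation described above.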

The fix uses in-place deletion:

```python
del self._buffer[:self._bytes_per_frame]
```

This calls memmove to shift data within the same buffer. No new object, no malloc, no GC, no arena fragmentation.
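A quick standalone check of that claim (plain bytearray semantics, not the library code):

```python
# del on a bytearray prefix shifts the remaining bytes down inside the
# same object (memmove under the hood); object identity is preserved.
buf = bytearray(b"abcdef" * 320)   # 1920 bytes, one 20ms frame of s16
buf_id = id(buf)

del buf[:1914]                     # consume everything but the last 6 bytes

assert id(buf) == buf_id           # same object, mutated in place
assert buf == bytearray(b"abcdef") # tail survives byte-for-byte
```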

Changes

  • Replace `self._buffer = self._buffer[n:]` with `del self._buffer[:n]` in two places (lines 133 and 195)

Memray profiling results (single call, 48kHz mono s16)

```
Before: audio_track.py:195 (recv) -> 900MB  (#1 allocator by size)
After:  audio_track.py            -> not in top 5
```
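The memray figures above come from the author's profiling run. For a rough stdlib-only sanity check of the same comparison, `tracemalloc` shows the peak-allocation difference between the two strategies (the frame and call counts here are illustrative):

```python
import tracemalloc

FRAME = 1920    # 20ms of 48kHz mono s16
CALLS = 500     # 10 seconds' worth of simulated recv() calls

def slice_copy() -> None:
    buf = bytearray(CALLS * FRAME)
    for _ in range(CALLS):
        buf = buf[FRAME:]      # new bytearray per call

def in_place_del() -> None:
    buf = bytearray(CALLS * FRAME)
    for _ in range(CALLS):
        del buf[:FRAME]        # mutates the existing bytearray

for fn in (slice_copy, in_place_del):
    tracemalloc.start()
    fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{fn.__name__}: peak ~{peak // 1024} KiB")
```

Note that tracemalloc measures Python-level allocations, not RSS; the arena-retention effect the PR describes sits on top of this.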

Summary by CodeRabbit

  • Refactor
    • Optimized audio buffer memory management to improve efficiency by using in-place operations instead of allocating new buffers during overflow and consumption.
  • Tests
    • Added test coverage to verify audio buffer memory behavior and integrity during data consumption and overflow handling.

bytearray slice assignment (self._buffer = self._buffer[n:]) creates
a new bytearray every 20ms. At 48kHz mono s16, that's 50 allocations
per second, each copying the remaining buffer. pymalloc retains these
in arena pools and never returns the memory to the OS.

del self._buffer[:n] removes data in-place without allocating a new
object. Memray profiling showed this single line accounted for 900MB
of allocations per call.

coderabbitai bot commented Mar 24, 2026

📝 Walkthrough

Replaces buffer re-slicing with in-place deletion of consumed or overflow bytes in the audio track implementation; adds tests verifying that _buffer identity is preserved and that overflow trimming happens without reallocating the buffer.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Audio Buffer Optimization<br>`getstream/video/rtc/audio_track.py` | Replaced buffer re-slicing assignments (`self._buffer = self._buffer[n:]`) with in-place deletion (`del self._buffer[:n]`) in `write` and `recv` to avoid allocating a new bytearray. |
| Tests (buffer identity & overflow)<br>`tests/test_audio_stream_track.py` | Added async tests that assert `id(track._buffer)` remains the same after `recv()` and after writing data that triggers overflow trimming; also verify remaining buffer length and overflow behavior. |
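The recv-side identity test is not quoted in this review; its shape can be sketched with a hypothetical stand-in class (`MiniTrack` below is illustrative only; the real tests exercise `AudioStreamTrack`):

```python
# Minimal stand-in mimicking the buffer handling under test: recv()
# consumes one frame via in-place del, so the bytearray object survives.
class MiniTrack:
    def __init__(self, bytes_per_frame: int) -> None:
        self._buffer = bytearray()
        self._bytes_per_frame = bytes_per_frame

    def write(self, data: bytes) -> None:
        self._buffer.extend(data)

    def recv(self) -> bytes:
        frame = bytes(self._buffer[:self._bytes_per_frame])
        del self._buffer[:self._bytes_per_frame]   # the fix under test
        return frame

track = MiniTrack(bytes_per_frame=1920)
track.write(b"\x00" * 4800)
buffer_id = id(track._buffer)

frame = track.recv()

assert len(frame) == 1920
assert id(track._buffer) == buffer_id   # same object after consumption
assert len(track._buffer) == 2880       # 4800 - 1920 bytes remaining
```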

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 I nibble bytes with careful cheer,
I slice no more, the buffer stays near.
Del takes the lead, trims just a part,
Saving hops and bytes from the start. 🥕

🚥 Pre-merge checks | ✅ 3 passed

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title 'Fix audio buffer memory leak: use in-place del instead of slice-copy' directly and specifically summarizes the main change in the PR: replacing slice-copy operations with in-place deletion to fix a memory leak in the audio buffer. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. |


@aliev aliev marked this pull request as ready for review March 24, 2026 19:31
@aliev aliev requested a review from dangusev March 24, 2026 20:02
Assert that recv() and overflow trim modify the same bytearray
object rather than creating a new one via slice-copy.
@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/test_audio_stream_track.py (1)

306-355: Use fixtures for AudioStreamTrack setup in these new tests.

Both new tests duplicate track setup and direct object construction. Consider a fixture for the mono/s16 track variants used here.

As per coding guidelines, "Use fixtures to inject objects in tests; test client fixtures can use the Stream API client".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_audio_stream_track.py` around lines 306 - 355, The two tests
test_recv_does_not_reallocate_buffer and
test_buffer_overflow_does_not_reallocate duplicate AudioStreamTrack
construction; extract a pytest fixture (e.g., mono_s16_track) that yields an
AudioStreamTrack configured for mono s16 (sample_rate=48000, channels=1,
format="s16"), and update test_recv_does_not_reallocate_buffer to accept that
fixture instead of constructing track locally; for
test_buffer_overflow_does_not_reallocate add a separate fixture or a
parameterized fixture (e.g., mono_s16_track_with_buffer_size or use request
param) to supply audio_buffer_size_ms=100 while keeping the same fixture
pattern, then remove direct constructions and use the fixtures in both tests.
Ensure fixture names match the test parameters and yield a ready-to-use track
for write/recv operations.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 86bd7fce-5ac8-4b9c-b080-e74f924e7e53

📥 Commits

Reviewing files that changed from the base of the PR and between 7262262 and 4c2969d.

📒 Files selected for processing (1)
  • tests/test_audio_stream_track.py

Comment on lines +330 to +358

```python
async def test_buffer_overflow_does_not_reallocate(self):
    """Test that buffer overflow trims in-place without creating a new buffer object."""
    track = AudioStreamTrack(
        sample_rate=48000, channels=1, format="s16", audio_buffer_size_ms=100
    )

    # Write 50ms of data first to get a buffer reference
    samples_50ms = np.zeros(2400, dtype=np.int16)
    pcm = PcmData(
        samples=samples_50ms,
        sample_rate=48000,
        format=AudioFormat.S16,
        channels=1,
    )
    await track.write(pcm)
    buffer_id = id(track._buffer)

    # Write 200ms of data (exceeds 100ms limit, triggers overflow trim)
    samples_200ms = np.zeros(9600, dtype=np.int16)
    pcm_large = PcmData(
        samples=samples_200ms,
        sample_rate=48000,
        format=AudioFormat.S16,
        channels=1,
    )
    await track.write(pcm_large)

    assert id(track._buffer) == buffer_id, (
        "overflow trim should modify buffer in-place, not create a new one"
    )
```


⚠️ Potential issue | 🟡 Minor

Assert trimmed length to prove overflow path executed.

At Line 357, identity-only validation can pass even if overflow trimming fails but _buffer is still the same object. Add a length assertion for the 100ms cap.

Suggested test hardening:

```diff
         await track.write(pcm_large)

         assert id(track._buffer) == buffer_id, "overflow trim should modify buffer in-place, not create a new one"
+        expected_max_bytes = int(0.1 * 48000) * 2  # 100ms * sample_rate * bytes_per_sample
+        assert len(track._buffer) == expected_max_bytes, "buffer should be trimmed to configured max size"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_audio_stream_track.py` around lines 330 - 358, The test
test_buffer_overflow_does_not_reallocate only checks identity of track._buffer,
which can pass even if trimming didn't occur; after writing pcm_large, also
assert the buffer length equals the 100ms cap (compute as sample_rate * channels
* audio_buffer_size_ms / 1000) to prove the overflow path executed — e.g., for
sample_rate=48000, channels=1, audio_buffer_size_ms=100 the expected length is
4800 samples; add this length assertion immediately after the second await
track.write(pcm_large) alongside the existing id(track._buffer) check.

