
Fix realtime sync startup and directory deletes #11

Merged
Go1c merged 1 commit into main from fix/realtime-sync-and-delete on Apr 14, 2026

Conversation

Owner

@Go1c Go1c commented Apr 14, 2026

Summary

  • fix startup stalls so the watcher can start even when FileSync returns edge-case download metadata
  • handle directory delete events by expanding tracked child files into per-file delete operations
  • queue attachment uploads through fixed workers so realtime note/file deletes are not delayed behind bulk startup uploads
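The directory-delete expansion in the second bullet can be sketched as a small pure function — a sketch only, assuming a tracked set of vault-relative paths like the watcher's `_known_files` (the function name and signature here are illustrative, not the actual code):

```python
# Sketch: expand a directory delete into per-file delete operations,
# using a set of tracked vault-relative file paths. Illustrative names.

def expand_directory_delete(known_files: set[str], deleted_dir: str) -> list[str]:
    """Return tracked files under deleted_dir as per-file delete victims."""
    prefix = deleted_dir.rstrip("/") + "/"
    victims = sorted(f for f in known_files if f.startswith(prefix))
    known_files.difference_update(victims)  # untrack each victim
    return victims
```

Each returned path would then be scheduled as an ordinary per-file delete, which is why the watcher needs the tracked set in the first place: once the directory is gone, the filesystem can no longer enumerate its former children.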

Testing

  • .venv/bin/python -m unittest tests/test_file_sync.py tests/test_client.py tests/test_note_sync.py tests/test_folder_sync.py

Summary by Sourcery

Improve realtime sync robustness and responsiveness across files, notes, folders, and the WebSocket client.

New Features:

  • Add folder sync handler to apply server-pushed folder create, delete, and rename events locally.
  • Track known vault files in the filesystem watcher to expand directory deletes into per-file delete operations.
  • Support server-pushed note rename events to move notes on disk while keeping sync state consistent.
  • Introduce configurable upload_concurrency to control parallel file attachment uploads.

Bug Fixes:

  • Ensure file and note sync lastTime markers are only committed after all detail messages and downloads complete, preventing stalled startups from bad metadata.
  • Prevent file sync from stalling when chunked downloads are pending or when the server sends zero-chunk downloads for empty attachments.
  • Avoid startup stalls by detecting stalled file sync sessions and allowing the watcher to proceed when FileSyncEnd counts never arrive.
  • Handle websocket reconnect callbacks asynchronously so auth responses and message processing are not blocked.
  • Serialize WebSocket sends to avoid concurrent writes and races when connections drop or reconnect.
  • Treat server folder sync delete and rename operations as full directory tree updates, ensuring local folder trees match the server state.
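The deferred lastTime commit described in the first two bullets can be sketched as a minimal completion check. This is an illustrative stand-in (the field names echo the class diagram, but the class and method here are not the actual implementation):

```python
# Sketch: hold lastTime pending until every expected detail message has
# arrived and no downloads remain, then commit it. Illustrative names.

class SyncProgress:
    def __init__(self) -> None:
        self.got_end = False
        self.expected_modify = 0
        self.received_modify = 0
        self.pending_downloads: set[str] = set()
        self.pending_last_time = 0
        self.committed_last_time = 0

    def check_complete(self) -> bool:
        """Commit lastTime only once all details and downloads have landed."""
        if (self.got_end
                and self.received_modify >= self.expected_modify
                and not self.pending_downloads):
            self.committed_last_time = self.pending_last_time
            return True
        return False
```

The point of the deferral: if lastTime were committed as soon as FileSyncEnd arrived, a crash or stall mid-download would skip those files on the next sync.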

Enhancements:

  • Queue file uploads through bounded worker tasks using the websocket client, so realtime deletes and other operations are not blocked behind bulk attachment uploads.
  • Rely on server-side WebSocket heartbeats instead of client pings to avoid false timeouts during heavy binary transfers.
  • Improve logging for unexpected or unhandled protocol actions to aid debugging of sync issues.

Tests:

  • Add unit tests for file sync edge cases including download completion timing, zero-chunk downloads, stalled detection, and upload concurrency behaviour.
  • Add unit tests for watcher move and directory delete scenarios to verify correct scheduling of uploads and per-file deletes when paths move into or out of the vault.
  • Add unit tests for note sync inbound state handling and rename behaviour, including deferred lastTime commits.
  • Add unit tests for WSClient reconnect behaviour and serialized send semantics.
  • Add unit tests for FolderSync to cover server-driven folder create, rename, and delete operations.


sourcery-ai Bot commented Apr 14, 2026

Reviewer's Guide

Improves realtime sync robustness by deferring lastTime commits until all details/downloads are complete, adding upload worker concurrency limits, enhancing watcher behavior for directory and cross-boundary moves/deletes, introducing folder-level sync handlers, and making WS client reconnect and send behavior non-blocking and serialized.

Sequence diagram for file upload queue and workers

sequenceDiagram
    participant Server
    participant Client as WSClient
    participant FileSync
    participant UploadQueue
    participant Worker as UploadWorker

    Server->>Client: ACTION_FILE_UPLOAD (FileUpload)
    Client->>FileSync: _on_upload_session(msg)
    FileSync->>FileSync: _ensure_upload_workers()
    FileSync->>UploadQueue: put(session_id, chunk_size, rel_path, full, completion)
    FileSync->>Worker: start _upload_queue_worker()
    FileSync->>FileSync: _await_upload_completion(completion)

    loop per queued upload
        Worker->>UploadQueue: get()
        Worker->>FileSync: _upload_session_worker(session_id, chunk_size, rel_path, full)
        FileSync->>FileSync: read_bytes(full)
        loop for each chunk
            FileSync->>Client: send_bytes(binary_chunk)
        end
        Worker-->>FileSync: completion result
    end

    Server-->>Client: ACTION_FILE_UPLOAD_ACK (FileUploadAck)
    Client->>FileSync: _on_upload_ack(msg)
    FileSync->>FileSync: log ack and continue

Sequence diagram for initial file sync with stall detection and lastTime commit

sequenceDiagram
    participant Server
    participant Client as WSClient
    participant FileSync
    participant Engine as SyncEngine
    participant Watcher

    Engine->>FileSync: request_sync()
    FileSync->>Client: send FileSyncRequest

    Server-->>Client: FileSyncUpdate / FileSyncDelete / FileSyncChunkDownload
    Client->>FileSync: _on_sync_update / _on_sync_delete / _on_chunk_download_start
    FileSync->>FileSync: _mark_sync_activity()
    FileSync->>FileSync: track _download_sessions and _pending_download_paths

    Server-->>Client: FileSyncEnd(lastTime, counts)
    Client->>FileSync: _on_sync_end(msg)
    FileSync->>FileSync: store _pending_last_time
    FileSync->>FileSync: set expected counts
    FileSync->>FileSync: _check_complete()

    alt all details and downloads complete
        FileSync->>FileSync: _check_complete()
        FileSync->>FileSync: _commit_last_time()
        FileSync->>Engine: is_sync_complete = True
    else still pending downloads or missing details
        loop until complete or stalled
            Engine->>FileSync: is_sync_complete?
            Engine->>FileSync: is_stalled(stale_seconds=5)?
        end
        alt stalled before completion
            Engine->>Engine: log stall warning
        end
    end

    Engine->>Watcher: start observer and enable realtime sync

Sequence diagram for watcher handling cross-boundary moves and file tracking

sequenceDiagram
    participant FS as Filesystem
    participant Watcher
    participant Engine as SyncEngine

    FS-->>Watcher: on_created(event)
    Watcher->>Watcher: _rel(event.src_path)
    Watcher->>Watcher: _track_file(rel)
    Watcher->>Engine: schedule on_local_change(rel)

    FS-->>Watcher: on_deleted(event)
    alt file delete
        Watcher->>Watcher: _rel(event.src_path)
        Watcher->>Watcher: _untrack_file(rel)
        Watcher->>Engine: schedule on_local_delete(rel)
    else directory delete
        Watcher->>Watcher: _handle_directory_delete(event)
        Watcher->>Watcher: enumerate victims from _known_files
        loop each victim
            Watcher->>Watcher: _untrack_file(rel)
            Watcher->>Engine: schedule on_local_delete(rel)
        end
    end

    FS-->>Watcher: on_moved(event)
    Watcher->>Watcher: old_rel = _rel_or_none(src_path)
    Watcher->>Watcher: new_rel = _rel_or_none(dest_path)
    alt old_rel None, new_rel not None
        Watcher->>Watcher: _track_file(new_rel)
        Watcher->>Engine: schedule on_local_change(new_rel)
    else new_rel None, old_rel not None
        Watcher->>Watcher: _untrack_file(old_rel)
        Watcher->>Engine: schedule on_local_delete(old_rel)
    else both not None and are files
        Watcher->>Watcher: _schedule_move_transition(old_rel, new_rel)
        Watcher->>Watcher: update _known_files
        Watcher->>Engine: schedule on_local_rename(new_rel, old_rel)
    end

Updated class diagram for sync and client components

classDiagram
    class SyncEngine {
        +vault_path: Path
        +config: AppConfig
        +state: SyncState
        +note_sync: NoteSync
        +file_sync: FileSync
        +folder_sync: FolderSync
        +ws_client: Client
        +on_local_change(rel_path: str) async
        +on_local_delete(rel_path: str) async
        +on_local_rename(new_rel: str, old_rel: str) async
        +run() async
        +_wait_file_sync(timeout: float) async
    }

    class SyncConfig {
        +sync_notes: bool
        +sync_files: bool
        +sync_config: bool
        +upload_concurrency: int
        +exclude_patterns: list~str~
    }

    class FileSync {
        +engine: SyncEngine
        +vault_path: Path
        +_sync_complete: bool
        +_download_sessions: dict~str, _DownloadSession~
        +_pending_download_paths: set~str~
        +_expected_modify: int
        +_expected_delete: int
        +_received_modify: int
        +_received_delete: int
        +_got_end: bool
        +_pending_last_time: int
        +_last_sync_activity_monotonic: float
        +_upload_tasks: set~Task~
        +_upload_worker_count: int
        +_upload_workers: set~Task~
        +_upload_queue: asyncio.Queue
        +register_handlers() void
        +request_sync() async
        +is_sync_complete: bool
        +is_stalled(stale_seconds: float) bool
        +_on_upload_session(msg: WSMessage) async
        +_on_upload_ack(msg: WSMessage) async
        +_upload_queue_worker() async
        +_upload_session_worker(session_id: str, chunk_size: int, rel_path: str, full: Path) async
        +_on_sync_update(msg: WSMessage) async
        +_on_sync_delete(msg: WSMessage) async
        +_on_sync_rename(msg: WSMessage) async
        +_on_sync_mtime(msg: WSMessage) async
        +_on_chunk_download_start(msg: WSMessage) async
        +_on_binary_chunk(session_id: str, chunk_index: int, data: bytes) async
        +_finalize_download(session_id: str, session: _DownloadSession) async
        +_finalize_empty_download(rel_path: str) async
        +_on_sync_end(msg: WSMessage) async
        +_check_complete() void
        +_commit_last_time() void
    }

    class NoteSync {
        +engine: SyncEngine
        +vault_path: Path
        +_sync_complete: bool
        +_expected_modify: int
        +_expected_delete: int
        +_received_modify: int
        +_received_delete: int
        +_got_end: bool
        +_pending_last_time: int
        +_echo_hashes: dict~str, str~
        +register_handlers() void
        +request_sync() async
        +_on_sync_modify(msg: WSMessage) async
        +_on_sync_delete(msg: WSMessage) async
        +_on_sync_rename(msg: WSMessage) async
        +_on_sync_mtime(msg: WSMessage) async
        +_on_sync_need_push(msg: WSMessage) async
        +_on_sync_end(msg: WSMessage) async
        +_reset_counters() void
        +_check_all_received() void
        +_commit_last_time() void
    }

    class FolderSync {
        +engine: SyncEngine
        +vault_path: Path
        +register_handlers() void
        +_on_sync_modify(msg: WSMessage) async
        +_on_sync_delete(msg: WSMessage) async
        +_on_sync_rename(msg: WSMessage) async
    }

    class Watcher {
        +engine: SyncEngine
        +loop: asyncio.AbstractEventLoop
        +_pending: dict~str, TimerHandle~
        +_known_files: set~str~
        +on_created(event) void
        +on_modified(event) void
        +on_deleted(event) void
        +on_moved(event) void
        +_seed_known_files() void
        +_track_file(rel_path: str) void
        +_untrack_file(rel_path: str) void
        +_handle_directory_delete(event) void
        +_handle_directory_move(event) void
        +_schedule_move_transition(old_rel: str, new_rel: str) void
    }

    class Client {
        +config: AppConfig
        +ws: WebSocketClientProtocol
        +_handlers: dict~str, Callable~
        +_binary_handler: Callable
        +_on_reconnect: Callable
        +_reconnect_task: asyncio.Task
        +_msg_queue: list~str|bytes~
        +_ready_event: asyncio.Event
        +_send_lock: asyncio.Lock
        +connect() async
        +send_json(data: dict) async
        +send_bytes(data: bytes) async
        +_raw_send(data: str|bytes) async
        +_flush_queue() async
        +_on_auth_response(msg: WSMessage) async
        +_run_reconnect_handler() async
        +wait_ready(timeout: float) async bool
        +close() async
    }

    SyncEngine --> SyncConfig : uses
    SyncEngine *-- FileSync
    SyncEngine *-- NoteSync
    SyncEngine *-- FolderSync
    SyncEngine *-- Watcher
    SyncEngine *-- Client

    FileSync ..> WSMessage
    NoteSync ..> WSMessage
    FolderSync ..> WSMessage
    Watcher ..> SyncEngine
    Client ..> WSMessage

File-Level Changes

Change Details Files
Make file sync completion and stalling logic robust to pending attachment downloads and implement concurrent but bounded upload workers.
  • Track pending download paths and download sessions, deferring sync completion until all expected updates are processed and downloads finalized (including zero-chunk cases).
  • Store lastTime from FileSyncEnd in a pending field and commit it only once sync is complete, with activity timestamps to detect stalled syncs and allow the engine to continue after a grace period.
  • Refactor FileUpload handling into a worker-queue model driven by upload_concurrency, serializing chunk sends per worker and logging upload acknowledgements.
fns_cli/file_sync.py
tests/test_file_sync.py
fns_cli/config.py
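The worker-queue upload model can be sketched with a plain `asyncio.Queue` and a fixed number of workers. This is illustrative only: the real workers stream websocket chunks and resolve completion futures, and the sketch drains and cancels its workers at the end, which approximates the lifecycle handling the review asks for:

```python
import asyncio

# Sketch: bounded workers drain a shared queue of upload jobs, so at most
# worker_count uploads run concurrently. Illustrative names.

async def run_uploads(jobs, worker_count: int) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []

    async def worker() -> None:
        while True:
            job = await queue.get()
            try:
                results.append(await job())  # one upload at a time per worker
            finally:
                queue.task_done()

    workers = [asyncio.create_task(worker()) for _ in range(worker_count)]
    for job in jobs:
        queue.put_nowait(job)
    await queue.join()   # wait until every queued upload has finished
    for w in workers:
        w.cancel()       # tear down the long-lived workers
    await asyncio.gather(*workers, return_exceptions=True)
    return results
```

Because the workers are fixed in number, a burst of startup uploads queues up behind them instead of spawning unbounded tasks, leaving the event loop free for realtime deletes.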
Improve the filesystem watcher to track known files and correctly translate directory deletes and cross-vault moves into per-file operations.
  • Maintain a set of known non-excluded files seeded from the vault at startup, updating it on create/modify/delete/move events.
  • Handle directory delete events by expanding to per-file delete operations from the known set, and process directory moves with tracking updates.
  • Support moves into or out of the vault by translating them into create or delete events and updating tracking accordingly.
fns_cli/watcher.py
tests/test_file_sync.py
Harden the WebSocket client so reconnect handling and sending are non-blocking and serialized, and rely on server-side heartbeats to avoid false timeouts under heavy load.
  • Serialize all outgoing sends with an asyncio.Lock so concurrent senders share the same underlying websocket safely.
  • Remove client-initiated ping/heartbeat parameters to avoid false timeouts during large uploads, relying on server pings instead.
  • Run the reconnect handler in a background task, canceling any prior run, and ensure it is canceled on client close.
fns_cli/client.py
tests/test_client.py
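Running the reconnect handler in a background task that supersedes any prior run can be sketched like this (illustrative class; the real client ties this to `_on_auth_response` and `close`):

```python
import asyncio

# Sketch: fire the reconnect callback as a background task, cancelling any
# earlier run so a slow handler never blocks message processing.

class ReconnectRunner:
    def __init__(self, handler) -> None:
        self._handler = handler
        self._task: asyncio.Task | None = None

    def trigger(self) -> None:
        if self._task is not None and not self._task.done():
            self._task.cancel()  # supersede the previous run
        self._task = asyncio.create_task(self._handler())

    async def close(self) -> None:
        if self._task is not None:
            self._task.cancel()
            try:
                await self._task
            except asyncio.CancelledError:
                pass
```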
Extend note sync to support server-driven rename events and defer lastTime commits until all modifications/deletes are processed.
  • Register and implement handling for NoteSyncRename to move note files on disk, update echo hashes, and clean up empty parent directories.
  • Change NoteSyncEnd handling to keep lastTime pending and only commit it after all expected modify/delete events have been applied.
  • Add unit tests covering lastTime commit behavior and rename handling.
fns_cli/note_sync.py
tests/test_note_sync.py
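The on-disk half of the rename handling can be sketched as a move plus empty-parent cleanup — a sketch under the assumption that hashes and sync state are updated elsewhere (function name illustrative):

```python
from pathlib import Path

# Sketch: move a note for a server-pushed rename and prune any parent
# directories left empty. Illustrative; the real handler also updates
# echo hashes and sync state.

def apply_note_rename(vault: Path, old_rel: str, new_rel: str) -> None:
    old_full, new_full = vault / old_rel, vault / new_rel
    if not old_full.exists():
        return
    new_full.parent.mkdir(parents=True, exist_ok=True)
    old_full.rename(new_full)
    parent = old_full.parent
    while parent != vault and not any(parent.iterdir()):
        parent.rmdir()  # clean up now-empty directories
        parent = parent.parent
```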
Introduce explicit folder sync handling for server-driven folder create/delete/rename events and wire it into the sync engine and protocol.
  • Add FolderSync class to apply folder modify/delete/rename operations on the local vault, including recursive deletion and rename fallback to create.
  • Expose new folder sync actions in the protocol and register folder sync handlers in the sync engine.
  • Add tests verifying folder creation, rename, and delete behavior.
  • Add upload_concurrency configuration with validation in config loading.
fns_cli/folder_sync.py
fns_cli/protocol.py
fns_cli/sync_engine.py
fns_cli/config.py
tests/test_folder_sync.py
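The `upload_concurrency` validation can be centralized in one helper at config-load time, incorporating the upper clamp the review comments suggest. A sketch with illustrative names and an assumed CPU-based cap:

```python
import os

# Sketch: validate upload_concurrency once at config load, falling back to
# a default for bad values and clamping pathologically large ones.

def normalize_upload_concurrency(value, default: int = 2) -> int:
    if not isinstance(value, int) or isinstance(value, bool) or value < 1:
        return default
    cap = max(2, (os.cpu_count() or 4) * 4)  # assumed sane upper bound
    return min(value, cap)
```

Doing this in config loading (rather than at each use site) keeps the runtime code and the config validation from drifting apart.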


@Go1c Go1c merged commit 0bf9c7d into main Apr 14, 2026
4 checks passed
@Go1c Go1c deleted the fix/realtime-sync-and-delete branch April 14, 2026 16:27

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 2 issues, and left some high level feedback:

  • The new FileSync upload worker pool (_upload_queue_worker + _upload_workers and _upload_tasks) runs indefinitely and is never shut down or drained on client shutdown or reconnect; consider adding lifecycle management (e.g. cancellation/awaiting on close or reconnect) to avoid leaking background tasks tied to an old connection.
  • In FolderSync._on_sync_rename, the else branch for a missing old_full calls new_full.parent.mkdir(parents=True, exist_ok=True) and then new_full.mkdir(parents=True, exist_ok=True), which redundantly recreates the parent chain and may be clearer and safer if the second call is just new_full.mkdir(exist_ok=True).
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The new FileSync upload worker pool (`_upload_queue_worker` + `_upload_workers` and `_upload_tasks`) runs indefinitely and is never shut down or drained on client shutdown or reconnect; consider adding lifecycle management (e.g. cancellation/awaiting on close or reconnect) to avoid leaking background tasks tied to an old connection.
- In `FolderSync._on_sync_rename`, the `else` branch for a missing `old_full` calls `new_full.parent.mkdir(parents=True, exist_ok=True)` and then `new_full.mkdir(parents=True, exist_ok=True)`, which redundantly recreates the parent chain and may be clearer and safer if the second call is just `new_full.mkdir(exist_ok=True)`.

## Individual Comments

### Comment 1
<location path="fns_cli/file_sync.py" line_range="80-83" />
<code_context>
+        self._pending_last_time = 0
+        self._last_sync_activity_monotonic = time.monotonic()
+        self._upload_tasks: set[asyncio.Task] = set()
+        upload_concurrency = getattr(self.config.sync, "upload_concurrency", 2)
+        if not isinstance(upload_concurrency, int) or upload_concurrency < 1:
+            upload_concurrency = 2
+        self._upload_worker_count = upload_concurrency
+        self._upload_workers: set[asyncio.Task] = set()
+        self._upload_queue: asyncio.Queue = asyncio.Queue()
</code_context>
<issue_to_address>
**suggestion (performance):** Consider capping upload_concurrency to avoid unbounded worker creation on pathological configs.

Since `upload_concurrency` is only validated as ≥1, a very large value (e.g. thousands) will create that many long-lived workers. Please clamp this to a sane upper bound (e.g. a small fixed max or something based on CPU count), and consider enforcing the same limit when reading it in `load_config`, to avoid resource exhaustion from a bad config value.

Suggested implementation:

```python
import asyncio
import os

```

```python
        self._pending_last_time = 0
        self._last_sync_activity_monotonic = time.monotonic()
        self._upload_tasks: set[asyncio.Task] = set()

        # Determine desired upload concurrency from config, with validation and capping.
        upload_concurrency = getattr(self.config.sync, "upload_concurrency", 2)
        if not isinstance(upload_concurrency, int) or upload_concurrency < 1:
            upload_concurrency = 2

        # Cap concurrency to avoid unbounded worker creation on pathological configs.
        # Use a small multiple of CPU count as a sane upper bound, with a fallback default.
        cpu_count = os.cpu_count() or 4
        max_upload_concurrency = max(2, cpu_count * 4)
        upload_concurrency = min(upload_concurrency, max_upload_concurrency)

        self._upload_worker_count = upload_concurrency
        self._upload_workers: set[asyncio.Task] = set()
        self._upload_queue: asyncio.Queue = asyncio.Queue()

```

To fully implement your suggestion:
1. In the config-loading path (likely `load_config` or equivalent), apply the same validation and capping logic to `sync.upload_concurrency`, so an excessively large value is never accepted into the config object in the first place.
2. Consider centralizing the cap logic (e.g. a helper function like `get_capped_upload_concurrency(config_value: int | None) -> int`) to avoid divergence between config validation and runtime usage.
</issue_to_address>

### Comment 2
<location path="fns_cli/file_sync.py" line_range="220-225" />
<code_context>
+    async def _await_upload_completion(self, completion: asyncio.Future) -> None:
+        await completion
+
+    async def _upload_queue_worker(self) -> None:
+        while True:
+            session_id, chunk_size, rel_path, full, completion = await self._upload_queue.get()
+            try:
+                await self._upload_session_worker(session_id, chunk_size, rel_path, full)
+            except Exception as exc:
+                if not completion.done():
+                    completion.set_exception(exc)
+            else:
+                if not completion.done():
+                    completion.set_result(None)
+            finally:
+                self._upload_queue.task_done()
+
+    async def _upload_session_worker(
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Upload worker error handling surfaces the exception but drops context and logs nothing.

In `_upload_queue_worker`, exceptions from `_upload_session_worker` are only passed into `completion` and never logged. If the caller doesn’t log when awaiting `completion` (or never awaits it), failures may be effectively invisible and lack context like `rel_path`/`session_id`. Consider logging a short error with those identifiers inside the `except` so upload failures are traceable even when the awaiter doesn’t log them.

```suggestion
            try:
                await self._upload_session_worker(session_id, chunk_size, rel_path, full)
            except Exception as exc:
                log.exception(
                    "Upload failed for %s (sessionId=%s, chunkSize=%d)",
                    rel_path,
                    session_id[:8],
                    chunk_size,
                )
                if not completion.done():
                    completion.set_exception(exc)
            else:
```
</issue_to_address>


