fix: file sync delete, reconnect re-sync, and watcher race #4

Merged
Go1c merged 1 commit into main from fix/attachment-sync on Apr 14, 2026
Conversation


@Go1c Go1c commented Apr 14, 2026

Summary

  • FileSync delete never applied: the server sends FileSyncEnd before the individual FileSyncDelete messages, but the old code set _sync_complete = True on FileSyncEnd, so pull/sync-once closed the connection before the delete messages arrived. Now tracks expected vs received counts (mirroring NoteSync) and marks complete only when all messages are processed.
  • FileSync update stall on unexpected content type: an early return in _on_sync_update skipped _received_modify += 1, causing _wait_file_sync to hang until the 300 s timeout.
  • Watcher on_moved missing is_ignored check: server-triggered renames temporarily ignore both paths, but on_moved only checked is_excluded. After the ignore window closed the debounced handler still fired, pushing a redundant delete + upload back to the server.
  • Daemon reconnect loses all remote changes: _initial_sync was only called once at startup. After a network drop, deletions and updates on other clients during the outage were permanently missed. Added on_reconnect hook to WSClient; SyncEngine uses it to pause the watcher and re-run _initial_sync on every reconnect.
  • sync_once double NoteSync: _initial_sync (incremental) was called first, then request_full_sync was called again in the same session. sync_once now calls request_full_sync + file_sync.request_sync directly without going through _initial_sync.

Test plan

  • Delete a file on another client, confirm fns pull removes the local copy
  • Delete a file on another client while fns run is connected, confirm local delete happens live
  • Kill network while fns run is active, delete a file on another client, restore network — confirm local delete happens after reconnect
  • Run fns sync-once, confirm notes sync completes with a single NoteSync round-trip (check logs for no duplicate NoteSync requests)
  • Rename a file while fns run is watching; confirm the rename is not pushed back to the server a second time

Summary by Sourcery

Improve file synchronization reliability, especially around deletes, reconnects, and one-off syncs.

Bug Fixes:

  • Ensure file delete operations from FileSync are applied by waiting for all expected modify/delete messages before marking sync complete.
  • Prevent FileSync from stalling when updates use unexpected content types by always advancing received counters and completion checks.
  • Avoid redundant delete+upload cycles on server-triggered renames by honoring ignore checks in the watcher move handler.
  • Restore missed remote changes after websocket reconnects by re-running the initial sync on each reconnection.
  • Eliminate duplicate note synchronization in sync-once by directly requesting full note and file sync instead of going through initial sync.

Enhancements:

  • Track expected vs received file sync modify/delete counts and log a concise completion summary.
  • Add a reconnect hook on the websocket client that lets higher-level components run custom logic when the connection is re-established.

Five bugs fixed:

1. FileSync: _on_sync_end set _sync_complete immediately, but the server
   streams FileSyncDelete/Update messages *after* FileSyncEnd. pull/sync-once
   closed the connection before those messages arrived, so remote deletions
   were never applied locally. Now mirrors NoteSync: tracks expected vs
   received counts and only marks complete when all messages are processed.
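
In outline, the completion tracking now works like this (a simplified sketch — the class shape and message handling here are stand-ins, not the actual fns_cli code):

```python
class FileSyncCounters:
    """Sketch of counter-based completion tracking for FileSync."""

    def __init__(self):
        self._reset_counters()

    def _reset_counters(self):
        # Called at the start of each file sync request.
        self._expected_modify = 0
        self._expected_delete = 0
        self._received_modify = 0
        self._received_delete = 0
        self._got_end = False
        self._sync_complete = False

    def on_sync_end(self, need_modify, need_delete):
        # FileSyncEnd arrives *before* the individual messages, so it only
        # records the expected totals; _check_complete decides completion.
        self._expected_modify = need_modify
        self._expected_delete = need_delete
        self._got_end = True
        self._check_complete()

    def on_update(self):
        self._received_modify += 1
        self._check_complete()

    def on_delete(self):
        self._received_delete += 1
        self._check_complete()

    def _check_complete(self):
        if not self._got_end:
            return
        expected = self._expected_modify + self._expected_delete
        received = self._received_modify + self._received_delete
        if received >= expected:
            self._sync_complete = True
```

With zero expected work, _check_complete fires directly from on_sync_end; otherwise the last update/delete message to arrive is what flips _sync_complete.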

2. FileSync: _on_sync_update returned early on unexpected content type
   without incrementing _received_modify, causing _wait_file_sync to
   stall until the 300s timeout.
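
The fix is to advance the counter on every exit path, e.g. with a try/finally (a sketch with illustrative names — the real handler also writes files and manages chunked downloads):

```python
def on_sync_update(state, msg):
    """Handle a FileSyncUpdate message; always advance the modify counter."""
    try:
        if msg.get("contentType") not in ("text", "binary"):
            # The old code returned here *before* counting, so
            # _wait_file_sync never saw this message and hit its timeout.
            print("warning: unexpected content type:", msg.get("contentType"))
            return
        # ... write the file to disk here ...
    finally:
        # Runs on every exit path, including the early return above.
        state["received_modify"] += 1
```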

3. Watcher: on_moved did not check is_ignored, so server-triggered renames
   (which temporarily ignore both paths) could fire on_local_rename after
   the ignore window closed, pushing a redundant delete+upload back to the
   server.
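
The guard now looks roughly like this (is_ignored, is_excluded, and schedule are stand-ins for the project's real helpers):

```python
def on_moved(src, dest, is_ignored, is_excluded, schedule):
    """Sketch of the watcher move handler with the added ignore check."""
    if is_excluded(src) or is_excluded(dest):
        return
    # New: server-triggered renames temporarily ignore both paths; without
    # this check the debounced handler re-pushed the rename to the server.
    if is_ignored(src) or is_ignored(dest):
        return
    schedule(("move", src, dest))
```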

4. Daemon (fns run): after a WebSocket reconnect _initial_sync was never
   retried, so any changes made on other clients during the outage
   (including deletions) were permanently missed. Added on_reconnect hook
   to WSClient; SyncEngine registers a callback that pauses the watcher
   and re-runs _initial_sync on every reconnect.
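
A minimal sketch of the hook (simplified — the real WSClient also re-authenticates, flushes queued messages, and so on):

```python
import asyncio

class WSClient:
    """Sketch: fires a registered coroutine on any connection after the first."""

    def __init__(self):
        self._connect_count = 0
        self._reconnect_handler = None

    def on_reconnect(self, handler):
        self._reconnect_handler = handler

    async def _on_auth_success(self):
        self._connect_count += 1
        if self._connect_count > 1 and self._reconnect_handler:
            try:
                await self._reconnect_handler()
            except Exception as exc:
                print("reconnect handler failed:", exc)

events = []

async def resync():
    # SyncEngine's handler: pause the watcher, re-run _initial_sync, resume.
    events.append("resync")

async def demo():
    client = WSClient()
    client.on_reconnect(resync)
    await client._on_auth_success()   # first connect: no resync
    await client._on_auth_success()   # reconnect: triggers resync

asyncio.run(demo())
```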

5. sync_once: called _initial_sync (incremental NoteSync) then immediately
   called request_full_sync, sending two NoteSync requests per session.
   sync_once now calls request_full_sync + file_sync.request_sync directly.
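
Roughly (stand-in engine object, no websockets involved):

```python
class FakeEngine:
    """Stand-in recording how many sync requests each subsystem receives."""

    def __init__(self):
        self.note_sync_requests = 0
        self.file_sync_requests = 0

    def request_full_sync(self):       # NoteSync
        self.note_sync_requests += 1

    def request_file_sync(self):       # FileSync
        self.file_sync_requests += 1

def sync_once(engine, sync_notes=True, sync_files=True, sync_config=False):
    # No _initial_sync here: that path issued an incremental NoteSync first,
    # producing two NoteSync round-trips per session.
    if sync_notes:
        engine.request_full_sync()     # exactly one NoteSync request
    if sync_files or sync_config:
        engine.request_file_sync()
```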

sourcery-ai bot commented Apr 14, 2026

Reviewer's Guide

Adjusts file synchronization completion tracking, ensures counters are updated for all update/delete paths, adds reconnect-triggered resync, fixes sync-once double NoteSync, and prevents the watcher from reacting to server-driven moves.

Sequence diagram for reconnect-triggered resync

```mermaid
sequenceDiagram
    actor User
    participant SyncEngine
    participant WSClient
    participant Server
    participant VaultWatcher
    participant FileSync

    User->>SyncEngine: run()
    SyncEngine->>WSClient: run()
    WSClient->>Server: connect and authenticate
    Server-->>WSClient: auth_response(success, connectCount=1)
    WSClient->>SyncEngine: set ready_event
    SyncEngine->>SyncEngine: _initial_sync() on first connect
    SyncEngine->>VaultWatcher: start watching vault

    Server--xWSClient: network drop
    WSClient->>WSClient: reconnect logic
    WSClient->>Server: reconnect and authenticate
    Server-->>WSClient: auth_response(success, connectCount>1)
    WSClient->>WSClient: detect reconnect
    WSClient->>SyncEngine: _on_reconnect()

    SyncEngine->>SyncEngine: _watch_enabled = False
    SyncEngine->>VaultWatcher: effectively pause handling via _watch_enabled
    SyncEngine->>SyncEngine: _initial_sync()
    SyncEngine->>FileSync: run incremental file and note sync
    FileSync->>Server: request_sync()
    Server-->>FileSync: FileSyncEnd, updates, deletes
    FileSync->>FileSync: track expected and received counts
    FileSync->>SyncEngine: mark file sync complete
    SyncEngine->>SyncEngine: _watch_enabled = True
    SyncEngine->>VaultWatcher: resume handling via _watch_enabled

    User-->>SyncEngine: daemon continues with up to date state
```

Sequence diagram for sync_once flow without double NoteSync

```mermaid
sequenceDiagram
    actor User
    participant SyncEngine
    participant WSClient
    participant Server
    participant NoteSync
    participant FileSync

    User->>SyncEngine: sync_once()
    SyncEngine->>WSClient: run()
    WSClient->>Server: connect and authenticate
    Server-->>WSClient: auth_response(success)
    WSClient->>SyncEngine: set ready_event

    alt sync_notes enabled
        SyncEngine->>NoteSync: request_full_sync()
        NoteSync->>Server: NoteSync request
        Server-->>NoteSync: note updates
        NoteSync->>SyncEngine: mark note sync complete
        SyncEngine->>SyncEngine: _wait_note_sync(timeout=120)
    end

    alt sync_files or sync_config enabled
        SyncEngine->>FileSync: request_sync()
        FileSync->>FileSync: _reset_counters()
        FileSync->>Server: FileSync request
        Server-->>FileSync: FileSyncEnd
        FileSync->>FileSync: record expected counts, _check_complete()
        Server-->>FileSync: FileSyncUpdate and FileSyncDelete messages
        FileSync->>FileSync: increment received counts and _check_complete()
        FileSync->>SyncEngine: mark file sync complete
        SyncEngine->>SyncEngine: _wait_file_sync(timeout=300)
    end

    SyncEngine->>WSClient: close()
    WSClient-->>Server: close connection
    SyncEngine-->>User: sync_once completes with single NoteSync round trip
```

Sequence diagram for FileSyncEnd and completion tracking

```mermaid
sequenceDiagram
    participant Server
    participant WSClient
    participant FileSync

    Server-->>WSClient: FileSyncEnd(needModifyCount, needDeleteCount)
    WSClient->>FileSync: _on_sync_end(msg)
    FileSync->>FileSync: set _expected_modify, _expected_delete
    FileSync->>FileSync: _got_end = True
    FileSync->>FileSync: if total expected == 0 then _sync_complete = True else _check_complete()

    loop for each modify
        Server-->>WSClient: FileSyncUpdate
        WSClient->>FileSync: _on_sync_update(msg)
        alt chunked transfer
            FileSync->>FileSync: _request_chunk_download(rel_path, data)
            FileSync->>FileSync: _received_modify += 1
            FileSync->>FileSync: _check_complete()
        else direct content
            FileSync->>FileSync: write file or log warning
            FileSync->>FileSync: unignore_file(rel_path)
            FileSync->>FileSync: _received_modify += 1
            FileSync->>FileSync: _check_complete()
        end
    end

    loop for each delete
        Server-->>WSClient: FileSyncDelete
        WSClient->>FileSync: _on_sync_delete(msg)
        FileSync->>FileSync: delete local file if exists
        FileSync->>FileSync: unignore_file(rel_path)
        FileSync->>FileSync: _received_delete += 1
        FileSync->>FileSync: _check_complete()
    end

    FileSync->>FileSync: when received >= expected
    FileSync->>FileSync: log completion and set _sync_complete = True
```

Updated class diagram for sync, client, and watcher components

```mermaid
classDiagram
    class SyncEngine {
        +config
        +ws_client
        +note_sync
        +file_sync
        +state
        +vault_path
        +_watch_enabled
        +run()
        +sync_once()
        +_initial_sync()
        +_wait_file_sync(timeout)
        +_wait_note_sync(timeout)
    }

    class FileSync {
        +engine
        +vault_path
        +_sync_complete
        +_download_sessions
        +_expected_modify
        +_expected_delete
        +_received_modify
        +_received_delete
        +_got_end
        +register_handlers()
        +request_sync()
        +_reset_counters()
        +_on_sync_update(msg)
        +_on_sync_delete(msg)
        +_on_sync_rename(msg)
        +_on_sync_end(msg)
        +_check_complete()
        +_request_chunk_download(rel_path, data)
        +_collect_local_files()
    }

    class WSClient {
        +config
        +_running
        +_handlers
        +_binary_handler
        +_on_reconnect
        +_msg_queue
        +_ready_event
        +_connect_count
        +on(action, handler)
        +on_binary(handler)
        +on_reconnect(handler)
        +run()
        +close()
        +_raw_send(data)
        +_flush_queue()
        +_on_auth_response(msg)
    }

    class VaultWatcher {
        +engine
        +loop
        +on_created(event)
        +on_deleted(event)
        +on_modified(event)
        +on_moved(event)
        +_rel(path)
        +_schedule(key)
    }

    SyncEngine *-- FileSync
    SyncEngine *-- WSClient
    SyncEngine *-- VaultWatcher
    FileSync --> SyncEngine
    VaultWatcher --> SyncEngine
    WSClient --> SyncEngine
```

Architecture flow diagram for sync engine, watcher, and server

```mermaid
flowchart LR
    subgraph Local
        CLI["CLI commands fns run, fns sync-once"]
        SE["SyncEngine"]
        WS["WSClient"]
        FS["FileSync"]
        NS["NoteSync"]
        VW["VaultWatcher"]
        FSys["Local filesystem"]
    end

    subgraph Remote
        Srv["Sync server"]
    end

    CLI --> SE
    SE --> WS
    SE --> FS
    SE --> NS
    SE --> VW

    WS <--> Srv
    FS <--> Srv
    NS <--> Srv

    VW --> FS
    VW --> NS
    FS <--> FSys
    NS <--> FSys

    WS --> SE
    SE --> VW

    subgraph Reconnect_behavior
        WS -- on_reconnect --> SE
        SE -- _initial_sync on reconnect --> FS
        SE -- _initial_sync on reconnect --> NS
        SE -- _watch_enabled flag --> VW
    end
```

File-Level Changes

Track expected vs received file sync operations and only mark file sync complete once all modify/delete messages have been processed.
  • Introduce counters for expected and received modify/delete operations and a flag for having received FileSyncEnd.
  • Reset counters at the start of each file sync request.
  • Increment received modify/delete counters in update and delete handlers and call a shared completion check.
  • Update FileSyncEnd handler to set expected modify/delete counts, handle zero-work cases immediately, and otherwise rely on the completion check.
fns_cli/file_sync.py
Ensure file sync updates always advance progress even when using chunked downloads or encountering unexpected content types.
  • Increment received modify counter and check for completion when falling back to chunked download requests.
  • Remove early return on unexpected content types while still logging a warning so completion accounting continues.
  • Ensure unignore_file is still called in finally blocks while counting updates.
fns_cli/file_sync.py
Re-run initial sync on websocket reconnects so remote changes during outages are applied.
  • Add an on_reconnect hook to WSClient to register a coroutine callback.
  • Invoke the reconnect handler after successful re-auth on subsequent connections, with error logging.
  • In SyncEngine.run, register an on_reconnect handler that disables the watcher, performs _initial_sync, then re-enables the watcher.
fns_cli/client.py
fns_cli/sync_engine.py
Make sync_once perform a single full note sync and an explicit file/config sync instead of going through _initial_sync.
  • Remove the call to _initial_sync from sync_once.
  • Call note_sync.request_full_sync followed by waiting for note sync completion when enabled.
  • Kick off file_sync.request_sync and wait for file sync completion when file or config syncs are enabled.
fns_cli/sync_engine.py
Prevent the watcher from emitting redundant operations for server-triggered renames by honoring ignore flags on move events.
  • Update on_moved to short-circuit when either the source or destination path is currently ignored, in addition to exclusion checks.
  • Continue to schedule move operations only for non-ignored, non-excluded paths.
fns_cli/watcher.py


@Go1c Go1c merged commit cfa34a2 into main Apr 14, 2026
4 checks passed
@Go1c Go1c deleted the fix/attachment-sync branch April 14, 2026 07:17

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue and left some high-level feedback:

  • In SyncEngine.run's _on_reconnect handler, consider wrapping the _watch_enabled flip in a try/finally so that the watcher is always re-enabled even if _initial_sync raises, to avoid leaving the daemon in a permanently non-watching state.
  • The reconnect callback registration in WSClient only allows a single _on_reconnect handler; if you anticipate multiple components needing reconnect behavior, it may be cleaner to support a list of handlers or a small event mechanism instead of a single stored callable.
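
For the first point, the guard could look like this (a sketch, assuming the attribute and method names from the PR description):

```python
import asyncio

class Engine:
    """Stand-in for SyncEngine with a watcher-enable flag and a resync step."""

    def __init__(self, fail=False):
        self.watch_enabled = True
        self.fail = fail

    async def initial_sync(self):
        if self.fail:
            raise RuntimeError("sync failed")

async def _on_reconnect(engine):
    # Pause the watcher for the duration of the re-sync; the finally block
    # guarantees it is re-enabled even if initial_sync raises.
    engine.watch_enabled = False
    try:
        await engine.initial_sync()
    finally:
        engine.watch_enabled = True

async def demo():
    ok = Engine()
    await _on_reconnect(ok)
    bad = Engine(fail=True)
    try:
        await _on_reconnect(bad)
    except RuntimeError:
        pass
    return ok.watch_enabled and bad.watch_enabled

result = asyncio.run(demo())
```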
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `SyncEngine.run`'s `_on_reconnect` handler, consider wrapping the `_watch_enabled` flip in a `try/finally` so that the watcher is always re-enabled even if `_initial_sync` raises, to avoid leaving the daemon in a permanently non-watching state.
- The reconnect callback registration in `WSClient` only allows a single `_on_reconnect` handler; if you anticipate multiple components needing reconnect behavior, it may be cleaner to support a list of handlers or a small event mechanism instead of a single stored callable.

## Individual Comments

### Comment 1
<location path="fns_cli/file_sync.py" line_range="344-354" />
<code_context>
+            return
+        total_expected = self._expected_modify + self._expected_delete
+        total_received = self._received_modify + self._received_delete
+        if total_received >= total_expected:
+            log.info(
+                "FileSync complete: %d modified, %d deleted",
+                self._received_modify, self._received_delete,
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Using '>=' here can hide protocol mismatches or double-processing of messages.

With `total_received >= total_expected`, duplicate or double-counted messages will still mark the sync as complete, hiding protocol mismatches. If the protocol guarantees exact counts, consider using `==` and logging when `total_received > total_expected` so these issues are visible. Even if you keep the tolerance, logging any overshoot would help surface subtle bugs.

```suggestion
    def _check_complete(self) -> None:
        if not self._got_end:
            return

        total_expected = self._expected_modify + self._expected_delete
        total_received = self._received_modify + self._received_delete

        # Not done yet; still waiting for more messages.
        if total_received < total_expected:
            return

        if total_received == total_expected:
            log.info(
                "FileSync complete: %d modified, %d deleted",
                self._received_modify,
                self._received_delete,
            )
        else:
            # We received more messages than expected; this may indicate a protocol
            # mismatch or double-processing, so surface it explicitly.
            log.warning(
                "FileSync received more messages than expected "
                "(expected=%d, received=%d; modified=%d, deleted=%d). "
                "Treating sync as complete.",
                total_expected,
                total_received,
                self._received_modify,
                self._received_delete,
            )

        self._sync_complete = True
```
</issue_to_address>
