feat(server): add plikd migrate command for live backend-to-backend migration#728
Add a new `plikd migrate` CLI command that migrates metadata and file data directly between backends without intermediate archive files.

## Metadata migration (server/metadata/migrator.go)

- Streams all records from source to destination in FK-safe order: users → tokens → uploads (unscoped) → files → settings
- Uses src.log.Infof/Warningf for progress (consistent with exporter/importer)
- Supports --ignore-errors to skip individual record failures

## Data migration (server/data/migrator.go)

- Parallel worker pool (default: 4, configurable via --workers)
- Uses context cancellation for clean fatal-error propagation (no panics)
- Copies only `uploaded` and `removed` files (both have backing data)
- Skips `missing`, `uploading`, `deleted` (no data in backend)
- Supports --dry-run and --ignore-errors

## CLI command (server/cmd/migrate.go)

- Flags: --to (required), --metadata-only, --data-only, --workers, --dry-run, --ignore-errors
- --data-only and --metadata-only are mutually exclusive
- Dry-run enumerates and prints all items without writing; errors are reported

## Tests

- server/metadata/migrator_test.go: basic migration, soft-deleted uploads, ignore-errors
- server/data/migrator_test.go: blob streaming, status filtering, dry-run, ignore-errors, multi-worker concurrency
- testing/migrate/run.sh: e2e test using `plikd fakedb` to seed a source SQLite DB, migrate to destination, verify record counts, dry-run writes nothing, re-run is idempotent

## Docs

- docs/operations/migration.md: full migration guide with 4 real-world scenarios
- Side-nav entry in docs/.vitepress/config.js
- Cross-link in docs/backends/metadata.md
- server/ARCHITECTURE.md + testing/ARCHITECTURE.md updated
- AGENTS.md Key Files table updated
## What

Adds a new `plikd migrate` CLI command for direct, live backend-to-backend migration of metadata and file data — no intermediate archives needed.
## Why

Currently the only way to move data between backends is via `plikd export` + `plikd import`, which requires writing a compressed intermediate archive to disk. For large deployments (e.g. migrating from SQLite → PostgreSQL, or local files → S3) this is wasteful and slow. The new `migrate` command streams data directly between backends, using a parallel worker pool for file blobs.
## Changes

- `server/cmd/migrate.go` — CLI command
  - `plikd migrate --to <target.cfg>` migrates both metadata and file data
  - Flags: `--metadata-only`, `--data-only`, `--workers N` (default 4), `--dry-run`, `--ignore-errors`
  - `--dry-run` enumerates all items and prints them without writing anything
  - `--ignore-errors` lets the migration continue past individual record/file failures (useful for re-runs)
- `server/metadata/migrator.go` — metadata streamer
  - Streams records in FK-safe order; uses the source logger (`src.log`) for progress output
- `server/data/migrator.go` — file blob parallel copier
  - Copies only `uploaded` and `removed` files; skips `missing`, `uploading`, `deleted`

## Testing
- `server/metadata/migrator_test.go` and `server/data/migrator_test.go` cover basic migration, soft-deleted uploads (FK integrity), dry-run, ignore-errors, status-based filtering, and multi-worker concurrency
- `testing/migrate/run.sh` uses `plikd fakedb` to seed a source SQLite DB, runs `plikd migrate`, verifies record counts match, checks that dry-run writes nothing, and confirms that a re-run with `--ignore-errors` is idempotent. No Docker required; included in `test_backends.sh`.

## Docs
- `docs/operations/migration.md` with 4 real-world scenarios (SQLite → PostgreSQL, local → S3, full migration, resuming a failed run)