# Codex legacy migration
Greenfield Codex releases do not replay an unbounded chain of old SQL migrations as their primary upgrade story. Instead:
- Baseline schema — Arca applies one manifest-defined DDL snapshot on Turso;
  `schema_version` holds the single maintained `BASELINE_VERSION` (see
  `crates/vox-db/src/schema/manifest.rs`). Any `MAX(schema_version)` not equal
  to that baseline is treated as non-baseline / legacy for normal opens. Legacy
  multi-row chains require export → fresh DB → import.
- Importers — Rust modules read legacy exports or attached old DBs and write
  normalized rows into the new baseline.
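The detection rule above can be sketched in Python against plain SQLite (an illustrative stand-in for the Rust/Turso implementation; the table layout and the value of `BASELINE_VERSION` are assumptions here, the real constant lives in `crates/vox-db/src/schema/manifest.rs`):

```python
import sqlite3

BASELINE_VERSION = 12  # hypothetical value for illustration

def is_legacy_chain(conn: sqlite3.Connection) -> bool:
    """True unless the store holds exactly one baseline row in schema_version."""
    rows = conn.execute("SELECT schema_version FROM schema_version").fetchall()
    # A multi-row chain, an empty table, or an off-baseline version all count as legacy.
    return len(rows) != 1 or rows[0][0] != BASELINE_VERSION

# Demo: a fresh baseline store passes; appending a second version row flags it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schema_version (schema_version INTEGER NOT NULL)")
conn.execute("INSERT INTO schema_version VALUES (?)", (BASELINE_VERSION,))
assert not is_legacy_chain(conn)
conn.execute("INSERT INTO schema_version VALUES (?)", (BASELINE_VERSION + 1,))
assert is_legacy_chain(conn)
```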
## API surface (crate)
`vox_db::codex_legacy` in `crates/vox-db/src/codex_legacy.rs` — `verify_legacy_store`, `LegacyImportSource`, JSONL export/import helpers.
## Shipped CLI (minimal `vox` binary)
- `vox codex verify` — connection + `schema_version` + manifest-derived reactivity tables + legacy-chain flag
- `vox codex export-legacy` — dump portable JSONL artifact (`LEGACY_EXPORT_TABLES` — full baseline user tables except `schema_version`)
- `vox codex import-legacy` — full snapshot restore: DELETE all `LEGACY_EXPORT_TABLES` on the target, then INSERT rows from JSONL (fresh baseline DB only; not a merge)
- `vox codex cutover` — local legacy file → timestamped `codex-cutover-*.jsonl` + `.sidecar.json`, new `--target-db`, import, verify
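A minimal sketch of the export/import contract behind `export-legacy` and `import-legacy`. The JSONL record shape, the single `notes` table standing in for `LEGACY_EXPORT_TABLES`, and the column handling are all assumptions; the real artifact format lives in `crates/vox-db/src/codex_legacy.rs`:

```python
import json
import os
import sqlite3
import tempfile

# Hypothetical stand-in for LEGACY_EXPORT_TABLES (baseline user tables,
# excluding schema_version).
LEGACY_EXPORT_TABLES = ["notes"]

def export_legacy(conn: sqlite3.Connection, path: str) -> None:
    """Write one JSONL record per row: {"table": ..., "row": {...}}."""
    with open(path, "w") as f:
        for table in LEGACY_EXPORT_TABLES:
            cur = conn.execute(f"SELECT * FROM {table}")
            cols = [d[0] for d in cur.description]
            for row in cur:
                f.write(json.dumps({"table": table, "row": dict(zip(cols, row))}) + "\n")

def import_legacy(conn: sqlite3.Connection, path: str) -> None:
    """Full snapshot restore: DELETE every export table, then INSERT from JSONL."""
    for table in LEGACY_EXPORT_TABLES:
        conn.execute(f"DELETE FROM {table}")  # snapshot semantics, not a merge
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            cols = list(rec["row"])
            conn.execute(
                f"INSERT INTO {rec['table']} ({','.join(cols)}) "
                f"VALUES ({','.join('?' * len(cols))})",
                [rec["row"][c] for c in cols],
            )
    conn.commit()

# Round-trip demo between two in-memory stores.
path = os.path.join(tempfile.mkdtemp(), "codex-cutover-demo.jsonl")
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
src.execute("INSERT INTO notes VALUES (1, 'hello')")
export_legacy(src, path)

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
import_legacy(dst, path)
```

Note the DELETE-then-INSERT shape: importing into a non-empty target silently discards its rows, which is why the shipped CLI restricts `import-legacy` to a fresh baseline DB.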
See cli.md.
## Training telemetry SQLite sidecar (not JSONL cutover)
When the canonical `vox.db` is still on a legacy chain, `VoxDb::connect_default` returns `LegacySchemaChain` until you export, re-initialize on the baseline, and import. Mens training does not open a separate telemetry file automatically. After you migrate the main DB, all training rows use the canonical file.
Operator guide: `how-to-voxdb-canonical-store`.
## Import sources
| Source | Notes |
|---|---|
| Turso file / remote CodeStore | Full relational + CAS |
| Orchestrator `memory/` files | `vox codex import-orchestrator-memory --dir … --agent-id …` |
| Skill bundles | `vox codex import-skill-bundle --file …` (JSON descriptor) |
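One way to implement the "attached old DB" path from the table above: attach the legacy file to the baseline connection and copy rows across with a normalizing SELECT. All table and column names here are invented for illustration; the real importers are the Rust modules in the crate:

```python
import os
import sqlite3
import tempfile

def import_from_attached(dest: sqlite3.Connection, legacy_path: str) -> None:
    """Attach a legacy DB and copy its rows into a baseline table, normalizing
    along the way (here: a hypothetical rename of column txt -> body)."""
    dest.execute("ATTACH DATABASE ? AS legacy", (legacy_path,))
    dest.execute("INSERT INTO notes (id, body) SELECT id, txt FROM legacy.old_notes")
    dest.commit()  # commit before DETACH; SQLite refuses to detach mid-transaction
    dest.execute("DETACH DATABASE legacy")

# Demo: a throwaway legacy file with an old-style table.
legacy_path = os.path.join(tempfile.mkdtemp(), "old.db")
old = sqlite3.connect(legacy_path)
old.execute("CREATE TABLE old_notes (id INTEGER, txt TEXT)")
old.execute("INSERT INTO old_notes VALUES (1, 'migrate me')")
old.commit()
old.close()

dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
import_from_attached(dest, legacy_path)
```

Doing the copy as `INSERT … SELECT` keeps the whole normalization inside one SQL statement, so large legacy tables never have to be materialized row by row in the host language.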
See Codex vNext schema and ADR 004.