## Summary
Locally edited encrypted dotfiles are not detected as changed and are never re-pushed to the sync repo. This means changes made between syncs are silently lost when syncing to other machines.
## Why
Tether's change detection for encrypted dotfiles relies on the hash stored in `~/.tether/state.json` rather than re-hashing the actual file on disk. When a file is edited locally after a successful sync, the state.json hash becomes stale, but the sync logic compares this stale hash against the remote encrypted copy (which also has the old content). Both match, so tether concludes "no changes" — it never re-hashes the local file on disk to detect the divergence.
This is especially problematic for files managed by other tools (e.g. `claude plugin uninstall` modifying `installed_plugins.json`, or settings editors modifying `settings.json`), where changes happen outside of tether's awareness.
## As-Is
- File syncs successfully → state.json records `hash: "abc123", synced: true`
- User or tool edits the file locally (new content, new mtime)
- `tether sync` runs:
  - Pulls the remote encrypted file and decrypts it
  - Compares the remote decrypted content hash against the state.json hash → they match (both are old)
  - Skips re-hashing the local file on disk — assumes local matches state.json
  - Marks `synced: true`
- Local changes are never pushed. Other machines receive the stale version.
Workarounds tried — state manipulation:
- Setting `synced: false` in state.json → tether re-evaluates but still doesn't re-hash the local file
- Clearing the hash in state.json → same result
- Removing the entry from state.json entirely → tether treats the remote as authoritative and pulls it (potentially overwriting local changes)
- `tether daemon restart` → no effect
Workarounds tried — filesystem:
- `touch`-ing the file to update mtime → not detected (consistent with the diagnosis: mtime isn't checked)

Only workaround that works: deleting the `.enc` file from `~/.tether/sync/profiles/` and re-running `tether sync`, which forces re-encryption from the local file.
## To-Be
### Hash on plaintext, not ciphertext
A fundamental constraint: hashing must be performed on the plaintext file content, not the encrypted output. If encryption uses a non-deterministic cipher (e.g. random nonce per encryption), the same plaintext produces different ciphertext each time. Hashing ciphertext would cause spurious "changed" detection on every sync.
`state.json` should store plaintext content hashes.
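The effect can be demonstrated with a toy non-deterministic cipher. This is a stand-in for a real AEAD scheme such as AES-GCM, not tether's actual crypto; `toy_encrypt` and its keystream construction are purely illustrative:

```python
import hashlib
import os

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy non-deterministic cipher: XOR with a keystream derived from
    key + a random nonce. Illustrative only; not real encryption."""
    nonce = os.urandom(12)
    keystream = b""
    counter = 0
    while len(keystream) < len(plaintext):
        keystream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream))
    return nonce + ct

key = b"k" * 32
plaintext = b'{"theme": "dark"}\n'

ct1 = toy_encrypt(key, plaintext)
ct2 = toy_encrypt(key, plaintext)

# Same plaintext, different ciphertext: ciphertext hashes are useless for
# change detection, while the plaintext hash is stable across syncs.
print(hashlib.sha256(ct1).hexdigest() == hashlib.sha256(ct2).hexdigest())  # False
print(hashlib.sha256(plaintext).hexdigest())
```

The random nonce makes every encryption of the same plaintext produce different bytes, which is exactly why state.json must record hashes of the plaintext.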
### Three-way comparison
On each sync, tether needs to compare three values: the local file on disk (plaintext hash), the state.json cached hash (last known synced state), and the remote encrypted file (decrypted plaintext hash). This produces four cases:
| Local vs State | Remote vs State | Action |
| --- | --- | --- |
| same | same | No-op |
| different | same | Push local (this is the bug fix) |
| same | different | Pull remote |
| different | different | Conflict — error, skip this file, continue syncing others |
Additional cases to handle:
- Remote `.enc` file doesn't exist (new dotfile, first sync) → push
- Local file deleted → `stat()` will fail; error or propagate deletion to remote
- Concurrent syncs from two machines → last writer wins (existing last-write-wins strategy); acknowledged as a known limitation
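The table plus the edge cases above can be sketched as one decision function. This is a sketch, not tether's API: the function name, the `None`-means-missing convention, and the deletion policy are all assumptions.

```python
from typing import Optional

def decide(local_hash: Optional[str],
           cached_hash: Optional[str],
           remote_hash: Optional[str]) -> str:
    """Map plaintext hashes to a sync action per the comparison table.
    None means the corresponding file (or state entry) is missing."""
    if remote_hash is None:            # no .enc yet: new dotfile, first sync
        return "push"
    if local_hash is None:             # local file deleted (stat() failed)
        return "propagate-delete"      # or surface an error, per policy
    local_changed = local_hash != cached_hash
    remote_changed = remote_hash != cached_hash
    if not local_changed and not remote_changed:
        return "noop"
    if local_changed and not remote_changed:
        return "push"                  # the bug fix: local edits get pushed
    if not local_changed and remote_changed:
        return "pull"
    return "conflict"                  # skip this file, keep syncing others
```

Keeping the decision pure (hashes in, action out) makes all six branches trivially unit-testable, independent of any I/O.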
### mtime fast-path
Re-hashing every tracked file on every sync is unnecessary. Use filesystem mtime + file size as a fast-path:
```
if file.mtime == state_mtime and file.size == state_size:
    // no change, skip hashing
else:
    // re-hash from disk
```
The comparison must use `==`, not `<=`. Using `<=` would miss files restored from backup with an older mtime. Git uses `mtime == cached_mtime` for exactly this reason. Adding file size is cheap and catches in-place edits that preserve mtime but change content.
Important: the mtime comparison must read the actual filesystem mtime via `stat()`, not the cached `last_modified` value in state.json. The current bug is partly caused by trusting the cached value.
A filesystem watcher (fsevents/inotify) is not the right solution — tether sync is a point-in-time operation, and watchers miss changes during sleep/reboot/crashes. You'd still need the hash-on-sync fallback, making the watcher pure overhead.
### Post-push/pull state update
state.json must only be updated after the I/O operation succeeds. This applies symmetrically to pushes and pulls:
- Push: encrypt → push to remote → on success → update state
- Pull: decrypt → write to disk → on success → update state
If state is updated optimistically and the operation fails (network error, disk full, permissions), state diverges from reality.
state.json writes should be atomic (write to temp file, then rename) to prevent corruption from crashes mid-write.
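The atomic write can be sketched as a temp file in the same directory followed by a rename (the helper name is illustrative; `os.replace` is atomic on POSIX filesystems):

```python
import json
import os
import tempfile

def write_state_atomic(state: dict, path: str) -> None:
    """Write state.json so a crash mid-write never leaves a torn file:
    write to a temp file in the same directory, fsync, then rename."""
    dir_ = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_, prefix=".state-")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f, indent=2)
            f.flush()
            os.fsync(f.fileno())    # data is on disk before the rename
        os.replace(tmp, path)       # readers see either old or new file
    except BaseException:
        os.unlink(tmp)              # clean up the temp file on failure
        raise
```

The temp file must live in the same directory as the target, since rename is only atomic within a single filesystem.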
### Proposed sync flow
```
for each tracked_file:
    local_mtime  = stat(file).mtime       // real filesystem metadata
    local_size   = stat(file).size
    cached_mtime = state.json[file].last_modified
    cached_size  = state.json[file].size
    cached_hash  = state.json[file].hash

    // mtime + size fast-path
    if local_mtime == cached_mtime and local_size == cached_size:
        local_hash = cached_hash              // trust cache
    else:
        local_hash = sha256(read(file))       // re-hash from disk

    // Remote may not exist (new file, or remote was removed) → push
    if not remote_enc_file_exists:
        encrypt_and_push(file)
        update_state(local_hash, local_mtime, local_size)   // after successful push
        continue

    remote_hash = sha256(decrypt(remote_enc_file))

    if local_hash == cached_hash and remote_hash == cached_hash:
        continue                              // no changes
    else if local_hash != cached_hash and remote_hash == cached_hash:
        // Local changed, remote didn't → push (this is the bug fix)
        encrypt_and_push(file)
        update_state(local_hash, local_mtime, local_size)   // after successful push
    else if local_hash == cached_hash and remote_hash != cached_hash:
        // Remote changed, local didn't → pull
        decrypt_and_write(remote_file)
        new_mtime = stat(file).mtime          // re-read after write
        new_size  = stat(file).size
        update_state(remote_hash, new_mtime, new_size)      // after successful write
    else:
        // Both changed → conflict
        log_error("Conflict: {file} changed both locally and remotely. Skipping.")
        continue                              // sync remaining files
```
## Scope
The root cause (not re-hashing the local file) may also affect unencrypted dotfiles (`encrypt_dotfiles = false`). If the same comparison logic is shared, the fix should apply to both paths.
## Environment
- tether 1.11.7 (latest as of 2026-04-04)
- macOS (Darwin 25.2.0, arm64)
- `encrypt_dotfiles = true`
- Affected files: any encrypted dotfile edited between syncs