Hacker News | dtkav's comments

The Obsidian kanban plugin does this. I recently added support for it in my real-time-collaboration plugin (Relay).

I was early at Planet (and fresh out of college), and the internal transition toward government money was very painful for the bright-eyed save-the-world hackers.

The initial technical architecture was aligned with broad good (low-res, global, daily, openly available), but the shift toward selling high-res satellite capabilities directly to governments has been tough to see.

Their role in providing a public ledger is still a net good IMO, and I doubt Planet adds much capability for the US warfighter (they have way better stuff). It's harder to say for their deals with other governments that have fewer native space capabilities.


This is really wholesome. Thanks for sharing.

IMO Obsidian Sync is a fantastic solution for E2EE device sync, and a good, honest business model to fund Obsidian's development.

What complaints are you hearing?


Great work Kavin!

This is a super interesting space, and lots of fun and difficult problems to tackle.

A few trailheads of interesting complexity:

1. Concurrent machine edits - in particular, handling links to renamed files across devices. This is a case where CRDTs fall over: they converge, but the operations are not idempotent. For example, renaming a file [[hello 1]] to [[hello 2]] when multiple devices are online can result in [[hello 22]], because deletes merge before inserts.
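The anomaly above can be demonstrated with a toy model (not real Yjs internals): deleting the "1" is idempotent because both replicas tombstone the same element id, but each replica's insert of "2" is a distinct new element, so both survive the merge.

```python
def merge(elements_a, tombstones_a, elements_b, tombstones_b):
    """Merge two replicas: union of inserted elements, union of tombstones."""
    elements = {**elements_a, **elements_b}   # inserts are distinct per replica
    tombstones = tombstones_a | tombstones_b  # deleting the same id twice is a no-op
    # Visible text: live elements in id order (a stand-in for real CRDT ordering).
    return "".join(ch for eid, ch in sorted(elements.items()) if eid not in tombstones)

# Shared starting state: "[[hello 1]]", each char keyed by (position, replica).
base = {(i, 0): ch for i, ch in enumerate("[[hello 1]]")}
one_id = (8, 0)  # id of the "1" character

# Replica A renames: tombstone "1", insert "2" with a fresh A-side id.
a_elems, a_tombs = {**base, (8.5, 1): "2"}, {one_id}
# Replica B performs the same rename concurrently, with its own fresh id.
b_elems, b_tombs = {**base, (8.5, 2): "2"}, {one_id}

print(merge(a_elems, a_tombs, b_elems, b_tombs))  # -> "[[hello 22]]"
```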

2. Ingesting disk edits in the age of Claude Code. The intended behavior can change based on what I'm calling the "intent fidelity spectrum". I've been using that spectrum as a guide for when to apply merges in "text space" vs. "CRDT space", including sometimes withholding or cancelling ops based on their origin (e.g. Obsidian processFile calls) or offline status. For example, if you made edits while offline and have a least common ancestor, you may be able to look for conflicts via diff3 and then conditionally use diff-match-patch if there are none, or surface the conflict to the user if there's no good merge strategy given the low level of intent.
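A minimal sketch of that conditional strategy, using stdlib difflib rather than real diff3/diff-match-patch: diff both sides against the least common ancestor, auto-merge only when the edited base regions are disjoint, and otherwise report a conflict.

```python
from difflib import SequenceMatcher

def edits_against(base, other):
    """Non-equal opcodes as (base_start, base_end, replacement)."""
    return [(i1, i2, other[j1:j2])
            for tag, i1, i2, j1, j2 in SequenceMatcher(None, base, other).get_opcodes()
            if tag != "equal"]

def three_way_merge(base, local, remote):
    """Return merged text, or None on a conflict (overlapping edits)."""
    local_edits = edits_against(base, local)
    remote_edits = edits_against(base, remote)
    for i1, i2, _ in local_edits:
        for j1, j2, _ in remote_edits:
            if i1 < j2 and j1 < i2:  # both sides touched the same base span
                return None
    out, pos = [], 0
    for i1, i2, rep in sorted(local_edits + remote_edits):
        out.append(base[pos:i1])
        out.append(rep)
        pos = i2
    out.append(base[pos:])
    return "".join(out)

print(three_way_merge("hello world", "hello brave world", "hello world!"))
# -> "hello brave world!"
print(three_way_merge("hello world", "hello there", "hello earth"))
# -> None (both sides rewrote "world")
```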

3. History and memory management - how do you recover state if a user has a competing sync service that causes an infinite loop of file creation/deletion? This can be difficult with CRDTs because the tombstones just keep syncing back and forth between peers and are hard to clear. It's significantly worse if you use Y.PermanentUserData (do not recommend...).
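The reason those tombstones can't simply be dropped is the anti-resurrection property, which a toy 2P-set makes concrete: once an id is tombstoned, merging state from a stale peer that still holds the live entry cannot bring it back, but only because the tombstone is kept (and synced) forever.

```python
class TwoPhaseSet:
    """Toy 2P-set: an add-set plus a tombstone-set; removal is permanent."""
    def __init__(self):
        self.added, self.removed = set(), set()
    def add(self, x): self.added.add(x)
    def remove(self, x): self.removed.add(x)
    def merge(self, other):
        self.added |= other.added
        self.removed |= other.removed
    def live(self):
        return self.added - self.removed

a, b = TwoPhaseSet(), TwoPhaseSet()
a.add("note.md")
b.merge(a)            # both peers see the file
a.remove("note.md")   # peer A deletes it
a.merge(b)            # stale peer B still carries it as "live"
print(a.live())       # -> set(): the tombstone wins, no resurrection
```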


Hey Daniel, it's so awesome to see you here.

1. Spot on. This is the ceiling of text-based CRDTs. Since we last spoke, I fixed the structural side of renames by moving path authority onto stable IDs, but links inside the note body are still plain text, so concurrent rename-driven rewrites can duplicate.

I realised that this problem is uniquely painful in Obsidian because of the "Automatically update internal links" setting. Since people use Obsidian as a PKM, the app itself makes machine edits, which turns this CRDT edge case into a guaranteed anomaly. That's bad.

Notion can make this work because of their AST-based DB, afaik. I'm sure you've heard of Ink & Switch's Peritext, but that's quite experimental (sidenote: Keyhive, also by them, is a possible solution to marrying E2EE and CRDTs).

I'm basically accepting this tradeoff: semantic intent loss in exchange for simplicity.

2. I love the 'intent fidelity spectrum' framing. What I have today is a good solution to the 'mechanical filesystem-bridge' problem - trailing-edge coalescing, self-echo suppression, and active-editor recovery - but not yet a full answer to the semantic merge problem.

If I had to implement an LCA merge, I'd have to store historical snapshots locally per file. Since I'm not currently sharding Yjs per file, that'd be quite inefficient - though Relay could easily instantiate a ghost (I see the wisdom in your architecture here!).

But an LCA merge would also halt on hard conflicts, taking away from the core promise of a CRDT. I think which UX is better (LCA or not) is debatable, but you cover the bases with DMP and conflict markers.

3. Ah, a competing sync layer is still the classic "please don't do that" configuration.

I retain tombstones for anti-resurrection correctness, so they can blow up (though I'm exploring an epoch-fenced vacuum for tombstone GC). I do have automatic daily snapshots with a recovery UI built into the plugin; that would be my best answer.

..

Mentally, a blocker for me on refactoring to sharded Yjs is large offline cross-file structural changes like folder renames. Do you try to preserve a vault-level consistency boundary, or do you let the file docs converge independently and hide the intermediate tearing?

I can tell that you've spent a lot of time in the deep end. I’ll bump our email thread too, would love to compare scars.


We let docs converge independently. This is a problem for bases in the current sync engine, but something we're resolving soon with "continuous-background-sync". I think it is also more scalable and matches the file model better.

We landed on folder-level sync rather than vault-level sync, so we have a map CRDT that corresponds to each shared folder. In our model these CRDTs are the ones that can explode, whereas the doc-level ones can kind of be fixed up by dragging a file out of the folder and back in again, which grabs a new "inode" for it.
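The drag-out/drag-in fix-up can be sketched roughly like this (names are illustrative, not Relay's actual API): the shared folder is a map from path to a doc "inode", so re-adding a file mints a fresh inode and detaches the broken doc state.

```python
import itertools

_inodes = itertools.count(1)  # stand-in for globally unique doc ids

class SharedFolder:
    """Toy model of a per-folder map CRDT: path -> doc inode."""
    def __init__(self):
        self.entries = {}
    def add(self, path):
        self.entries[path] = next(_inodes)  # fresh inode, fresh doc CRDT
        return self.entries[path]
    def drag_out(self, path):
        return self.entries.pop(path)       # detach the old doc state

folder = SharedFolder()
old = folder.add("notes/todo.md")
folder.drag_out("notes/todo.md")  # remove the corrupted doc from the folder
new = folder.add("notes/todo.md") # re-share it: same path, brand-new inode
print(old != new)                 # -> True
```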

If I were to start again I think I'd try to build a file-based persistence layer based on prolly-trees to better adhere to the file-over-app philosophy.


I'm working on something similar with https://github.com/dtkav/agent-creds though I keep growing the scope.

The model is solid. It feels like the right way to use YOLO mode.

I've been working on making the auth setup more granular with macaroons and third party caveats.

My dream is to have plugins for upstreams using OpenAPI specs and then make it really easy to stitch together grants across subsets of APIs.

I think there's a product in here somewhere...


Hey, developer behind Relay here.

Yes, our sync engine is home-grown. We use CRDTs to provide real-time, Google-Docs-like collaboration, which is not something Obsidian supports (yet... I think they are working on it).

Note that you can self-host a Relay server and join it to our network. This gives you complete control over your data and unmetered storage. We do still charge for seats if you have more than 3 collaborators.


I've been working on something similar (with claude code).

It's a sandbox that uses envoy as a transparent proxy locally, and then an external authz server that can swap the creds.

The idea extends further: the goal is to let an org basically create its own authz system for arbitrary upstreams, and then let users leverage macaroons to attenuate the tokens at runtime.
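What makes macaroons work for runtime attenuation is HMAC chaining. A minimal sketch (stdlib only, ignoring third-party caveats and real serialization): each appended caveat is folded into the signature, so a holder can narrow a token offline but can never widen it, since removing a caveat would require inverting HMAC.

```python
import hashlib
import hmac

def _chain(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def mint(root_key: bytes, identifier: str) -> dict:
    """Service mints a macaroon; only it knows root_key."""
    return {"id": identifier, "caveats": [], "sig": _chain(root_key, identifier)}

def attenuate(m: dict, caveat: str) -> dict:
    """Any holder can add a caveat; the signature re-chains over it."""
    return {"id": m["id"],
            "caveats": m["caveats"] + [caveat],
            "sig": _chain(m["sig"], caveat)}

def verify(root_key: bytes, m: dict, holds) -> bool:
    """Service recomputes the chain and evaluates every caveat."""
    sig = _chain(root_key, m["id"])
    for caveat in m["caveats"]:
        if not holds(caveat):
            return False
        sig = _chain(sig, caveat)
    return hmac.compare_digest(sig, m["sig"])

root = b"service-root-key"
m = attenuate(mint(root, "agent-session-42"), "upstream = api.github.com")
ctx = {"upstream = api.github.com"}
print(verify(root, m, lambda c: c in ctx))  # -> True
m["caveats"][0] = "upstream = *"            # tampering breaks the chain
print(verify(root, m, lambda c: True))      # -> False
```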

It isn't finished but I'm trying to make it work with ssh/yubikeys as an identity layer. The authz macaroon can have a "hole" that is filled by the user/device attestation.

The sandbox has some nice features like browser forwarding for Claude oauth and a CDP proxy for working with Chrome/Electron (I'm building an Obsidian plugin).

I'm inspired by a lot of the fly.io stuff in tokenizer and sprites. Exciting times.

https://github.com/dtkav/agent-creds


I work on a plugin/platform that makes Obsidian collaborative (relay.md).

Working with other people gives you good habits against hoarding because you have a sense of the audience and what might be useful to them.

We also support the kanban plugin so that works well to track and share what we're working on.


That’s a great point — collaboration creates a natural “audience filter”, which reduces hoarding because you’re writing for someone, not just storing for yourself.

Kanban as a shared representation of “active work” also feels like the cleanest project-context signal: it’s explicit, lightweight, and already part of how the team coordinates.

Curious: in your experience with relay.md, what actually changes behavior the most?

1. social accountability (others will see messy notes)

2. having a shared kanban/project board

3. conventions/templates for how notes get promoted from “rough” to “reference”

Details in my HN profile/bio if you want more context on the “active projects as constraints” angle I’m exploring.


I think mostly social accountability.

My cofounder actually has a bunch of Claude Code skills that surface context into our daily notes (from our meeting notes, transcripts, CRM, Gmail, etc.), but it's sort of on him to show that it's useful... so while he's still "hoarding" outside the shared context, it's with an eye toward delivering actual value inside it.

Feels pretty different from the fauxductivity traps of solo second brain stuff.


That makes a lot of sense. Social accountability is a surprisingly powerful “noise filter” — once other people will see the mess, you naturally promote only what’s legible and useful.

And your cofounder’s setup is interesting because it’s not “PKM for PKM’s sake”, it’s context injection tied to an actual delivery surface (daily notes). That feels like the right wedge: the system earns its keep only if it helps someone ship something this week, not just accumulate.

Curious: what’s the single best signal that his context surfacing is “working”? Fewer missed follow-ups, faster re-entry into threads, or just less time spent searching across Gmail/CRM/transcripts?


We just have all meeting transcripts go to Obsidian (where they get processed/mined), along with our collaborative standup notes (our startup makes Obsidian collaborative), and then use Claude Code to summarize each day into the next.

We avoid the browser agents entirely, and thus avoid the scattered context. Claude code + markdown files in our vault.

It works remarkably well. I am bullish on unix tools and text files - DOM parsing, RAG, etc. feel like solutions to unnecessary problems.

