Nice work on this! CLI tools for encryption are underrated—I find people are more likely to actually encrypt things when the friction is low.
One thing I learned building PrivaVault (an encrypted document management app, just launched) is that the key management piece becomes the real UX challenge. We ended up implementing a zero-knowledge architecture where keys never touch our servers, but the tradeoff is users need to understand they're responsible for their master password.
I'm curious about your approach to key derivation and storage for the RTTY-SODA system. Are you using libsodium's password hashing (Argon2) or handling that separately?
I made a couple of explicit assumptions to reduce UX friction, and I try to document and test them rather than hide them:
1. I’m aware that using size=PrivateKey.SIZE is not ideal, since that constant is shared between public and secret schemes. I rely on the fact that the sizes match in libsodium, and I enforce that assumption with tests so it fails loudly if that ever changes:
https://github.com/theosaveliev/rtty-soda/blob/main/tests/te...
2. For the salt, I intentionally avoid asking the user for an additional input. Instead, I hash a fixed long quote from Henry Fielding together with the user password. The assumption is that combining a short password with a long, fixed string still provides sufficient entropy for the KDF input and forces an attacker to recompute rainbow tables with the quote included, rather than reuse generic ones.
These tradeoffs are deliberate. I’m open to critique, especially if there’s a way to improve this without increasing UX complexity.
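The fixed-salt scheme in point 2 can be sketched as follows. This is a minimal illustration, not the project's actual code: the quote text, function names, and cost parameters are placeholders, and stdlib `scrypt` stands in for libsodium's Argon2 KDF.

```python
import hashlib

# Placeholder for the long, fixed Henry Fielding quote shipped with the app.
FIXED_QUOTE = b"placeholder for the long fixed quote bundled with the application"

def derive_key(password: bytes, size: int = 32) -> bytes:
    # Reduce the fixed quote to a per-application salt. Generic rainbow
    # tables built without the quote cannot be reused against this scheme.
    salt = hashlib.blake2b(FIXED_QUOTE, digest_size=16).digest()
    # Memory-hard KDF over the short user password with the derived salt
    # (scrypt here is a stdlib stand-in for libsodium's Argon2).
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=size)
```

The user still only types a password; the quote contributes fixed, public structure that makes precomputed tables scheme-specific rather than generic.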
The client-side encryption approach is the right call here. We built PrivaVault (encrypted doc management for immigration cases) and learned quickly that "we encrypt it" isn't enough reassurance for people dealing with passports, visas, and financial docs. End-to-end encryption, in which the keys never touch our servers, was a fundamental requirement.
One thing we wrestled with: how do you make encrypted search actually useful? You can't just grep through ciphertext. We ended up doing encrypted metadata tagging client-side before upload, but it's still limited compared to plaintext search. I am curious about how others have addressed this issue without jeopardizing the zero-knowledge architecture.
Love seeing more privacy-first tools in this space. One thing I've learned building PrivaVault (encrypted doc management, launching this weekend) is that users often underestimate how much metadata leaks even when the content is processed locally. For PDF tools specifically, creation timestamps, software versions, and author info can persist through merges unless explicitly stripped. Would be curious if merge-pdf.app handles metadata sanitization – it's one of those edge cases that matters a lot for privacy-conscious users but isn't always obvious at first glance.
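For concreteness, the sanitization step looks something like this. The dict is a stand-in for a PDF document-information dictionary (the key names are the standard ones); a real tool would rewrite the file with a PDF library rather than edit a dict.

```python
# Standard PDF DocInfo keys that identify who made the document,
# with what software, and when.
SENSITIVE_KEYS = {"/Author", "/Creator", "/Producer", "/CreationDate", "/ModDate"}

def strip_metadata(info: dict) -> dict:
    # Keep everything except the identifying fields; /Title and
    # similar content-level keys pass through untouched.
    return {k: v for k, v in info.items() if k not in SENSITIVE_KEYS}
```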
Many privacy policies explicitly state they can change at any time, with or without notice, making the problem even worse. So even if you carefully read and understood what you agreed to today, it could be completely different tomorrow.
This phenomenon is especially concerning for sensitive documents like immigration paperwork, medical records, or legal files. I've been working on PrivaVault (launching in 3 days) specifically because of this: it uses client-side encryption, so the service provider literally can't access your files, regardless of what the privacy policy says or how it changes. The architecture makes privacy a technical guarantee rather than a legal promise.

For anyone dealing with AI tools and sensitive docs right now: assume anything you upload can be read by the company, their employees, and potentially used for training unless they explicitly state otherwise AND use encryption where they don't hold the keys.
I've been thinking about this problem from the document side. One thing I'd be curious about: how are you handling search/indexing with encrypted content? We're wrestling with this for PrivaVault (encrypted doc management, launching next week).
This is exactly why local-first architecture matters. If your documents never leave your machine unencrypted, OS-level network privacy controls actually work in your favor rather than against you.
I've been building PrivaVault (encrypted doc management, launching next week) specifically with this threat model in mind: everything's encrypted client-side before any network activity happens. macOS can sniff the traffic all it wants, but it's just seeing encrypted blobs going to storage.
The right to deletion is intriguing, but there's a practical gap most people don't realize: you need to actually know which companies have your data before you can request deletion. That's the challenging part.
I've been building in the privacy space, and the pattern I see is that people forget they uploaded documents to random services years ago (old apartment applications, one-off tax prep services, etc.). By the time you remember to request deletion, the company might have been acquired or gone under, or you can't even recall the service name.
The better approach, IMO, is treating deletion as part of your upload workflow: either use services with auto-deletion built in, or keep a personal audit log of where you've shared sensitive docs. Prevention beats remediation every time.
The zero-knowledge architecture is clever here. One thing I'd be curious about—how do you handle key derivation from the passphrase? I've been building PrivaVault (encrypted doc management, launching in a week) and spent far too much time on this exact problem. We ended up using Argon2id with high iteration counts, but it creates this tension between security and UX since key derivation on every decrypt can feel sluggish on mobile. I would be interested in understanding the tradeoffs you made in this situation.
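One common way to ease that tension is to pay the derivation cost once per session rather than on every decrypt. A minimal sketch, assuming stdlib `scrypt` as a stand-in for Argon2id (the function name and cost parameters are illustrative, not PrivaVault's actual settings):

```python
import functools
import hashlib

@functools.lru_cache(maxsize=1)
def session_key(password: bytes, salt: bytes) -> bytes:
    # The expensive call: tune the cost parameters so one derivation
    # takes a noticeable fraction of a second on the slowest target
    # device. Later decrypts in the same session hit the cache.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
```

Locking the app or ending the session would clear the cached key with `session_key.cache_clear()`, so the slow path only runs at unlock time.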
The zero-knowledge architecture is crucial for this use case. One thing I've been contemplating while building PrivaVault (launching next week) is the tension between E2EE and search/organization features. Users need to find their documents quickly, but you can't build server-side search if you can't read the content.
We ended up implementing client-side encrypted search indices that sync across devices—adds complexity but preserves the zero-knowledge guarantee. Curious how Agam Space handles this? The demo looks clean, but I couldn't tell if search works on encrypted content or just filenames.
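The core of a client-side index in that spirit can be sketched with blind-index tokens: each term is replaced by a keyed hash before the index leaves the device, so the synced blob reveals no plaintext terms. This is a simplified illustration, not PrivaVault's implementation; `index_key` is hypothetical and would itself be derived from the user's master key, and the JSON blob would additionally be encrypted before upload.

```python
import hashlib
import hmac
import json

def blind_token(index_key: bytes, term: str) -> str:
    # Keyed hash of the normalized term; the server only ever sees these.
    return hmac.new(index_key, term.lower().encode(), hashlib.sha256).hexdigest()

def build_index(index_key: bytes, doc_id: str, text: str) -> str:
    # Deduplicated, sorted token set per document, serialized for sync.
    tokens = sorted({blind_token(index_key, w) for w in text.split()})
    return json.dumps({"doc": doc_id, "tokens": tokens})
```

A query is tokenized the same way on the client and matched against stored tokens. The tradeoff: token presence and frequency still leak some statistics, which is part of why this stays weaker than plaintext search.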
No, currently it doesn't have search functionality, and I have the same plan as yours: client-side indexes, persisted encrypted on the server side. It's the only way.