The app is a local-first budgeting tool — one HTML file, no backend, no accounts, no tracking. That's the whole point. But it also means there's no server to check a license, no account to suspend, and no technical way to prevent copying. This is the story of choosing shareware-style trust-based monetization instead of DRM.
Yup, me too. In fact, I might consider simple copyright for something like a board game. Granted, I’ve never registered an actual copyright either. I suppose I should try it out.
Balance Buckets helps you quickly answer the question: “How much is safe to spend right now?”
You set up a few buckets, then drop in your current balance, and it immediately shows you what's left. It's local-first (just localStorage): no bank login, no account linking, and no transaction-import rabbit hole. The goal is clarity in under a minute, not another finance app that demands setup overhead.
I built Balance Buckets because most small-business bank accounts don't offer a buckets or envelope-savings feature. I wanted a dead-simple tool that shows me where my money is: define buckets (fixed dollars or percentages) and track what's funded versus underfunded.
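To make the bucket mechanics concrete, here's a minimal sketch of how that math could work. This is an illustration, not Balance Buckets' actual code, and the `allocate` function and its report shape are hypothetical: fixed-dollar buckets fund first, percent buckets split what's left, and the remainder is the "safe to spend" number.

```python
def allocate(balance, buckets):
    """Hypothetical bucket math: buckets is a list of (name, kind, amount),
    where kind is 'fixed' (a dollar target) or 'percent' (share of leftover)."""
    remaining = balance
    report = {}
    # Fund fixed-dollar buckets first; a bucket is underfunded if the
    # balance runs out before its target is met.
    for name, kind, amount in buckets:
        if kind == "fixed":
            funded = min(amount, max(remaining, 0))
            report[name] = {"target": amount, "funded": funded,
                            "underfunded": funded < amount}
            remaining -= amount
    # Percent buckets each take their share of what's left after fixed buckets.
    leftover = max(remaining, 0)
    for name, kind, amount in buckets:
        if kind == "percent":
            funded = leftover * amount / 100
            report[name] = {"target": funded, "funded": funded,
                            "underfunded": False}
            remaining -= funded
    # Whatever survives both passes is safe to spend right now.
    report["safe_to_spend"] = max(remaining, 0)
    return report
```

With a $1,000 balance, a $600 fixed "rent" bucket, and a 50% "savings" bucket, rent takes $600, savings takes half of the remaining $400, and $200 is safe to spend.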
I recently had an entire meal at Chili’s comped by the manager, because I waited an hour for food. I guess their system flagged it, or they just noticed, because I didn’t complain. I was hanging with my grandson.
I tipped on the full amount but we had to get the manager again to figure out how. I was going to Venmo her but the manager just sent the $0.00 bill to the table.
I also cut my own hair, but sometimes I'm lazy and just hit up the barber shop.
She charges me $15! I tip $25 on top and it's still a cheap haircut.
My haircut has to be one of the simplest around, but 9 out of 10 stylists will leave me fixing it myself later. Once I paid $50 plus tip for the same cut at a swanky joint and STILL went home and fixed it. She doesn't know what she's worth.
I've been working on Peen, a CLI that lets local Ollama models call tools effectively. It's quite amateur, but I've been surprised how much spending a few hours on prompting, and on code to handle responses, can improve the outputs of small local models.
Current LLMs use special tokens for tool calls and are thoroughly trained on that format, nearing 100% correctness these days and allowing multiple tool calls in a single response. That's hard to beat with custom tool calls; even older 80B models struggle with custom tools.
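The "code to handle responses" part can be sketched roughly like this. It is not Peen's actual implementation, and the `<tool>` tag and `extract_tool_calls` helper are assumptions for illustration: prompt the model to wrap each call in a known tag, then parse leniently, since small local models often wrap the JSON in extra prose or emit a malformed call.

```python
import json
import re

# Hypothetical lenient parser: find every <tool>{...}</tool> block in a
# model response and decode the JSON inside, skipping anything malformed
# rather than failing the whole turn.
TOOL_CALL_RE = re.compile(r"<tool>\s*(\{.*?\})\s*</tool>", re.DOTALL)

def extract_tool_calls(response: str):
    """Return every well-formed {"name": ..., "args": ...} call found."""
    calls = []
    for match in TOOL_CALL_RE.finditer(response):
        try:
            call = json.loads(match.group(1))
        except json.JSONDecodeError:
            continue  # drop malformed calls; the model keeps its turn
        if "name" in call:
            calls.append(call)
    return calls
```

Tolerating surrounding chatter and bad JSON, instead of demanding a perfectly formatted reply, is a big part of why a few hours of glue code noticeably improves small-model tool use.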