Hacker News | heftykoo's comments

I can't wait to debug my code and have the LLM hallucinate a syntax error just to upsell me a subscription to a linter service.


The hypocrisy is staggering. Big Tech scraped the entire open web, ignoring robots.txt and copyright whenever convenient, to train the very models that power these agents.

But the moment users employ a tool like OpenClaw to regain agency over that same data, it's branded as a "security threat" or "exploitation".

The "Authorized Scraping for Me, but Not for Thee" doctrine is becoming the standard ToS.


The problem isn't CI/CD; the problem is "programming in configuration". We've somehow normalized a dev loop that involves `git commit -m "try fix"`, waiting 10 minutes, and repeating. Local reproduction of CI environments is still the missing link for most teams.


Bingo.

These tooling failures are a consequence of missing policy.

Tooling and Methodology!

Here’s the thing: build it first, then optimize it. Same goes for compile/release versus compile/debug/test/hack/compile/debug/test/test/code cycles.

The lack of a clear distinction between a development build and a release build is a policy mistake, not a tooling ‘issue’.

Set things up properly and anyone pushing through git into the tooling pipeline is going to get their fingers bent soon enough, anyway, and learn how the machine mangles digits.

You can adopt this policy of environment isolation with any tool - it’s a method.

Tooling and Methodology!


Yes AND… more. He discusses your (correct) sentiment before and during his bash-temptation segment. It’s only one of the gripes, but imho this one’s the Pareto 80%.


`act` should help most teams reproduce CI locally.
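One way to blunt the "no standardized build images" objection below is to check an `.actrc` into the repo, so every developer's local run uses the same runner image. A minimal sketch; the image here is the community-maintained one from act's documentation, and you would swap in your own standardized build image if you have one:

```shell
# .actrc -- committed to the repo so local act runs are reproducible.
# -P maps a GitHub runner label to a concrete Docker image.
# catthehacker/ubuntu:act-latest is the community image act's docs
# suggest; it is illustrative here, not necessarily what your CI uses.
-P ubuntu-latest=catthehacker/ubuntu:act-latest
```

With that in place, `act -j <job-name>` runs a single workflow job locally in that pinned image instead of the 10-minute commit-and-wait loop.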


act is horrible if:

* you have any remote resources that are needed during build

* for some reason your company doesn't have standardized build images


The "innovator's dilemma" seems to have a new chapter: having enough TPU capacity to simply out-brute-force the competition once the direction is clear. It's less about "who built it first" and more about "who has the most H100s/TPUs and a decent enough compiler."


Google was building TPUs before OpenAI existed; they wrote the paper that made OpenAI possible. By most accounts they were first, and they have far more experience operating these systems at scale, in both hardware and users.

The writing has been on the wall since Ilya left: OpenAI is in decline. Recent news at Microsoft, Nvidia backing out of a deal, decreased enterprise usage... all of it seems to indicate the trend is deepening, and that OpenAI is on a path to becoming the favorite example of the anti-AI crowd when it all finally comes crumbling down.


It's genuinely terrifying to think how much of the modern internet rests on the shoulders of a few people maintaining core utilities like sudo, curl, and openssl for decades. Todd is a legend.


Yes, but it would be even more terrifying if it rested on the whims of some soulless corporation.


ms-sudo to Get-AdministratorPermissionForElevatedSecurityOperations


0xEFF0332: Operation could not be performed due to missing TPM flag.


LOL sob


For clarity: the final algorithm isn’t novel research — it’s inspired by watermark-removal implementations shared in the community, then adapted specifically to Gemini’s logo geometry and optimized for browser-side execution.

The main engineering challenge for me wasn’t detection quality, but finding a point that balanced:

• reliability on textured images
• millisecond latency
• small binary size
• zero server-side processing

I also made the extension paid mostly as an experiment to understand pricing, payments, and distribution for small developer tools. Building the payment/licensing flow took more time than the algorithm itself.

If anyone is curious, I’m happy to share more details about the detection heuristics and fill strategy.


When Google Gemini started adding a watermark to generated images (including Nano Banana images), I wanted a way to download the original outputs without cropping or post-processing.

Over a few iterations I tried three approaches:

1) OpenCV heuristics: fixed-position detection + color estimation + inpainting. Fast, but fragile: works on flat backgrounds, fails on textured images.

https://geminiwatermarkcleaner.com/changelog/v1-1-0.html

2) LaMa inpainting: high-quality reconstruction using a local LaMa model. Very accurate, but slow (~30s/image on CPU) and heavy to ship.

https://geminiwatermarkcleaner.com/changelog/v2-0-0.html

3) Lightweight watermark-specific algorithm: inspired by community implementations and optimized for Gemini’s logo pattern: geometry-aware detection + edge-preserving fill, no neural model.

• Binary < 2 MB
• Millisecond latency
• Runs fully locally in the browser

https://geminiwatermarkcleaner.com/changelog/v3-0-0.html

I packaged this into a Chrome extension and a local web Gemini Watermark Remover tool: https://geminiwatermarkcleaner.com/gemini-watermark-remover....

Everything runs locally; images never leave the machine.

I also used this as a small experiment in building a paid micro-utility: payments, licensing, and basic marketing turned out harder than the algorithm itself.

Happy to answer questions about detection, inpainting tradeoffs, or browser-side image processing.


I spend less time on work and more on my side project: AI lets me get the job done faster, which frees up time for the side project.

And AI has expanded my boundaries. For example, I used to know nothing about image processing, but with AI's help I've learned the technology and even built an initial product prototype using OpenCV, which helped my side project get off the ground.

