Hacker News | cobertos's comments

How do you even find motherboards on AliExpress properly? Do you have a methodology to separate the wheat from the chaff?

What chaff? Just search, find what you want, and buy. It's like eBay.

Being like eBay is why it's full of chaff. There's a lot of really bad hardware on Aliexpress.

You either take a gamble on something and hope it's good, or try to buy the same thing that someone else bought and reviewed.


I always figured that was the trade-off for paying 1/3 the price. Having to buy 3x as many to find a good one. :P

"Another Slot A motherboard :(, maybe the 4th one I buy from AliExpress will finally be that X870 motherboard I want!"

I've never received something other than what I ordered. At worst the documentation is scant or missing entirely. Specifically with respect to motherboards, most of the AliExpress specials I've interacted with have had completely unlocked BIOSes, which are easy to get yourself into trouble with, but kind of nice to have when you need them.

I think most of them just don't customize their BIOS and ship the default fully-wide-open implementation from the upstream BIOS vendor.

Have you heard of paying with PayPal/credit card?

While possibly too sneery for this site: PayPal and a real credit card will have buyer protections. Debit cards, and basically anything else, will not.

I migrated to Jellyfin and cancelled my Spotify subscription just last week (https://cobertos.com/blog/post/finally-cancelling-my-spotify). It's paying off even more than I predicted. So sick of everything getting in the way of just listening to my music.

Why does a keyboard require an account in the first place?

It doesn't, but the hook is a "service" such as a personal dictionary.

It reads like MS is interested in consolidating all digital account data into OneDrive. I would be concerned there is keylogging and analysis going on.

I'd also be concerned about everything being gathered into one account, where a single hack exposes it all.


Next step, age verification if you want to type four-letter words

A shame the Android app hasn't been maintained in a while

> however retrospectively, it did not justify the hundreds of hours I invested in this project.

Trying to extrapolate this conclusion to the entire "quantified self" movement is not correct. The issue is the time cost, not the act itself. If a trusted company came along (as if...) that sucked up this much data to allow you to answer these questions with minimal effort, I'm sure this would be a different story.

Anything at the fringes of tech with no tried-and-true solution requires hundreds of hours of effort. The author's conclusions are also personal; there are other styles of living, and other conclusions to be drawn, that change the calculus on whether it's worth doing.


Is that a typo in the article? It's $5999 on Apple's website for that configuration

It’s what toggling the 256GB upgrade costs relative to the previous RAM amount, not the computer's total.

I think this means the cost above, as in the extra cost you pay.

As long as all these next-gen terminal features don't make implementing a terminal from scratch as hard as implementing a browser.

I like that terminals have less surface area than browsers.


Interesting, though a lot of the UI seems broken. For my state I see some notice dates in the future (it's not explained whether this is when the filing takes effect or simply an incorrect filing date, as the column is just "Notice Date").

Some of the entries pull up a page that says "Failed to load company data: No company name provided in URL" from the state specific view (e.g, any link on https://warnfirehose.com/data/layoffs/california ). Has a vibe-coded feel to it.

I saw a lot of "Purchase dataset for city details" in places which was annoying. Wondering how much processing is being done on the base dataset to justify the pricing. Could you explain a bit on the normalization/cleaning process?


Definitely vibe coded. It follows the same generic Claude UI patterns for a data app / data-oriented website. Not necessarily a bad thing per se if it's still curated and tweaked with human taste. And of course validated to work :)


Hi @skadamat - I'm not very experienced with Claude Code and don't know what it did previously, but the new Opus 4.6 is great. I think I need to work on my claude.md; that really seems to be the soul of Claude. I've spent quite a lot of money on API costs, the Claude plan, and my time, but I feel that can easily be cut to a quarter with a solid claude.md.


@cobertos The UI issues should now be addressed. However, I did extensive testing myself and everything seemed to work for me. I'm sold that vibe coding is a thing, seriously. To your question about normalization/cleaning - I've gone through multiple iterations; the data is definitely not clean at all. However, Claude wrote a function (instead of cleaning on demand, there's a function that gets updated with every new pattern so it can be reused).

For the pricing, to be honest I have zero hope of making money. It's just out there because I wanted to integrate Stripe for payments. At the same time, if you look at competitor sites, they seem crap lol, plus look at their pricing. The site does have some operating costs, and I'd like to recover them if I can so it's self-sustainable, but only if it's of value to someone. I'm trying to make it valuable. Please share any ideas; I'd appreciate it :)


Any suggestions for a specific claw to run? I tried OpenClaw in Docker (with the help of your blog post, thanks) but found it way too wasteful on tokens/expensive. Apparently there are a ton of tweaks to reduce spend, like offloading the heartbeat to a local Ollama model, but I was looking for something more... put together/already thought through.


The pattern I found that works: use a small local model (Llama 3B via Ollama, only about 2GB) for heartbeat checks. It just needs to answer "is there anything urgent?", which is a yes/no classification task, not a frontier reasoning task. Reserve the expensive model for actual work. Done right, it can cut token spend by maybe 75% in practice without meaningfully degrading heartbeat quality. The tricky part is the routing logic: deciding which calls go to the cheap model and which actually need the real one. It can be a doozy. I've done this with three lobsters; let me know if you have any questions.
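A minimal sketch of that cheap-model gate (hypothetical function and prompt; in practice the injected `call_small_model` callable would send the prompt to a local Ollama server, e.g. its `/api/generate` endpoint, running a ~3B model):

```python
from typing import Callable

def heartbeat_check(events: list[str], call_small_model: Callable[[str], str]) -> bool:
    """Gate the expensive agent behind a cheap yes/no classification."""
    if not events:
        return False  # nothing happened; skip the LLM entirely
    prompt = ("Answer YES or NO only. Is anything here urgent?\n"
              + "\n".join(f"- {e}" for e in events))
    # The small model only has to classify, so parsing its reply is trivial.
    answer = call_small_model(prompt).strip().upper()
    return answer.startswith("YES")
```

Passing the model call in as a callable also makes the routing logic testable without a live Ollama instance.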


Maybe I’m out of touch but why do you need an LLM to decide if there’s any work to be done? Can’t it just queue or schedule tasks? We already have technology for that that doesn’t require an LLM.


Totally valid for fixed, well-defined tasks — a cron job is cheaper and more reliable there. The LLM earns its keep when the heartbeat involves contextual judgment: not just "is there a task in the queue" but "given everything happening right now, what actually matters?" If the agent needs to reason about priority, relevance, or context before deciding what to surface — that's where the local model pulls its weight. If your agents only do fixed tasks, you're totally right, you don't need it!


Tasks might have prerequisites or conditions.

Like "if it's raining, remind me to grab my umbrella before I leave for work"

-> "is it raining?" requires a tool call to a weather service

-> "before I leave for work" needs access to the user's calendar and information when they leave compared to the time their work day starts

-> "remind me" needs a way to communicate to the user in an efficient way, Telegram, iMessage or Whatsapp for example.


It seems to me like it would be a rather useful exercise to have the smaller model make the routing decision, and below a certain confidence threshold, send it to a larger model anyway. Then have the larger model evaluate that choice and perhaps refine the instructions.


That's a cleaner implementation than what I described. Small model as meta-router: classify locally, escalate only when confidence is low. The self-evaluation loop you're suggesting would add a quality layer without much overhead — the large model's judgment of its own routing is itself a useful signal. Haven't shipped that yet but it's on the list.
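A sketch of that confidence-threshold escalation, under the assumption that the small model can return a (label, confidence) pair (hypothetical interfaces; both models passed in as callables):

```python
from typing import Callable, Tuple

def route(prompt: str,
          small_model: Callable[[str], Tuple[str, float]],
          large_model: Callable[[str], str],
          threshold: float = 0.8) -> str:
    """Classify with the cheap model first; escalate only when it isn't confident."""
    label, confidence = small_model(prompt)
    if confidence >= threshold:
        return label            # cheap path: local model was confident enough
    return large_model(prompt)  # escalation: let the expensive model decide
```

The self-evaluation loop would hang off the escalation branch: log the large model's answer next to the small model's guess and use the disagreements to tune the threshold or the prompt.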


> but found it way too wasteful on tokens/expensive

I fear this is intrinsic to its architecture. Even if you use smaller models for regular operational tasks (checking heartbeat), you'll inevitably need to promote back to bigger models to do anything useful, and the whole idea of openclaw is that it can do many useful things for you, autonomously. I think that means it's going to burn a lot of tokens if you're using it as intended.

This is presumably also why the default mode is to try and OAuth its way into coding-agent harnesses instead of using lab APIs?


Last night, I was able to modify nanoclaw, which runs in a container, to use iMessage (instead of WhatsApp) and GPT-OSS-120B (instead of Claude) hosted on an Nvidia Spark running llama.cpp.

It works, but it was a bit slow when asking for web-based info; it took a couple of minutes to return a stock's closing price. Trying again this morning returned an answer in a couple of seconds, so perhaps that was just a network blip.

It did get confused when scheduling times, as the UTC date-time was past midnight but my local EST time was before midnight. This caused my test case of "tomorrow morning at 7am send me the current Olympic country medal count" to be scheduled a day later. I told it to assume the EST timezone, and it appeared to work when translating times but not dates.
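For anyone hitting the same off-by-one-day bug: computing "tomorrow" in the user's zone first and only converting to UTC at the end avoids it. A sketch with Python's standard zoneinfo (illustrative, not nanoclaw's actual code):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def schedule_tomorrow_7am(now_utc: datetime, tz_name: str = "America/New_York") -> datetime:
    """Interpret 'tomorrow at 7am' in the user's zone, then convert to UTC for the scheduler."""
    local_now = now_utc.astimezone(ZoneInfo(tz_name))
    # Add the day BEFORE snapping to 7am, all in local time, so a UTC date
    # that has already rolled past midnight can't shift 'tomorrow' by a day.
    local_target = (local_now + timedelta(days=1)).replace(
        hour=7, minute=0, second=0, microsecond=0)
    return local_target.astimezone(ZoneInfo("UTC"))
```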


Based on the GP's comment, I'm going to try building my own with Pocket Flow and Ollama.


I like ADK; it's lower level and more general, so there's a bit you have to do to get a "claw"-like experience (not that much), and you get (1) a common framework you can use for other things, (2) a lot more places to plug in, (3) four SDKs to choose from (TS, Go, Py, Java... so far).

It's a lot more work to build a Copilot alternative (ide integration, cli). I've done a lot of that with adk-go, https://github.com/hofstadter-io/hof


Just use Gemini Flash for heartbeats


Can you explain how Google Search API fits into your point? I don't know enough about it


If I want to use Google Search in an automated way, Google doesn't want that; they'd rather show me ads. This applies to APIs and agents alike. If Google doesn't want it, they'll add friction by removing API access or making agents difficult to use (fingerprinting, 2FA, captchas, etc.).

