Hacker News | nhecker's comments

I can corroborate this finding: I think the horn switch is just a logic-level digital input into one or more MCUs somewhere, subject to all manner of latency and (probably) CAN bus jitter. It's not great. Trying to send Morse, or even a quick 'toot toot', comes out as a garbled mess, and I find that very annoying. My early cars and motorbike had what felt like direct, switched control over power to the horn; those were great to use. I've debated installing a dedicated pushbutton, rated for the amperage or at least driving a solenoid, that would power the horn directly.

As an experiment, I've found that you can reliably detect crummy horn control by pulsing the horn for the shortest time you can manage. The shorter the push on the horn button gets, the more likely it is that the timing feels wrong somehow, or that the horn doesn't sound at all.
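
If the firmware is doing something like a naive sample-and-debounce loop (pure speculation on my part), it's easy to see how short presses get eaten entirely. A toy simulation; the sample period and debounce count here are made-up numbers:

    # Toy model of a sampled, debounced horn button (hypothetical firmware).
    # Any press shorter than SAMPLE_MS * DEBOUNCE_COUNT never registers.
    SAMPLE_MS = 10       # assumed input-sampling period, in ms
    DEBOUNCE_COUNT = 5   # assumed consecutive samples required

    def horn_sounds(press_ms: int) -> bool:
        consecutive = 0
        for t in range(0, press_ms + 100, SAMPLE_MS):
            pressed = t < press_ms
            consecutive = consecutive + 1 if pressed else 0
            if consecutive >= DEBOUNCE_COUNT:
                return True
        return False

    for press in (20, 40, 60, 100):
        print(f"{press} ms press -> horn sounds: {horn_sounds(press)}")

With these numbers, anything under ~50 ms vanishes, which matches the shortest-possible-pulse test above.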

I've definitely tried friendly beeps at friends or neighbors, only to have them come out sounding like angry honks.


Welp, this puts my https://gist.github.com/nhecker/8e850773ff229724ce361967cc22... to shame.

I wonder about the battery SoC being reported by macpow; there are several different ways to calculate that metric, and it's not clear which one is being used. I may dig into that if I get the chance. Neat tool!
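
If anyone wants to compare for themselves, here's a rough sketch of pulling two different SoC figures out of ioreg on macOS. The key names and their semantics vary across models (on some machines CurrentCapacity is already a percentage), so treat this as illustrative only:

    # Sketch: two ways of computing battery SoC on macOS via ioreg.
    # Key availability and meaning differ by model; illustrative only.
    import re
    import subprocess

    out = subprocess.run(
        ["ioreg", "-rn", "AppleSmartBattery"],
        capture_output=True, text=True, check=True,
    ).stdout

    def field(name):
        m = re.search(rf'"{name}" = (\d+)', out)
        return int(m.group(1)) if m else None

    # Method 1: the OS-reported capacity pair (firmware-adjusted).
    cur, mx = field("CurrentCapacity"), field("MaxCapacity")
    if cur is not None and mx:
        print(f"reported: {100 * cur / mx:.1f}%")

    # Method 2: raw coulomb-counter readings, where exposed.
    rcur, rmax = field("AppleRawCurrentCapacity"), field("AppleRawMaxCapacity")
    if rcur is not None and rmax:
        print(f"raw:      {100 * rcur / rmax:.1f}%")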


Thanks for the [1] link, I hadn't seen that before.

I can't find it immediately, but I've read about something even sneakier than this. A standard broadcast station was modified so that its carrier was also modulated with a PSK signal. The intended listener would point, say, a PSK31 modem at the carrier and recover the encoded digital data, while everyday listeners heard the regular broadcast. The station involved _might_ have been a BBC station, but I don't recall.
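
The scheme is easy to sketch: phase-modulate the carrier with data slowly enough that envelope detectors don't care. A toy version in numpy, leaving out PSK31's differential encoding and shaped phase transitions; all the parameters here are arbitrary:

    # Toy: an AM "broadcast" whose carrier phase also flips in BPSK at
    # 31.25 baud. Simplified; real PSK31 is differential and shaped.
    import numpy as np

    FS = 48_000    # sample rate
    FC = 10_000    # stand-in carrier frequency for the demo
    BAUD = 31.25   # PSK31 symbol rate
    bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])

    t = np.arange(int(FS * len(bits) / BAUD)) / FS
    audio = 0.5 * np.sin(2 * np.pi * 440 * t)   # the ordinary program
    idx = np.minimum((t * BAUD).astype(int), len(bits) - 1)
    phase = np.pi * bits[idx]                   # 0 or pi per symbol

    # An envelope detector recovers `audio`; a phase detector locked to
    # FC recovers the bit stream instead.
    signal = (1.0 + audio) * np.cos(2 * np.pi * FC * t + phase)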

You could technically just transmit data via RDS, too. Change a letter here and there and nobody would know whether that's a decoding error or actual ciphertext. (It would need some kind of checksum, of course.)
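
A toy version of that idea (this ignores the actual RDS group/checkword format entirely; the alphabet, seed, and checksum are all made up):

    # Letter-swap steganography over a 64-char RadioText string: a shared
    # seed picks positions, each carrying one nibble of payload.
    import random

    ALPHABET = "0123456789ABCDEF"   # one visible char per payload nibble
    RT_LEN = 64

    def positions(seed, n):
        return random.Random(seed).sample(range(RT_LEN), n)

    def embed(radiotext, payload, seed):
        nibs = [n for b in payload for n in (b >> 4, b & 0xF)]
        nibs.append(sum(nibs) & 0xF)            # tiny checksum nibble
        rt = list(radiotext.ljust(RT_LEN)[:RT_LEN])
        for pos, nib in zip(positions(seed, len(nibs)), nibs):
            rt[pos] = ALPHABET[nib]
        return "".join(rt)

    def extract(radiotext, n_bytes, seed):
        try:
            nibs = [ALPHABET.index(radiotext[p])
                    for p in positions(seed, 2 * n_bytes + 1)]
        except ValueError:
            return None                          # looks like plain RDS
        if sum(nibs[:-1]) & 0xF != nibs[-1]:
            return None                          # likely a decoding error
        return bytes((a << 4) | b for a, b in zip(nibs[::2], nibs[1::2]))

    rt = embed("NOW PLAYING - SOME SONG - GREATEST HITS RADIO", b"HI", 42)
    print(rt)
    print(extract(rt, 2, 42))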

@windytan made a fascinating audio clip a while ago highlighting the RDS data stream in a radio recording:

https://soundcloud.com/windytan-1/rds-mixdown


I'm starting to believe this is [a] way forward. Or maybe an approach which is on a spectrum between <everything I have is on a phone behind a fingerprint and a four digit pin> and <I don't own a smartphone>.

Unfortunately, it's pretty common for a smartphone to be someone's sole computing device, and increasingly onerous not to own one at all.


> Or maybe an approach which is on a spectrum between [...]

> [...] increasingly onerous not to own one at all.

Yes, and I think this unfortunately demands a grey area. I'm starting to treat my smartphone more like a work device, and there are a few things I do on it:

- My work's authenticator app is there.

- Unfortunately Signal is tied to smartphone usage.

- Practically speaking, people will expect to be able to send you text messages.

- It's still useful for taking pictures.

- My banking app is on there.

Outside of rare occasions, that's really all I use my phone for. I don't carry it around the house. If I go somewhere with my wife, I don't even bring my phone most of the time. I'm "required" to have it, but in principle it's not even mine. It shouldn't be trusted or enjoyed.


(edit: I'm broadly in agreement with your comment & observations, so I don't at all mean to come off as argumentative for the sake of being argumentative. You just got me thinking about how that situation might have been handled thirty or a hundred years ago.)

> [...] my doctor can now approve my request for a prescription from anywhere in the world. That just wasn't possible before [...]

I'm picking nits, but wasn't more or less instantaneous approval possible before with, say, a fax and a telephone? Or (though this is a bit of a stretch) a telegram over the telegraph network?


Ditto. My personal equipment includes a home server (128GB DDR3 ECC) and a tablet with a keyboard. It's honestly astonishing what you can do without a full-fledged laptop, if you're willing to go through some gymnastics to get there. And it travels light compared to a laptop! (The tablet, that is. Not the headless box. :-))

In a similar vein: seek efficiency.

I.e., /if/ I am going to consume LLM tokens, I figure that a local LLM with tens of billions of parameters running on commodity hardware at home will still consume far more energy per token than a frontier model running on commercial hardware, which is very strongly incentivized to be as efficient as possible. Do the math; it isn't even close. (Maybe it'd be closer in your local winter, when the compute heat could offset your heating requirements. But that gets harder to quantify.)

Maybe it's different if you have insane and modern local hardware, but at least in my situation that is not the case.


But commodity hardware that's right-sized for your own private needs is many orders of magnitude cheaper than datacenter hardware intended to serve millions of users simultaneously while consuming gigawatts of power. When you buy LLM tokens you're mostly paying for that hardware, not just for its power efficiency. And your own hardware stays available for non-AI needs, while paying for tokens would leave you covering those needs separately in some way.

>And your own hardware stays available for non-AI related needs, while paying for these tokens would require you to address these needs separately in some way.

^ Fair. Yep, I agree the calculus changes if you don't have _any_ local hardware and have to factor in the cost of acquiring it.

When I did this napkin math, I was mostly interested in the energy aspect, using cost as a proxy. I calculated the $/token for local inference (taking into account the cost of a kWh from my utility, the measured power draw of my M1 work machine, and the measured tokens per second of a ~20B-parameter open-weight model), then compared that to the published $/token rate of a frontier provider. It came out something like two orders of magnitude in favor of the frontier model. I get it, they're subsidizing, but I've got to imagine there's some truth in the numbers.
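
The shape of that calculation, for anyone who wants to rerun it; every number below is a placeholder (not my actual measurements), and the conclusion is only as good as the inputs:

    # Napkin math: local energy cost per token vs. a frontier price.
    # All values are placeholders; substitute your own measurements.
    LOCAL_WATTS = 40.0        # assumed marginal draw while generating
    LOCAL_TOK_PER_S = 8.0     # assumed throughput of a ~20B local model
    KWH_PRICE = 0.15          # assumed $/kWh from the utility
    FRONTIER_PER_MTOK = 1.00  # assumed published $ per million tokens

    joules_per_tok = LOCAL_WATTS / LOCAL_TOK_PER_S
    kwh_per_mtok = joules_per_tok * 1e6 / 3.6e6
    local_per_mtok = kwh_per_mtok * KWH_PRICE

    print(f"local energy cost: ${local_per_mtok:.2f} / Mtok")
    print(f"frontier price:    ${FRONTIER_PER_MTOK:.2f} / Mtok")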

I wonder: does (or will) the $/token rate fall asymptotically toward the cost of electricity? In my mind I'm drawing a parallel to how the value of mined cryptocurrency approximately tracks the cost of electricity... but I might be misremembering that detail.


I doubt it, because you aren't going to get the utilisation that a commercial setup would. There's no point wasting tons of money on hardware that sits idle most of the time.
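
Rough shape of the amortization argument, with made-up numbers:

    # Made-up numbers: amortized hardware cost per token vs. utilization.
    HW_COST = 3000.0      # assumed purchase price of a capable box
    LIFETIME_YEARS = 4    # assumed useful life
    TOK_PER_S = 20.0      # assumed sustained generation throughput

    seconds = LIFETIME_YEARS * 365 * 24 * 3600
    for util in (0.01, 0.10, 0.50, 1.00):
        tokens = TOK_PER_S * seconds * util
        print(f"{util:4.0%} utilization: ${HW_COST / tokens * 1e6:7.2f} / Mtok")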

If you're running agentic workloads in the background (either some coding agent or a personal claw-agent type), that's enough utilization that the hardware won't sit idle.

Or songbirds.

I ran into a guy at a hardware store who ran just such a power supply attached to our city's water (or was it natural gas?) infrastructure. I was incredulous, but the idea that it helped prevent corrosion did make sense.

