
I have very mixed feelings about that model. I want to like it. It's very fast and seems to be fit for many uses. I strongly dislike its "personality", but it responds well to system prompts.

Unfortunately, my experience with it as a coding assistant is very poor. It doesn't understand libraries it seems to know about, it doesn't see root causes of problems I want it to solve, and it refuses to use MCP tools even when asked. It has a very strong fixation on the concept of time. Anything past January 2025, which I think is its knowledge cutoff, the model will label as "science fiction" or "their fantasy" and role play from there.


I think this is where LLMs shine. I experience the same difficulty with a lot of command line tools, e.g. find is a mystery to me after all these years. Whatever the syntax is, it just doesn't stick in my memory. Recently I've just been telling the model what search I want and it gives me the command.
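For illustration, these are the kinds of find invocations I mean (paths and patterns here are made up, not from any specific project):

```shell
# Files under the current directory modified in the last 24 hours
find . -type f -mtime -1

# All *.log files, printed with their sizes
find . -type f -name '*.log' -exec ls -l {} \;

# Directories named "build", pruned so find doesn't descend into them
find . -type d -name build -prune -print
```

Each of these is easy to describe in plain English and fiddly to remember exactly, which is why handing the description to a model works so well.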

A more appropriate term is "chiptunes". I've also heard people refer to it as keygen music.

If it's a tracker module of some kind with very short looping samples, then yeah, it's a chiptune.

Right. Basically using simple waveforms either using samples or onboard chip like ZX Spectrum's. For tracker modules with more "normal" samples, we simply referred to them as modules or mods for short.

IMHO, a “chiptune” is music for an FM synthesis chip, like on the NES, the SID chip in the Commodore 64, or the AdLib sound card for PC. A “mod” or “tracker music” is music made for a range of platforms in a rather narrow time-band, that could play digital samples, but could not reasonably store entire songs recorded digitally, like the Amiga, Atari ST, or early PCs like 386s or 486s.

Neither the NES nor the SID employs FM synthesis. I'm not even sure what the collective noun is for these. Wikipedia tells me it's PSG (programmable sound generator).

The same behavior could be (and also was) teased out of a MOD player if you chose samples with a handful of sample points, like 12. You could also draw up a sawtooth in paint and use that as a sample. These are down-to-earth honest true Scotsman chiptunes.
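To make the idea concrete, here's a sketch of such a sample: a sawtooth defined by just 12 points, like the one described above. The 8-bit value range and the sample rate are illustrative assumptions, not values from any real tracker module.

```python
N = 12  # sample points per loop, as in the comment

# 8-bit signed sawtooth: ramps from -128 up to +127 across one period.
saw = [round(-128 + 255 * i / (N - 1)) for i in range(N)]
print(saw)

# Looping this 12-point wave continuously at, say, 44100 Hz playback
# gives a pitch of 44100 / 12 = 3675 Hz; trackers change the note by
# resampling the same tiny loop at different rates.
print(44100 / N)
```

With so few points per period, the output is all sharp edges and harmonics, which is exactly the "chip" sound even though it's technically a digital sample.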


Those are SID etc. tunes. Not chiptunes.

Look, we have always been at war with Eastasia.

> Top-tier models will never run on desktop machines

Sorry, but you don't know that


I mean, it's not hard to understand that if a good model can run on consumer hardware, even better models can run in data centers.

If we get to the point where a local model can reliably do the coding for a good majority of cases, then the economic landscape changes significantly. And we are not that far from having big open weight models that can do that, which is a first step.

Larger, yes, absolutely. Better? Right now it seems that bigger is better, but if we are thinking about long term future, it's not obvious that there isn't a point of diminishing returns with regards to size. I can also imagine a breakthrough, where models become much smaller, with the same or better capabilities as the current, very large ones.

You are always going to get the same scaling laws in model size regardless of what else you do, so the same degree of improvement seen now relative to the smaller models will be achievable in the future. Yes, small models may be on par with previous generation large models, but the same is true for processors and you don't see supercomputers going away. It's the same principle.

The model is the killer product

Just a heads up: I found NVIDIA Parakeet to be way better than Whisper - faster, uses less compute, the output is better, and there are more options for the output. I am using parakeet-mlx from the command line. Check it out!

I've been trying both Whisper v3 large and Parakeet in MacWhisper, and I inevitably go back to Whisper large. Which one is better depends on what you dictate, how you speak, and which languages you use.

Yeah, it came out after I started my project last year. The only issue is that you can't fine-tune it on Apple Silicon.

Brilliant

My understanding of OP was not a claim that "vibe coding doesn't work", but that the way Anthropic does it doesn't work. He seems to be specifically criticizing the "hands off the actual code, human" approach and advocating for keeping the human in the loop.

Sticking with the computation analogy, it could be a long-term memory look up. If memories were passed down the generations, people could simply memorize actions of individuals deemed smarter. Over a large sample size, a heuristic would emerge. Kind of like knowing there is always a sunset following a sunrise without understanding the solar system.

It is a zero-sum game because you have a finite state budget for representing heuristics. Increasing the "smartness" (and therefore the state required) of one heuristic necessarily requires reducing the smartness of others. The state is always fully allocated; the best you can do is reallocate it.

This places an upper bound on the complexity of the patterns you can learn. At the limit you could spend 100% of resources building a maximally accurate model of a single thing but there are limits to ROI. Pre-digested learning makes it more efficient to acquire heuristics but it doesn't change the cost of representing it.

Some simple state machines are resistant to induction by design e.g. encryption algorithms.


I think that's kind of how all the religions were started. Smart people being tired of reasoning with dumb ones and instead going with "do this, because that's the will of God".

