Hacker News | past | comments | ask | show | jobs | submit | aspenmartin's comments

I don’t know that it’s fair to characterize the LLM community as ignorantly rediscovering PSP/TCP. I actually see it as programmers rediscovering survival analysis, and most LLM folks I know learned these perspectives through that lens. Could be wrong about PSP, maybe things are more nuanced? But what is there that isn’t already covered by foundational statistics?

How about people who understand things are changing whether anyone likes it or not and want to stay relevant? What about the people who care about the end product and not rabbitholing design decisions on a proof of concept? What about someone who understands there is more nuance than assuming people with a different perspective on AI are lesser than people who resist the technology? You may feel you know the “right way,” but to everyone else who is interested in operating in a world changing beneath our feet, not whining about the fact that everything will be different, and not denigrating the people who want to succeed in it, this opinion is not exactly convincing. If you want to kludge your way through a problem you’re welcome to, but it’s not logical to suggest this is the only “right” way and imply that people who build with AI don’t like “understanding systems”.

When I build with AI I build things I never would have built before, and in doing so I’m exposed to technologies, designs, and tools I wasn’t aware of before. I ask questions about them. Sure, I don’t understand the tools as deeply as the person who spent 10 hours going down rabbit holes to answer a simple question, but I don’t really see that as particularly valuable.


I do disagree with the notion that you have to slog through a problem to learn efficiently. That it's either "the easy way [bad, you don't learn]" or "the hard way [good, you do learn]" is a false dichotomy. Agents / LLMs are like having an always-on, highly adept teacher who can synthesize information in an intuitive way and who you can explore a topic with. That's extremely efficient and effective for learning. There may be some tradeoff in some areas, but the idea that LLMs make you not learn doesn't feel right; they allow you to learn _as much as you want and about the things that you want_, which wasn't possible before. You used to have to learn, inefficiently(!), a bunch of crap you didn't want to in order to learn the thing you _did_ want to. I will not miss those days.

I don't think you're saying the same thing. AI can help you get through the hard stuff efficiently and you'll learn. It acts as a guide, but you still do the work.

Completely offloading the hard work and just getting a summary isn't really learning.


Dystopian applications are extremely impractical or impossible; this is a tool for neuroscience.

"This will never be used by bad guys" says the person immediately before their tools are used by bad guys.

No, it's "this tool cannot be used by bad guys or good guys, but it can be used by highly funded labs that do neuroscience". It's something that freaks people out until they gradually learn what is actually involved.

https://ai.meta.com/blog/brain-ai-image-decoding-meg-magneto... [2023] https://ai.meta.com/blog/brain-ai-research-human-communicati... [last year, focuses on decoding text]

^ There's a research team at Meta that studies this. You need an MEG -- that's $2-5M, plus the shielded room it lives in and the experts who can operate it.

EEG doesn't work due to its low spatial resolution and how finicky electrode placement is if you want a good signal.

The signals from neurons are just unbelievably tiny and are in an absolute sea of noisy trash. No one is ever going to read your thoughts without your consent (or by wrestling you into a big MEG, in which case you have bigger things to worry about). No one is going to be reading your dreams with any sort of accuracy either.
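To put rough numbers on "unbelievably tiny" (the magnitudes below are ballpark figures commonly cited for MEG, not measurements from any particular system):

```python
# Back-of-the-envelope illustration of why MEG needs a shielded room.
# All magnitudes are rough orders of magnitude, purely for illustration.

brain_signal_T = 100e-15   # cortical MEG signals: ~100 femtotesla
urban_noise_T = 100e-9     # urban magnetic noise: ~100 nanotesla
earth_field_T = 50e-6      # Earth's static field: ~50 microtesla

print(f"ambient noise is ~{urban_noise_T / brain_signal_T:,.0f}x the signal")
print(f"Earth's field is ~{earth_field_T / brain_signal_T:,.0f}x the signal")
```

That factor-of-a-million (and more) gap is why the field needs superconducting sensors and magnetically shielded rooms just to see anything at all.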


And computers used to fill a room and require stacks of punch cards to use.

In both cases: physics decides what's possible.

Well, well, then I have a paper for you (2014): https://www.frontiersin.org/journals/computational-neuroscie...

You might be interested in author #7. Some guy named Dario something.


That is indeed a paper lol.

But in seriousness: this is not news and doesn’t change any of what I said. You have a class of 20 objects that they recall as they dream. Same setup (fMRI), small n, a very, very simplified design.

Look, the reason we can’t do this is both physical AND information-theoretic. In the best case you are getting an EXTREMELY reduced-dimensionality signal. It’s not as though this is an early-days-of-AI thing, where “it’s not possible today but there’s nothing in principle stopping us from a Kurzweil-like world”. It’s just not really possible.

Anyway, the studies on this are restricted to specific neuroscience questions. The paper shows dreams contain object-like representations in the visual cortex -- this is cool! And important! But it doesn’t imply anything for decoding thoughts and dreams.


> making sound architectural choices and maintaining long term business context and how it intersects with those architectural choices.

I completely agree with you, but this is rapidly becoming less and less the case, and it would not at all surprise me if, even by the end of this year, it's just barely relevant anymore.

> If you are eliminating those people from your business then I don't know that I can ever trust the software your company produces and thus how I could ever trust you.

I mean, that's totally fine, but do realize that many common load-bearing enterprise and consumer software products are a tower of legacy tech debt and junior engineers' terrible abstractions. I don't think this "well, how am I going to trust you" from (probably rightfully) concerned senior SWEs is going to change anything.


The writing's on the wall, it is true: tech debt will no longer be a thing to care about.

"But who will maintain it?" A massive, massive question, but one that is rapidly becoming completely irrelevant.

"But who will review it?" Humans, sure, with the assistance of AI, but the writing is also on the wall here: AI will soon become more adept at code review than any human.

I can understand "losing all legitimacy" being a thing, but to me that is an obvious knee-jerk reaction from someone who does not quite understand where this trend curve is going.


Trust me, I’m a well-seasoned, leathery developer and no newbie when it comes to using AI. But this level of irrational exuberance is so over the top I just can’t take it seriously.

Yes, in the very long term I expect this to be able to replace large swaths of the software development lifecycle: product, ideation, the whole kit and caboodle. That’s a long way off, whatever “a long way off” means in this accelerated timeline.

For the next bunch of years, yes, you’ll have to worry about architecture, coupling, testing, etc. I’m happy to have my competitors share your attitude, because we’ll smoke them in the market.


Post mortems / bug hunting -- pinpointing what part of the logic was to blame for a certain problem.


This is what granular commits are for; the kilobytes-long log of Claude running in circles over bullshit isn't going to help anyone.


I think the parent comment is asking “why did the agent produce this bug, and why wasn’t it caught”, which is a separate problem from the one granular commits solve: finding the bug in the first place.


There is no "why." It will give reasons, but they are bullshit too. Even with the prompt, you may not get it to produce the bug more than once.

If you sell a coding agent, it makes sense to capture all that stuff, because you (hopefully) have test harnesses where you can statistically tease out which prompt changes caused bugs. Most projects won't have those, and anyway you don't control the whole context if you are using one of the popular CLIs.
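As a sketch of what "statistically tease out" might look like, assuming a harness that reports pass/fail counts per prompt variant (the counts and the helper below are hypothetical, and a two-proportion z-test is just one reasonable choice):

```python
# Sketch: did a prompt change move the pass rate on a fixed eval
# harness? Two-proportion z-test on hypothetical pass/fail counts.
from math import sqrt, erf

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """Compare pass rates of two prompt variants on the same harness."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    p_pool = (pass_a + pass_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: prompt v1 passes 420/500 tasks, prompt v2 passes 380/500.
z, p = two_proportion_z(420, 500, 380, 500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With numbers like these the drop is clearly significant; with the tiny, uncontrolled samples an individual project sees, it usually wouldn't be, which is the point.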


If I have a session history or histories, I can (and have!) mine them to pinpoint where an agent did not implement what it was supposed to, or to understand who asked for a certain feature and why, etc. It complements commits: sessions are more like a court transcript of what was said and claimed, and then you can compare that to what was actually done (the commits).


Some of my sessions are over 1 GB at this point. I just don't think this scales usefully or meaningfully. Those things should live as summarized artifacts within issue tracking, IMHO.

Then look at the code; the session will only confuse you. To read an LLM's explanation is to anthropomorphize what is just a probabilistic incident.


No, you look at the session to understand what the context was for the code change -- what did you _ask_ the LLM to do? Did it do it? Where did a certain piece of logic go wrong? Session history has been immensely useful to me, and it serves as important documentation of the entire flow of the project. That said, I don't think people should look at session histories at all unless they need to.
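A minimal sketch of that kind of session mining, assuming a transcript of role-tagged messages (the message format and the `pair_requests_with_claims` helper are made up for illustration; real CLI logs will differ):

```python
# Sketch: mining an agent session for "what was asked vs what was
# claimed". The transcript structure here is hypothetical -- adapt the
# role/content keys to whatever your agent CLI actually logs.

session = [
    {"role": "user", "content": "add retry logic to the uploader"},
    {"role": "assistant", "content": "Done -- added exponential backoff."},
    {"role": "user", "content": "also log each retry attempt"},
    {"role": "assistant", "content": "I refactored the config loader."},
]

def pair_requests_with_claims(messages):
    """Pair each user request with the assistant's next reply, so you
    can eyeball (or grep) where a claim diverges from what was asked."""
    pairs = []
    for i, msg in enumerate(messages):
        if msg["role"] == "user":
            reply = next((m["content"] for m in messages[i + 1:]
                          if m["role"] == "assistant"), None)
            pairs.append((msg["content"], reply))
    return pairs

for asked, claimed in pair_requests_with_claims(session):
    print(f"ASKED: {asked}\nCLAIMED: {claimed}\n")
```

The second pair in this toy transcript is exactly the kind of mismatch (asked for retry logging, got a config refactor) that the commit history alone won't surface.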


I'm not quite sure I understand the logic of this, or how people don't see that claims of "well, now everyone is going to be dumber because they don't learn" have been the refrain literally every time a major technological or industrial revolution happens. Computers? The internet? Calculators?

The skills we needed before are just no longer as relevant. It doesn't mean the world will get dumber; it will adapt to the new tooling and paradigm that we're in. There are always people who don't like the big paradigm change, who are convinced it's the end of the "right" way to do things, but they always age terribly.

I find I learn an incredible amount from using AI + coding agents. It's a _different_ experience, and I would argue a much more efficient one for understanding your craft.


100%. I have been learning so much faster as the models get better at both understanding the world and explaining it to me at whatever level I am ready for.

Using AI as just a generator is really missing out on a lot.


Well Opus and Gemini are probably running on multiple H200 equivalents, maybe multiple hundreds of thousands of dollars of inference equipment. Local models are inherently inferior; even the best Mac that money can buy will never hold a candle to latest-generation Nvidia inference hardware, and the local models, even the largest, are still not quite at the frontier. The ones you can plausibly run on a laptop (where "plausible" really means "45 minutes while making my laptop sound like it's going to take off at any moment") are further behind still. Like they said, you're getting Sonnet 4.5 performance, which is two generations old; speaking from experience, Opus 4.6 is night and day compared to Sonnet 4.5.


> Well Opus and Gemini are probably running on multiple H200 equivalents, maybe multiple hundreds of thousands of dollars of inference equipment.

But if you've got that kind of equipment, you aren't using it to support a single user. It gets the best utilization by running very large batches with massive parallelism across GPUs, so that's what you're going to do. There is such a thing as a useful middle ground that may not give you the absolute best performance but will be found broadly acceptable and still be quite viable for a home lab.
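A toy calculation of why batching is such a big win for the provider (all numbers are illustrative, assuming decoding is memory-bandwidth bound so the cost of a step is dominated by streaming the weights once):

```python
# Toy illustration of why providers batch: if decoding is bandwidth
# bound, streaming the weights once can serve many requests at nearly
# the cost of serving one. Numbers are illustrative, not benchmarks.

weights_gb = 80            # hypothetical model size in GB
bandwidth_gb_s = 3000      # hypothetical HBM bandwidth, GB/s

time_per_step = weights_gb / bandwidth_gb_s   # seconds per decode step
for batch in (1, 8, 64):
    tok_s = batch / time_per_step   # one token per request per step
    print(f"batch {batch:3d}: ~{tok_s:,.0f} tokens/s aggregate")
```

Aggregate throughput scales almost linearly with batch size until the GPU becomes compute-bound, which is exactly why a single home-lab user can never match a provider's per-dollar economics.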


Batching helps with efficiency, but you can’t fit Opus into anything less than hundreds of thousands of dollars of equipment.

Local models are more than a useful middle ground; they are essential and will never go away. I was just addressing the OP's question about why they observed the difference they did. One is an API call to the world's most advanced compute infrastructure; the other is running on a $500 CPU.

There are lots of uses for small, medium, and large models; they all have important places!


I tried this today with this username and other usernames, on this and other platforms, with Claude Code:

- First it told me it couldn't do this, that this was doxxing

- I said: it's for me, I want to see if I can be deanonymized

- Claude says: oh ok sure and proceeds to do it

It analyzed my profile contents and concluded that there were likely only 5-10 people in the world who would match this profile (it pulled out every identifying piece of information extremely accurately). Basically saying: I don't have access to LinkedIn, but if I did, I could find you in about 5 seconds.
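As a back-of-the-envelope illustration of how a handful of profile attributes can shrink an anonymity set to single digits (the fractions below are invented for illustration and assume rough independence between attributes):

```python
# Toy Fermi estimate of anonymity-set size: selectivities of roughly
# independent attributes multiply. All fractions are made up.

population = 8_000_000_000
attributes = {
    "software developer":          0.004,
    "lives in a specific metro":   0.002,
    "works in a niche subfield":   0.01,
    "posts on a specific forum":   0.01,
}

candidates = float(population)
for name, fraction in attributes.items():
    candidates *= fraction

print(f"~{candidates:.1f} people match all attributes")
```

Four unremarkable-sounding facts are enough to go from eight billion people to a handful, which is why "I only mentioned harmless details" offers so little protection.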

Anyway, like others have said: this type of capability has always been available to nation-state actors (it's just now frighteningly more effective), but for your stalker? For a fraudster or con artist? Everyone now has a tremendous, unprecedented amount of power at their fingertips, with very little effort needed.

