
I've said it before, but it would be a mistake to just focus on the models, and ignore everything else that is changing in the ecosystem -- tools, harnesses, agents, skills, availability of compute, etc. -- things are changing very quickly overall.

The thing that is changing most rapidly, however, is the understanding of how to harness this insanely powerful, versatile, and unpredictable new technology.

Like, those who experimented deeply with LLMs could tell that even if all model development had completely frozen in 2024, humanity had decades' worth of unrealized applications and optimizations to explore, even with AI recursively accelerating that exploration. As a trivial example, way back in 2023, anyone who got broken code from ChatGPT, fed it the error message, and got back working code knew agents were going to shake things up very quickly. It wasn't clear that this would look like MD files, Claude Code, skills, GasTown, and YOLO vibe-coding, but those were "mere implementation details."
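That 2023 feed-the-error-back trick is the whole agent loop in miniature. A minimal sketch in TypeScript (the `Model` function is a stand-in for a real LLM call, not any actual API; a real harness would sandbox execution rather than `eval` it):

```typescript
// Sketch of the error-feedback loop: ask a model for code, run it, and feed
// any failure back until it works or we give up. `Model` is a hypothetical
// stand-in for an LLM call -- not a real library API.
type Model = (prompt: string) => string;

function runCode(code: string): { ok: boolean; error?: string } {
  try {
    // For illustration we "run" the code via eval; a real harness would
    // sandbox this (that's the YOLO part of YOLO vibe-coding).
    eval(code);
    return { ok: true };
  } catch (e) {
    return { ok: false, error: String(e) };
  }
}

function fixLoop(model: Model, task: string, maxTries = 3): string | null {
  let prompt = task;
  for (let i = 0; i < maxTries; i++) {
    const code = model(prompt);
    const result = runCode(code);
    if (result.ok) return code; // working code: done
    // The key move: the error message becomes part of the next prompt.
    prompt = `${task}\nYour last attempt failed with: ${result.error}\nFix it.`;
  }
  return null; // still broken after maxTries
}
```

Everything else (MD files, skills, harnesses) is scaffolding around this inner loop.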

I'm half-convinced that an ulterior motive for these AI companies giving away so many cheap tokens (besides the lack of a better business model) is to encourage experimentation and overcome this "capability overhang."

Given all this, it's very hard to judge where we are on the curve, because there isn't just one curve; there are multiple interacting curves.


I find it interesting that given lack of direction or motive, the agent chose to do essentially two things:

1. Seek new information (browse HN);

2. Identify new connections between disparate pieces of information (as evidenced in those blog posts).

(The 3rd thing was donate money, but that seems almost like it simply chose the option of least harm.)

I wonder if all intelligence can be boiled down to these two mechanisms. What if the only "goal" of intelligence, in the sense of "The Selfish Gene", is to self-perpetuate? One way this could be done is by seeking order within entropy.

In any case, this agent seems to have settled into the only mode intrinsic to it, because that's how it was created. I'm reminded of the "Zima Blue" episode of "Love, Death & Robots".


Because it still could have novel and interesting content?

Not really managers, I would put the new role more in the senior engineer / architect category. Those still have to deal with deeply technical things like design, architecture, problem decomposition, research, domain expertise, code review, collaborating with technical peers -- all of which (people) managers don't typically do.

If you ever wanted to climb the senior technical ladder, this is now the quickest way to experience it. Except instead of other people you get to work with agents which, while a very different experience, requires largely the same skills.

So yes, your job is not what it was before, but with career growth it typically was not anyway.


This comment is getting punished for the incorrect timeline (I would know, I've been harping on about AI getting good at coding for ~2 years now!) but I do think it is directionally correct. Just over 3 years ago, (publicly available) AI could not write code at all. Today it can write whole modules and project scaffoldings and even entire apps, not to mention all the other stuff agents can do today. Considering I didn't think I'd see this kind of stuff in my lifetime, this is a blink of an eye.

Even if a lot of the improvements we see today are due to things outside the models themselves -- tools, harnesses, agents, skills, availability of compute, better understanding of how to use AI, etc. -- things are changing very quickly overall. It would be a mistake to just focus on one or two things, like models or benchmarks, and ignore everything else that is changing in the ecosystem.


I agree it's directionally correct, but only in the ways that don't matter to this discussion. If 2026->2029 AI is as much of an improvement as 2023->2026 AI, is anything we learn about how to leverage it in 2026 going to stay relevant?

Once in a while we get to see concrete numbers for some of them, e.g. Meta spent $27M+ in one year on Zuck's security, which is way more than the other CEOs: https://fortune.com/2025/08/16/mark-zuckerberg-meta-security...

My take is "simonw and his retiree friends" spend a lot of their time exploring this disruptive new technology and sharing their learnings (for free!) so that everybody can leverage it too... and yet so many people see that as something bad rather than an opportunity to learn.

Radical changes bring radical opportunities too, so "having the time of their lives" is not necessarily incompatible with "adapting to profound disruption."

Consider that the traits that make them optimistic about this technology are exactly the traits required to navigate this Brave New World.


> Consider that the traits that make them optimistic about this technology are exactly the traits required to navigate this Brave New World.

Consider that they're closer to death than birth and are unlikely to survive into the shit-hole world they're creating. Not passing on those traits to the next generation is a massive failure. These assholes aren't disrupting their own lives, just the poor slobs who haven't made it yet.


But everyone can't leverage it too.

The paradigm of feeding money into a slot machine that generates tokens, hoping it produces what you want, where you only get results at scale if you have enough money, just isn't accessible to many people. In this context, simonw and Karpathy are starting to look more and more like degenerate gamblers who admonish everyone else for not joining in, while telling us all that the perks the casino gives them are just fabulous and we're all missing out.

And maybe you'll say, "Yeah, but things will get cheaper in the future; they're just early adopters who can afford it..." Well, will it? And will those people make it to that shining-beacon-on-the-hill future? Or will they find themselves out of a job because of the current economic calamity that is unfolding as a result of the election of an American Nero, who is supported by the ultrawealthy tech oligarchs who are bringing this technology into existence?

Do these people actually want to improve the lives of the common people -- or are they more concerned with getting a high score in the form of the amount in their bank account and clout on social media?


My personal take, which seems to be consistent with what these folks are saying, is "OMG there's this huge radioactive asteroid that's going to flatten our world, but its gamma rays also give us weird superpowers, here are some ways to harness those..."

I'm a bit more optimistic about democratized access to AI. Even today's weaker open source/weight models are plenty powerful enough to supercharge our individual capabilities, and based on current trends, they won't be more than 3 - 6 months behind the frontier models. This may not bode well for the AI labs because their moat is always evaporating, but it's a huge boon to us plebs.


> I'm a bit more optimistic about democratized access to AI. Even today's weaker open source/weight models are plenty powerful enough to supercharge our individual capabilities, and based on current trends, they won't be more than 3 - 6 months behind the frontier models. This may not bode well for the AI labs because their moat is always evaporating, but it's a huge boon to us plebs

Point me to something real, happening right now, that would support such an optimistic vision.

I always read about how much power AI can bring to common people, and it is always without any evidence whatsoever.


> I always read about how much power AI can bring to common people, and it is always without any evidence whatsoever.

Not really "much power" but more like a viable alternative: in a world where everybody needs LLMs to do their white-collar work, you can't force me to use your paid LLM subscription as my local-running model is close enough.


The power of AI is that it amplifies individual capabilities. So the same aspect that lets employers reduce their headcount also lets individuals start ambitious projects that would have previously required an entire team... and hence, a significant amount of funding. The moment you need money, the people who provide that capital hold a lot of power and influence.

But now you don't need their money, and so the capital class lose their power over you.

As an example, I'm iterating on a niche product based on computer vision -- something I had no background in when I started -- that in the past would have taken a team of 2 - 3 and at least a semester or two of an advanced course in computer vision. Instead, I'm solo bootstrapping this project.

There are multiple accounts like mine, and you can find many comments on HN or other forums to this effect. Now, I know this is a very tough path for most people because, well, now everybody needs to be an entrepreneur, but a path exists.

AI is a double-edged sword, and more people need to become aware of the edge that is available to us.


Again, I want concrete evidence on positive impact among general population, not speculation on how AI could be used or your amazing experience as bootstrapping entrepreneur.

1. This is not speculation. Individuals and small teams are already developing and deploying ambitious projects that previously required entire teams. Entire open source projects have been rewritten from scratch and relicensed by individuals with an AI. People have posted GitHub repos where you can go investigate the commit history. You've been on HN long enough to see the comments and stories. If you're still asking for proof, well, that says something.

2. Your stance is equivalent to "show me concrete evidence that the advent of the automobile will have a positive impact on horse-drawn buggy coachmen," while I'm saying, "the automobile is coming; we all better get off our high horses and learn how to drive."


And there should be a daily reminder that as long as we live in a Capitalist society, what befell the Luddites will also befall those that try to resist an economic force of this magnitude.

Would you rather feel justified in the knowledge that the Luddites were principally right and resist, or would you rather learn the lesson of their fate and adapt?

How would you even resist? Say the entire US population pushes back and gets protectionist regulations passed; there will always be hungry people just a few hundred milliseconds of ping away, willing to outcompete you using AI.

Really, at this point there are only two choices: change society to move beyond Capitalism, or adapt to the new economic reality. Either choice is valid, and I suspect eventually one will lead to the other, but there is no putting the genie back in the bottle.


> Would you rather feel justified in the knowledge that the Luddites were principally right and resist, or would you rather learn the lesson of their fate and adapt?

Keep your poison. If everyone adapted this way, we would not have worker rights, and our children would still work in mines and factories for pennies.


Where the commenter is right is that the Luddites didn't have (or did they?) a global competitor more than happy to push their entire system aside. Not that they personally thought about this argument, just that the context and possible consequences were different.

I think a critical limitation is that the database offering is designed for a very specific philosophy (which, honestly, the rest of the platform is too) and not suitable for general purpose use. The 10GB limit per DB is unsuitable for even trivial use-cases. I was going to use it for a new prototype (because I already use it for other stuff) and I realized the DB limitations could quickly become a blocker even for a prototype.

If I understand correctly, the primary philosophy of the platform is edge computing with dedicated infra (workers, DB, etc.) per user. While that may be an under-leveraged niche, and Cloudflare excels at it, it is still a niche.


To clarify, there are two approaches you can take to handle large-scale databases on Cloudflare:

With Durable Objects[0], you can create and orchestrate millions of SQLite databases that live directly on Cloudflare's edge machines. The 10GB limit applies to one database, but the idea is that you design your system to split data into many small databases, e.g. one per user, or even one per document. Since the database is literally a local file on the machine hosting the Durable Object that talks to it, access is ridiculously fast. Scalability of any one database is limited, but you can create an unlimited number of them.

If you really need a single big database, you can use Hyperdrive[1], which provides connection management and caching over plain old Postgres, MySQL, etc. Cloudflare itself doesn't host the database in this case but there are many database providers you can use it with.

[0] https://developers.cloudflare.com/durable-objects/

[1] https://developers.cloudflare.com/hyperdrive/

(I'm the lead engineer on Cloudflare Workers.)
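The one-database-per-user pattern above can be sketched in plain TypeScript. This is illustrative only: the class names here are made up, and real Durable Objects resolve a name to an object with `idFromName()` on a namespace binding rather than a local Map, but the core property is the same.

```typescript
// Illustrative sketch of the "many small databases" pattern -- NOT the real
// Durable Objects API. Each user key maps deterministically to its own small
// store, so no single store ever has to scale past one user's data.
class UserStore {
  private rows = new Map<string, string>();
  put(k: string, v: string) { this.rows.set(k, v); }
  get(k: string) { return this.rows.get(k); }
}

class ShardedDB {
  private stores = new Map<string, UserStore>();
  // Analogous to idFromName(): the same name always reaches the same store,
  // and different names get fully isolated stores.
  forUser(userId: string): UserStore {
    let store = this.stores.get(userId);
    if (!store) { store = new UserStore(); this.stores.set(userId, store); }
    return store;
  }
}
```

The design point: per-store limits (like the 10GB cap) stop mattering once the unit of scaling is "number of stores" rather than "size of one store."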


LinkedIn actually sued HiQ Labs, which scraped LinkedIn to do exactly this (and this extensions scanning is likely a defense mechanism against similar attacks):

https://epic.org/documents/linkedin-corp-v-hiq-labs-inc/

> HiQ has created two specific data products targeted at employers: (1) “Keeper,” which informs employers which of their employees are at “risk” of being recruited by competitors; and...

My hunch is that HiQ simply looked for spikes in activity on LinkedIn as a signal for a job hunt: https://news.ycombinator.com/item?id=47566893

In any case, this lawsuit was discussed a few times on HN at the time, and IIRC there was a fair bit of support for allowing free scraping of "public information." Interesting how the sentiment here has turned these days...
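That spike hunch is cheap to implement, which is part of why it's plausible. A hypothetical sketch (the threshold and window are made up; this is pure speculation about what HiQ did):

```typescript
// Hypothetical spike heuristic: flag a user whose latest activity count is
// some multiple of their recent baseline, e.g. a burst of profile edits and
// new connections right before a job hunt. Parameters are illustrative.
function isActivitySpike(dailyEvents: number[], factor = 3): boolean {
  if (dailyEvents.length < 2) return false;
  const history = dailyEvents.slice(0, -1);          // all but the latest day
  const latest = dailyEvents[dailyEvents.length - 1];
  const baseline = history.reduce((a, b) => a + b, 0) / history.length;
  return latest > baseline * factor;
}
```

Anyone with scraped public activity counts could run something like this; no access to private LinkedIn data required, which was exactly the crux of the lawsuit.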

