Hacker News | new | past | comments | ask | show | jobs | submit | kseniamorph's comments

> nothing but thieves! cool band btw

there is a real one though — https://www.anthropic.com/engineering/claude-code-sandboxing. needs to be enabled with /sandbox, not on by default.

Right, that's what I was referring to

i feel like moves like this make it even harder for new open-source tools to break through. there's already evidence that LLMs are biased toward established tools in their training data (you can check it here: https://amplifying.ai/research/claude-code-picks). when a dominant player acquires the most popular toolchain in an ecosystem, that bias only deepens. not through any deliberate skewing, but because the acquired tools get more usage, more documentation, more community content. getting a new project into model weights at meaningful scale is already really hard. acquisitions like this make it even harder.

I'm also concerned about this, but I feel as though uv's and ruff's explosive growth, happening alongside and despite the rise of LLMs, demonstrates that it's not a show-stopper. I vividly recall LLM coding agents defaulting to pip/poetry and black/flake8, etc. for new projects. They still do that to some extent, but I see them using uv and ruff by default, without any steering from me, with far greater frequency.

Perhaps it's naive optimism, but I generally have hope that new and improved tools will continue to gain adoption and shine through in the training data, especially as post-training and continual learning improve.


However, when improved tools are acquired by capitalists whose business model shows no hint of sustainability, what comes next is a modern Phoebus cartel situation, where further improvement is restricted or even prohibited to fabricate demand for things like "uv Enterprise Pro".

wow, not a bad result on the computer-use benchmark for the mini model. Claude Sonnet 4.6, for example, shows 72.5%, almost on par with GPT-5.4 mini (72.1%). but sonnet costs 4x more on input and 3x more on output


what's the point of this benchmark if sonnet works great on my tasks and mini can't solve them?

re: the specification approach: personally i've found it useful in some cases to write preceding block comments for functions. you can describe the desired behaviour there, input/output types, error cases, etc. you can even build a skeleton out of comment blocks and run one-shot generation against it. the approach is especially useful in iterative development and maintenance.
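a minimal sketch of what i mean, in Python (the function and its spec are invented here for illustration): the docstring is written first as the specification, and the body is what you'd ask a one-shot generation pass to fill in.

```python
import re

# The spec is authored first, as a block comment / docstring skeleton.
# The body below is the kind of output a one-shot generation pass
# would be asked to produce from that spec alone.

def parse_duration(text: str) -> int:
    """Parse a human-readable duration like "2h30m" into seconds.

    Input:  one or more <number><unit> pairs; units are h, m, s.
    Output: total duration in seconds (int).
    Raises: ValueError on empty or malformed input.
    """
    matches = re.findall(r"(\d+)([hms])", text)
    # Reject input containing characters the pattern did not consume.
    if not matches or "".join(n + u for n, u in matches) != text:
        raise ValueError(f"malformed duration: {text!r}")
    factor = {"h": 3600, "m": 60, "s": 1}
    return sum(int(n) * factor[u] for n, u in matches)
```

the skeleton variant is the same file with only the `def` lines and docstrings plus `...` bodies, which also doubles as documentation once the bodies are filled in.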


i like how this research (and others like it) kind of supports the idea that free will might be lacking. i still keep a pinch of skepticism about this idea, understanding that it's just a concept. but personally i like it, because it even feels a bit relieving... not to say that it helps you abandon responsibility, but it makes your stance on life easier, and pushes you not to blame yourself too much for your weaknesses.


What is free will? In Friston's predictive processing framework, free will isn't a force that stands outside the brain and overrides it... it's what the system calls the experience of higher-level predictions outcompeting lower-level ones. The brain is a hierarchical prediction machine constantly minimizing surprise, and what feels like a decision is the resolution of competing models, where your prefrontal self-model of who you are and what matters generates a stronger attractor than the opposing signal. The sense of "I chose this" is likely a post-hoc narrative the DMN constructs after the resolution has already occurred... agency as story rather than cause. There's no ghost in the machine, just a very sophisticated model of a self that includes the prediction that it can choose.

Through the vagus nerve and serotonin availability, a dysbiotic gut amplifies lower-level threat and conservation signals, making them harder for higher-level prefrontal predictions to outcompete. What feels like weakness of will may partly be the system running on a degraded substrate... the DMN then constructs a story about discipline and character over a causal chain that started in the enteric nervous system.

So you can't even really perceive some of this, and you essentially can't overcome it either. The decisions are made before you've thought about them.


"It's not my will, it's the will of the bugs in my butt!" yes, very "relieving."

I kid ;) but I see your point. The idea that you might, say, struggle to resist candy and sweets because some population of your gut biome is fighting for its life when you don't eat sugar... makes sense.

And the idea that "I just cut sugar out for six weeks and my willpower to resist sugar went through the roof"... not because your willpower changed, but because you killed off that part of your gut biome.


I remember reading a CF blog post about crawler separation and responsible AI bot principles, where they argue every bot should have one distinct purpose. Now they're building crawling infrastructure themselves, and their own /crawl endpoint lists "training AI systems" as a use case alongside regular crawling. So not only are they in the crawling business now, they're also not following their own separation principle. To be fair, there's business logic here. But it's hard not to notice the irony. https://blog.cloudflare.com/uk-google-ai-crawler-policy/


One has to be highly suspicious of any "fair, better for others" claims coming from corporate entities.

It is the ages old story of https://en.wikipedia.org/wiki/Quod_licet_Iovi%2C_non_licet_b...

Also brings back the irony now apparent in original Google paper: http://infolab.stanford.edu/pub/papers/google.pdf "To make matters worse, some advertisers attempt to gain people’s attention by taking measures meant to mislead automated search engines."


they are seeking talent, not buying the product. this is a valid strategy for devs: just attract attention, no matter what.


Over the years, Meta has bought a lot of "talent" based on a single hit, and they continue to be one-hit wonders despite being embedded at Meta, with ungodly amounts of resources at their disposal. E.g. none of the game studios they bought have produced new IP; all they do is produce content for the aging, pre-acquisition games.


You're not wrong, I just wish you were lmao


Curious whether people here see value in this kind of research: using alternative public data to assess vendor risk before a breach, rather than after. We're aware that "we found signals before a known breach" is a weaker claim than "these signals predicted a breach we didn't know about yet." Is retrospective analysis like this useful to practitioners, or does it only matter if it can be made prospective?


This matches what I've seen too. Though I'd add another dimension: soft skills. In my experience, job searching has always been easier for people who communicate well regardless of their technical level. And soft skills might be what's making some people more resilient to this market shift specifically


That has always been true (not that I’m saying you don’t know that, I’m using your comment as a jumping off point) in this industry. I am a good developer, but I’m a very good teacher and leader, and soft skills are why I’ve had the career I’ve had over the past two decades.

