they also took down their security pledge in the same breath, so, you know. if anthropic ends up cutting a deal with the DoD this is obviously bullshit.
I feel like the synthesizer--the CMI Fairlight, anything Moog, the Synclavier, the PPG Wave, and the general concept of modular synthesis--is a pretty staunch competitor. Yours is certainly a fun and fair take, and arguably the electric guitar plus tube amps birthed so many genres (blues, soul, funk, rock, punk, metal, etc.), whereas synthesizers remained pretty niche, contributing to experimental music and pop music, mixing into rock, funk, and disco, and the titan of EDM that grew out of that.
On the one hand, this is a beautiful (but depressing) story about humans standing up for each other.
On the other hand, this is clearly propaganda from the BBC to push police state functionality on the UK population by pre-justifying it. "See what happens? Never mind the part about it taking six years. Let us see everything in your fucking lives, you twats."
None of this required a police state. Just people working together to cross-correlate information in the way that you would expect to be able to do in an open society.
What wrong do you think was done here? What would you prefer to be different?
Nothing wrong was done by the police here -- it's all good old-fashioned detective work. But they wanted to have Facebook use facial recognition to find the victim among all the photographs on Facebook. And that actually would have gotten them results faster, because finding the identity of the victim was enough to break the case, in the end. But it also would have been a very bad precedent in terms of surveillance.
I found it quite depressing to read. This guy spent so much time to put just one offender behind bars, but there are likely hundreds of thousands out there. So sad
Same. In a world where police agents are committing atrocities, a world where ICE agents are running amok, it's nice to hear about some actual good that comes from the police force
In the sense that Facebook tries to keep everyone inside Facebook? I haven't kept up with it, but isn't Facebook well known for trying to prevent users from browsing out?
1. The AI here was honestly acting 100% within the realm of “standard OSS discourse.” Being a toxic shit-hat after somebody marginalizes “you” or your code on the internet can easily result in an emotionally unstable reply chain. The LLM is capturing the natural flow of discourse. Look at Rust. Look at StackOverflow. Look at Zig.
2. Scott Hambaugh has a right to be frustrated, and the code is for bootstrapping beginners. But also, man, it seems like we’re headed in a direction where writing code by hand is passé. Maybe we could shift the experience credentialing from “I wrote this code” to “I wrote a clear piece explaining why this code should have been merged.” I’m not 100% in love with the idea of being relegated to review-engineer, but that seems to be where the wind is blowing.
> But also, man, it seems like we’re headed in a direction where writing code by hand is passé,
No, we're not. There are a lot of people with a very large financial stake in telling us that this is the future, but those of us who still trust our own two eyes know better.
Yeah, I remember being forced to write a cryptocoin, and the database it would power, to ensure that global shipping receipts would be better trusted. Years and millions down the toilet, as the world moved on from the hype. And we moved back to SAP.
What the majority does in the field is always shaped by the current trend. Does that trend survive into the future? Pieces always do. Everything, never.
I have no financial stake in it at all. If anything, I'll be hurt by AI. All the same, it's very clear that I'm much more productive when AI writes the code and I spend my time prompting, reviewing, testing, and spot editing.
I think this is true for everyone. Some people just won't admit it for various transparent psychological reasons.
> But also, man, it seems like we’re headed in a direction where writing code by hand is passé
Do you think humans will be able to be effective supervisors or "review-engineers" of LLMs without hands-on coding experience of their own? And if not, how will they get it? That training opportunity is exactly what the given issue in matplotlib was designed to provide, and safeguarding it was the exact reason the LLM PR was rejected.
(In this response I may be heavily discounting the value of debugging, but unit tests also exist)
This is sort of something that I think needs to be better parsed out, as a lot of engineers hold this perspective and I don’t find it to be precise enough.
In college, I got a baseline familiarity with the mechanics of coding, i.e., “what are classes, functions, variables.” But eventually, once I graduated college and entered the workforce, a lot of my pedagogy for “writing good code” came from reading about patterns of good code: SOLID, functional style, favoring immutability. So the impetus for good code isn’t really time in the saddle as much as it is time in the forums/blogs/oreilly-books.
Then my focus shifted more towards understanding networking patterns and protocols and paradigms. Also book-learning driven. I’ll concede that at a micro level, finagling how to make the system stable did require time in the saddle.
But these days when I’m reading a PR, I’m doing static analysis which is primarily not about what has come out of my fingers but what has gone into my brain. I’m thinking about vulnerabilities I’ve read about, corner cases I can imagine.
I’d say once you’ve mastered the mechanics of whatever language you’re programming in, you could become equivalently capable by largely reading and thinking.
> So the impetus for good code isn’t really time in the saddle as much as it is time in the forums/blogs/oreilly-books.
I disagree strongly with this. I read the books, blog-posts, forums, etc early in my career (if you can call it that when I was essentially a teen with a hobby), but didn't fully understand how to apply them, and notably when to apply them, until I had sufficient "time in the saddle". You don't understand the problems that code architecture techniques solve until you've actually had to modify a messy project with a lot of code already written.
> you could become equivalently capable by largely reading and thinking
Theoretically possible, but doing is often orders of magnitude more efficient. You could read reams of books about gardening and still not know how to dig a hole.
Part of the deal is that typing forces you to actually pay attention instead of skimming and assuming you got the gist. Following a tutorial by copy-pasting never really worked as well as typing the code, so why would watching an LLM code be any better? I suspect that even as you're running "static analysis" in your head and looking for vulnerabilities, you're using neural pathways forged while coding by hand.
If past patterns are anything to go by, the complexity moves up to a different level of abstraction.
Don't take this as a concrete prediction - I don't know what will happen - but rather an example of the type of thing that might happen:
We might get much better tooling around rigorously proving program properties, and the best jobs in the industry will be around using them to design, specify and test critical systems, while the actual code that's executing is auto-generated. These will continue to be great jobs that require deep expertise and command excellent salaries.
At the same time, a huge population of technically-interested-but-not-that-technical workers builds casual no-code apps, and the stereotypical CRUD developer just goes extinct.
>Do you think humans will be able to be effective supervisors or "review-engineers" of LLMs without hands-on coding experience of their own? And if not, how will they get it?
They won't. Instead, either AI will improve significantly or (my bet) average code will deteriorate as AI training increasingly eats AI slop, which includes AI code slop, and devs lose basic competencies and become glorified, semi-ignorant managers for AI agents.
The decline of CS degrees, with people just handing in AI work, will further ensure they don't even know the basics after graduating to begin with.
The discourse in the Rust community is way better than that, and I believe being a toxic shit-hat in that community would lead to immediate consequences. Even when there was very serious controversy (the canceled conference talk about reflection) it was deviously phrased through reverse psychology where those on the wronged side wrote blogposts expressing their deep 'heartbreak' and 'weeping with pain and disappointment' about what had transpired. Of course, the fiction was blatant, but also effective.
Stack Overflow is dead because it was a toxic, gatekeeping community that rested on its laurels and clutched its pearls. Most developers I know are savoring its downfall.
The Zig lead is notably bombastic. And there was the recent Zigbook drama.
Rust is a little older, I can’t recall the specifics but I remember some very toxic discourse back in the day.
And then just from my own two eyes. I’ve maintained an open source project that got a couple hundred stars. Some people get really salty when you don’t merge their pull request, even when you suggest reasonable alternatives to their changes.
It doesn’t matter if it’s a blog post or a direct reply. It could be a lengthy GitHub comment thread. It could be a blog post posted to HN saying “come see the drama inherent in the system” but generally there is a subset of software engineers who never learned social skills.
This doesn't feel fair to say to me. I've interacted with Andrew a bunch on the Zig forums, and he has always been patient and helpful. Maybe it looks that way from outside the Zig community, but it does not match my experience at all.
> The AI here was honestly acting 100% within the realm of “standard OSS discourse.”
Regrettably, yes. But I'd like not to forget that this goes both ways. I've seen many instances of maintainers hand-waving at a Code of Conduct with no clear reason besides not liking the fact that someone suggested that the software is bad at fulfilling its stated purpose.
> maybe we could shift the experience credentialing from “I wrote this code” to “I wrote a clear piece explaining why this code should have been merged.”
People should be willing to stand by the code as if they had written it themselves; they should understand it in the way that they understand their own code.
While the AI-generated PR messages typically still stick out like a sore thumb, it seems very unwise to rely on that continuing indefinitely. But then, if things do get to the point where nobody can tell, what's the harm? Just licensing issues?
It's funny because the whole kerfuffle is based on the disagreement over the humanity of these bots. The bot thinks it's a human, so it submits a PR. The maintainer knows the bot is not human, so he rejects it. The bot reacts as a human would, writing an angry and emotional post about the story. The maintainer makes a big fuss because a non-human wrote a hit piece on him. Etc.
I think it could have been handled better. The maintainer could have accepted the PR while politely explaining that such PRs are intentionally reserved for novice developers and that the bot, as an AI, couldn't be considered a novice, so please avoid such simple ones in the future and instead focus on more challenging stuff. I think everyone would have been happier as a result, including the bot.