Hacker News | jatins's comments

> I feel like I really need to learn how to raise money

Well, cofounding Github helps


Title: Good Taste the Only Real Moat Left

Followed by an entire AI-generated fluff piece https://www.pangram.com/history/347cd632-809c-4775-b457-d9bc...

Flagged


I got to the second section before I decided to scan how long it was, saw a wall of text, and decided that this article was low taste.

Moreover, the submitter of this article (probably not the author) spams ~4 submissions per day.


Every fucking time.

I hate this timeline.


> but could not recreate for a good hour.

For certain work, we'll have to let go of this desire.

If you limit yourself to whatever you can recreate, then you are effectively limiting the work you can produce to what you know.


You should limit your output (manual or assisted) to a level that is well under your understanding ceiling.

Kernighan’s Law states that debugging is twice as hard as writing the code in the first place. How do you ever intend to debug something you can’t even write?


It's simple, they'll just let the LLM debug it!

This is why I believe the need for actually good engineers will never go away because LLMs will never be perfect.


Exactly. It's a force multiplier, but sometimes the direction being multiplied is wrong.

That same week I went down a deep rabbit hole with Claude, and at no point did it try to steer me away from that direction, even though it was a dead end.


> Kernighan’s Law states that debugging is twice as hard as writing.

100%, but in a professional setting you often work with code _not_ written by you. What if that code was written by someone well above your ceiling?


That data could be entirely made up for all we know

Flagged for AI

Flagged for AI comment

This sh*t is getting out of hand. I deleted my Twitter recently because every tweet would have AI replies. And HN is suffering the same fate.


You can't have extensions in mobile browsers, right? Yet this seems to target mobile users.

Not in Chrome for Android, and probably not on iOS. But Firefox for Android supports extensions.

Safari on iOS supports extensions

Here's one way to approach this: Imagine you work for a giant company where humans would push 10k lines of code per week. In a codebase like that there's no expectation that you'd understand everything. However, there is an expectation that teams contributing code will "own" it.

So if the client is contributing, you should ask them whether they are okay with long-term maintenance and fixes of the new code they are adding. If not, then maybe you should discuss pricing changes, because now you are effectively maintaining code written by them, which requires a different set of skills and arguably higher cognitive overhead.


Correct. And also hiving off the areas that they own vs. the areas you own. Who has decision rights and controls the burn-down of the kanban board? Basically, treat yourself like an API that they consume. You build the good stuff that you know is right. They are responsible for making their crap work right. (Understandably there is some obvious tension around the interfaces.)

Agree. "Owning" in this context should mean: understanding the domain, working on new capabilities and handling fallout if anything goes wrong. Whether AI or human ownership transfer this ends with the new owner just handling new work, while the other two remain with previous owner (who might emotionally provide support for it due to attachment of "I've built it")

This is likely the best answer so far. It would be the same as if they had subcontracted some software factory where many junior devs produced and shipped low-quality code that “works”.

There's this weird thing about AI generated content where it has the perfect presentation but conveys very little.

For example, the whole animation on this website: what does it say beyond the fact that you make a request to the backend and get a response that may include a tool call?


Also it's just randomly incorrect in places. For instance, it lists "fox" as one of the "Buddy" species, but that's not in the code.

The classification is pretty weird sometimes, too. For example the `/exit` slash command is filed under advanced and experimental commands...

That's been corrected, I did another fact checking pass!

Another? Why weren't all the facts checked on the first pass?

We've moved from "move fast and break things" to "hallucinate fast and patch later." It's the inevitable side effect of using AI to curate AI-written codebases.

When you're always picking the most likely tokens, you get the least surprising tokens, the ones with the lowest surprisal and the least information per token.
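The point above can be made concrete with a toy sketch (hypothetical next-token probabilities, not a real model): greedy decoding, which always takes the highest-probability token, necessarily takes the token with the lowest surprisal (-log2 p), i.e. the least information.

```python
import math

# Toy next-token distribution (made-up probabilities for illustration).
dist = {"the": 0.6, "a": 0.25, "an": 0.1, "zebra": 0.05}

# Surprisal (information content) of each candidate token: -log2(p).
surprisal = {tok: -math.log2(p) for tok, p in dist.items()}

# Greedy decoding picks the most probable token...
greedy = max(dist, key=dist.get)

# ...which is exactly the token carrying the least information,
# since -log2(p) is monotonically decreasing in p.
least_info = min(surprisal, key=surprisal.get)

assert greedy == least_info  # "the": ~0.74 bits vs ~4.32 bits for "zebra"
```

Sampling with temperature trades some of that predictability back for higher-surprisal (more "informative", but riskier) tokens.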

That's fair. The site isn't meant to be a deep technical dive; it's more of a visual, high-level guide to what I curated while exploring the codebase with AI assistance. A 500k LOC codebase is just too much to sift through in a short amount of time.

Really weird, but then it's so easy to spot AI text by this pattern.

I agree with you and I'm generally an AI "defender" when people superficially dismiss AI capabilities, but this is a more subtle point.

If you prompt with little raw material and little actual specification of what you want to see in the end, e.g. you just say "make a detailed, dashboard-like site that analyzes this codebase", the result will have this uncanny character.

I'd describe it as a kind of "fanfic". It (and now I'm not just talking about this website but my overall impression of this phenomenon) reminds me a bit of how, when I was 15 or so, I had an idea of how the world works, and then things turned out to be less flashy, less movie-like, less clear-cut, less-impressive-to-a-teenage-boy than I had thought.

If you know the concept of a "stupid man's idea of a smart man", I'd say AI-made stuff (with little iteration) gives off this outward appearance of a smart man from the Reddit-midwit cinematic universe. It's like how guns in movies sound more like guns than real guns do. It's hyperreality.

Again this is less about the capabilities of AI and it's more connected to the people-pleasing nature of it. It's like you prompt it for some epic dinner and it heaps you up some hmmm epic bacon with bacon yeah (referring to the hivemind-meme). Or BigMac on the poster vs the tray, and the poster one is a model made with different components that are more photogenic. It's a simulacrum.

It looks more like your naive, currently imagined version of what you think you need vs. what you'd actually need. It's like prompting your ideal girlfriend into AI-avatar existence: I'm sure she will fit your ideal thought and imagination much better, but your actual life would need the actual thing.

This relates to the persona work that Anthropic has been exploring: each prompt guides the model toward adopting a certain archetypal fictional character as its persona, and there are certain attraction basins that get reinforced in post-training. And in the computer world, simulated action can easily be turned into real action with harnesses and tools, so I'm not saying it doesn't accomplish the task. But it seems that there are sloppier personas, and that experts can more easily avoid summoning them by giving the model context that reflects mundane reality, compared to a novice, or to an expert who gives little context. Otherwise the AI persona will be summoned from the Reddit midwit movie.

I'm not fully clear about all this, but I think we have a lot to figure out around how to use and judge the output of AI in a productive workflow. I don't think it will go away ever, but will need some trimming at the edges for sure.


I am reminded of Steve Jobs's video where he says Microsoft has no taste, every time MS pushes this stuff on its users. True in the 90s, true now.

Video https://youtu.be/lahX_ARGTqA?si=AnULWzRbl7cc3UWu


Well, unfortunately one can also argue that Apple has no taste anymore. At least, less of it day by day.

