Hacker News | new | past | comments | ask | show | jobs | submit | more | MasterScrat's comments | login

How much AI did you use to write up this article? It tripped up my "fake AI-written article" detector a few times, despite being interesting enough to read to the end.


used claude to polish the draft and tighten sentences. the thinking, analysis, and examples are all mine and based on personal experiences. spent the weekend reflecting on my past experiences with claude code and actually digging into why claude code feels the way it does. curious to know what tripped your detector.


Adding to this: too many negatives before making a point, which AI text is prone to do in order to give surface level emphasis to random points in an argument. For example: "I sat there for a second. It didn't lose the thread. It didn't panic. It prioritized like a real engineer would." Then there is the fact that the paragraph ends in just about the same way, which also activates one's AI-voice-detector, so to speak: "This wasn't autocomplete. This was collaboration."

In my opinion, to write is to think. And to write is also to express oneself, not only to create a "communication object," let's put it that way. I would rather read an imperfect human voice than a machine's attempts to fix it. I think it's worth facing the frustration that comes with writing, because the end goal of refining your own argument and your delivery is that much sweeter. Let your human voice shine through.


Lots of things: typical LLM em-dash situations (although using a plain dash). Lists of three after a colon where the three items aren't great. Short sentences for "impact" that sound kind of like a high school essay, i.e. "God level engineer...Zero ego."

I cannot at all understand writing an essay and then having an llm "tighten up the sentences" which instead just makes it sound like slop generated from a list of bullets


“Here’s the thing” “The best part?”


"It's not just X, it's Y"

I find it really hard to read articles that use AI slop aphorisms. Please use your own words, it matters.


What if I no good in English?

Jokes aside, my English is passable and I'm fine with it when writing comments, but I'm very aware that some of it doesn't sound native due to me, well, not being a native speaker.

I use AI to make it sound more fluent when writing for my blog.


As long as your bullet points+prompt are shorter than the output, couldn't you post that instead? The only time I think an LLM might be ethically acceptable for something a human has to read is if you ask it to make it shorter.


I write the full article in my Czenglish (English influenced by Czech sentence structure). Then I let it rewrite it in proper English.

So it's me doing the writing and GPT making it sound more English.


> What if I no good in English?

It would still sound more human coming from you.


Yeah it’s hard to keep interest when there’s no voice, just the same AI feel that you see everywhere else.


Well, actually, what if my own words make me come across as a raging pedantic asshole, you feckless moron!? I don't actually think you're a feckless moron, but sometimes I'll get emotional about this or that, and run my words through an LLM to reword it so that "it's not assholey, it's nice". I may know better than to use the phrase "well actually" seriously these days, but when the point is effective communication, yeah I don't want my readers to be put off by AI-isms, but I also don't want them to get put off by my words being assholey or condescending or too snarky or smug or any number of things that detract from my point. And fwiw, I didn't run this comment through an LLM.


> The start is slow as well, skipping to generation 42168M is recomended.

I picture entities playing with our universe, "it starts slow but check it out at the 13.8B mark"


Philosophically and depending on what schools of thought you follow, reality is just a really complex GoL simulation. I'm sure I read about it once, but if we were living in a simulation, would we be able to know?


I enjoy the [GoL -> our “reality” -> outside-the-simulation] comparison. It really drives home how unlikely we would be to understand the outside-the-simulation world.

Of course, there are other variants (see qntm's https://qntm.org/responsibility) where the simulation _is_ a simulation of the world outside. And we have GoL in GoL :-)
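For anyone who hasn't implemented it: the whole GoL ruleset fits in a few lines, which is part of why it makes such a tempting simulation metaphor. A minimal Python sketch (my own, assuming a toroidal, i.e. wrap-around, grid of 0/1 cells):

```python
# Minimal Conway's Game of Life step on a toroidal grid of 0/1 cells.
def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 neighbors, wrapping around the edges.
            n = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Birth on exactly 3 neighbors; survival on 2 or 3.
            new[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return new
```

A vertical "blinker" (three live cells in a column) flips to a horizontal one and back every two steps, which is a handy sanity check.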


Always a fun read :) They turned it into a Futurama episode.


I think of reality as GoL but 3D, with more states than just 0 and 1, and conservative (it follows conservation laws; no relation to any politics).


The universe could be a probability-based GoL simulation; a basic Turing machine cannot handle that.


HN has a hatred of K8s? That’s new to me


K8s is used in many situations it shouldn't be, and a lot of HNers (including me) are bitter about having to deal with the resulting messes


This is a site for startups. They have no business running k8s, in fact, many of the lessons learned get passed on from graybeards to the younger generation along those lines. Perhaps I'm wrong! I'd love to talk shop somewhere.


And even for "out of distribution" code you can still ask questions: how to do the same thing but more optimized, whether a library could help, why a piece of code gives unexpected output, etc.


I think the concern of "blog comments" is best left to external platforms, e.g. HN, Reddit, etc.

What would be more useful would be an automated list of places where the post has been discussed (and maybe pull the top comments from there through API?)
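That "where has this been discussed" list can be automated against the public Algolia HN Search API. A minimal Python sketch (function names are my own, not from any existing tool):

```python
# Hypothetical sketch: find HN threads discussing a given post URL
# via the public Algolia HN Search API (hn.algolia.com/api/v1/search).
import json
import urllib.parse
import urllib.request

def search_url(post_url):
    """Build the Algolia HN Search query for stories matching a URL."""
    params = urllib.parse.urlencode({
        "query": post_url,
        "restrictSearchableAttributes": "url",  # match the story URL only
        "tags": "story",
    })
    return "https://hn.algolia.com/api/v1/search?" + params

def hn_discussions(post_url):
    """Return title, points, comment count, and thread link per hit."""
    with urllib.request.urlopen(search_url(post_url)) as resp:
        hits = json.load(resp)["hits"]
    return [
        {
            "title": h.get("title"),
            "points": h.get("points"),
            "comments": h.get("num_comments"),
            "thread": "https://news.ycombinator.com/item?id=" + h["objectID"],
        }
        for h in hits
    ]
```

Pulling the top comments from each thread would then be a second request per `objectID` against the same API's items endpoint.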


There used to be a time when comments were attached to the posts, where anyone could come, leave their name and a comment, and let the author know about needed edits, misspellings, or how they liked the article.

Social media ruined that. Everyone is now on their own soap box posting comments of drivel from their sub-optimal self-conscious parroting asinine talking points about how one characterized group of statistics ruined it for everyone else. Bots, drivel, linkbacks, social media, stupid laws, and an aversion to independence - we have what we have today. Large platforms that trick humans into use because they have the largest arenas.

Also, the author’s experience with seeing scammy ads on their site doesn’t mean that others are seeing the same ads. Because they ran ad-free for so long, it’s possible their token in the AdTech ecosystem is stale, in which case the system hasn’t put it into any buckets yet. Ergo, you get the smoking/drinking/scamming/doesn’t-fit category.

A “token” is a device or ident signature used to identify a viewer or user so that they can tabulate impressions, build personas, categorize your shopping habits, track the sites you visit, link your token with others in your proximity.


> Also, the author’s experience with seeing scammy ads on their site doesn’t mean that others are seeing the same ads...

Well, so they may see worse ads.


> Social media ruined that. Everyone is now on their own soap box posting comments of drivel from their sub-optimal self-conscious parroting asinine talking points about how one characterized group of statistics ruined it for everyone else.

Partially agree, partially disagree. Blog comments were already dead when SEO fraudsters discovered that "linkbacks" could be abused for spam even easier than comments were.


Correct. Site owners moved their communities over to social media pages because they couldn't handle moderating the waves of spam comments that littered every single post on their site. They figured, let Facebook/Twitter handle moderation. Then FB closed the gates, de-emphasized posts with outbound links and now site owners are screwed.


Yep. That's 99% of the culprit.



This looks interesting, but at least on mobile, it’s riddled with too many ads to be readable.


Congrats on shipping!

I’d love to hear a bit about the ML side of things: what was your experience with various models? Do you see a clear cost vs quality tradeoff with current state of the art models? How do open vs closed models compare?


We run an API to finetune text-to-image models (dreamlook.ai), as a two-person team.

When we launched 3 years ago, our differentiator was that we could train both cheaper and faster by running on TPUs; these days GPUs have mostly caught up, and open source models are not as competitive as they once were.

It’s making ~5k/month these days, not bad as we’re no longer actively working on it, but a fraction of what we were doing a year ago.

The main challenge for us was the non-technical part. We built an API-first product because we love the tech and felt it’d allow us to focus on that part. But we still had to do marketing, sales, support, etc., which we didn’t enjoy or excel at.

Now we’re both back in larger companies where we can focus on doing ML. It was satisfying to build a working business from scratch, no regrets, but I’m definitely happier now.


I used this back in the day! You somehow did better LoRA training than others in the space! Found you via Discord.


Interesting work! Do you have any figures about GPU costs your service would incur monthly and how much was spent while building it?


17 years later, this is still the most impressive demo of 3D audio I have ever seen.

I've tried various headphones with "Spatial Audio", I have Airpods which do spatialization, yet none of this comes close to that barber shop.


On macOS, if you toggle the OS from dark to light mode and back, you can see the HDR effect being turned off for a second.

