Hacker News | tharant's comments

When you’re digging around in tens to hundreds of PCs each day, the odds of zapping something are higher. I’ve killed a few chips and boards.


Yep, it's a numbers game. There are things that can increase your risk on a single computer, like working on carpet in dry air. But when you have to build a ton of PCs and move fast, things like anti-static mats and ground straps make a huge difference.


Hi, it’s me.


I can find no such requirement in the App Store Guidelines. Or is there anecdotal evidence somewhere?


They require push notifications to be signed with your certificate. You must maintain the infrastructure to do this yourself because you can't share the certificate with third parties (obviously) and downtime will mean no push notification delivery.

I have no idea if that's in the guidelines but that's how it works.
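
For illustration, here's a rough sketch of the self-hosted piece: a process that holds your APNs certificate and posts directly to Apple's push endpoint. This is a minimal sketch, not a reference implementation; the device token, bundle id, and file names are placeholders, it assumes Python with httpx (and its HTTP/2 extra) rather than whatever stack you'd actually use, and it shows certificate-based auth (token/JWT auth works similarly).

    import httpx

    # Placeholder values for illustration only.
    DEVICE_TOKEN = "abc123deviceTokenFromTheApp"
    BUNDLE_ID = "com.example.myapp"

    # APNs authenticates the provider, so the certificate/key pair has to live
    # on infrastructure you control; hand it to a third party and they can
    # push as you.
    client = httpx.Client(
        http2=True,  # APNs requires HTTP/2
        cert=("apns_cert.pem", "apns_key.pem"),  # your push certificate + key
    )

    resp = client.post(
        f"https://api.push.apple.com/3/device/{DEVICE_TOKEN}",
        headers={"apns-topic": BUNDLE_ID, "apns-push-type": "alert"},
        json={"aps": {"alert": "Hello from our own push service"}},
    )
    print(resp.status_code, resp.text)

If this process is down, nothing queues the push on your behalf, which is the "downtime means no delivery" point above.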


Sure, the model would not “know” about your example, but that’s not the point; the penultimate[0] goal is for the model to figure out the method signature on its own, just as a human dev might leverage her own knowledge and experience to infer it. Intelligence isn’t just rote memorization.

[0] the ultimate, of course, being profit.


I don't think a human dev can divine a method's signature and effects in the general case either. Sure the add() function probably takes 2 numbers, but maybe it takes a list? Or a two-tuple? How would we or the LLM know without having the documentation? And yeah, sure, the LLM can look at the documentation while being used instead of it being part of the training dataset, but that's strictly inferior for practical uses, no?

I'm not sure we're thinking of the same field of AI development. I think I'm talking about the super-autocomplete with an integrated copy of all digitized human knowledge, while you're talking about trying to do (proto-)AGI. Is that it?


> Sure the add() function probably takes 2 numbers, but maybe it takes a list? Or a two-tuple? How would we or the LLM know without having the documentation?

You just listed the possible options in order of their relative probability. A human would attempt them in exactly that order, something like the sketch below.
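
To make that concrete, here's a toy Python sketch (the function names are made up for illustration) of the three shapes mentioned above, in roughly the order a human or a model would guess them; without docs, both are just ranking hypotheses.

    # Three plausible shapes for an undocumented add(), in rough order of likelihood.

    def add_two_numbers(a, b):    # most likely guess: two numbers
        return a + b

    def add_iterable(values):     # next guess: a list of numbers
        return sum(values)

    def add_pair(pair):           # least likely guess: a single two-tuple
        a, b = pair
        return a + b

    # All three are consistent with the name "add"; only the documentation
    # (or the source) settles which one is real.
    assert add_two_numbers(2, 3) == add_iterable([2, 3]) == add_pair((2, 3)) == 5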


Please stop giving me project ideas. :)


> I really fear that a number of engineers are going to use GPT to avoid thinking. They view it as a shortcut to problem solve and it isn't.

How is this sentiment any different from my grandfather’s sentiment that calculators and computers (and probably his grandfather’s view of industrialization) are a shortcut to avoid work? From my perspective most tools are used as a shortcut to avoid work; that’s kinda the whole point—to give us room to think about and work on other stuff.


Because calculators aren't confidently wrong the majority of the time.


In my experience, and for use-cases that are carefully considered, language models are not confidently wrong a majority of the time. The trick is understanding the tool and using it appropriately—thus the “carefully considered” approach to identifying use-cases that can provide value.


In the very narrow fields where I have a deep understanding, LLM output is mostly garbage. It sounds plausible but doesn't stand up to scrutiny. The basics that it can regurgitate from wikipedia sound mostly fine but they are already subtly wrong as soon as they depart from stating very basic facts.

Thus I have to assume that for any topic I do not fully understand - which is the vast majority of human knowledge - it is worse than useless, it is actively misleading. I try to not even read much of what LLMs produce. I might give it some text and riff about it if I need ideas, but LLMs are categorically the wrong tool for factual content.


> In the very narrow fields where I have a deep understanding, LLM output is mostly garbage

> Thus I have to assume that for any topic I do not fully understand - which is the vast majority of human knowledge - it is worse than useless, it is actively misleading.

Why do you have to make that assumption? An expert arborist likely won’t know much about tuning GC parameters for the JVM but that won’t make them “worse than useless” or “actively misleading” when discussing other topics, and especially not when it comes to the stuff that’s relatively tangential to their domain.

I think the difference we have is that I don’t expect the models to be experts in any domain nor do I expect them to always provide factual content; the library can provide factual content—if you know how to use it right.


There's a term for the corollary to what you're trying to argue here: https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

> You open the newspaper to an article on some subject you know well... You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.


A use-case that can be carefully considered requires more knowledge about the use-case than the LLM; it requires you to understand the specific model's training and happy paths; it requires more time to make it output the thing you want than just doing it yourself. If you don't know enough about the subject or the model, you will get confident garbage.


> A use-case that can be carefully considered requires more knowledge about the use-case than the LLM

I would tend to agree with that assertion…

> it requires you to understand the specific model's training and happy paths

But I strongly disagree with that assertion; I know nothing of commercial models’ training corpus, methodology, or even their system prompts; I only know how to use them as a tool for various use-cases.

> it requires more time to make it output the thing you want than just doing it yourself.

And I strongly disagree with that one too. As long as the thing you want it to output is rooted in relatively mainstream or well-known concepts, it’s objectively much faster than you/we are; maybe it’s more expensive, but it’s also crazy fast—which is the point of all tools—and the precision/accuracy of most speedy tools can often be deferred until a later step in the process.

> If you don't know enough about the subject or the model, you will get confident garbage

Once you step outside their comfort zone (their training), well, yah… they do all tend to be unduly confident in their responses—I’d argue however that it is a trait they learned from us; we really like to be confident even when we’re wrong and that trait is borne out dramatically across the internet sources on which a lot of these models were trained.


Did your grandpa think that calculators made engineers worse at their jobs?


I don’t know for certain (he’s no longer around) but I suspect he did. The prevalence of folks who nowadays believe that Gen-AI makes everything worse suggests to me that not much has changed since his time.

I get it; I’m not an AI evangelist, and I get frustrated with the slop too. Gen-AI (like many of the tools we’ve enjoyed over the past few millennia) was and is lauded as “The” singular tool that makes everything better; no tool can fulfill that role, yet we always try to shoehorn our problems into a shape that fits the tool. We just need to use the correct tool for the job. In my mind, the only problem right now is that we have a really capable tool and have identified some really valuable use-cases for it, yet we also keep trying to use it for (what I believe are, given current capabilities) use-cases that don’t fit the tool.

We’ll figure it out but, in the meantime, while I don’t like to generalize that a tech or its use-cases are objectively good/bad, I do tend to have an optimistic outlook for most tech—Gen-AI included.


Is it possible that what happened was an impedance mismatch between you and the engineer such that they couldn’t grok what you told them but ChatGPT was able to describe it in a manner they could understand? Real-life experts (myself included, though I don’t claim to be an expert in much) sometimes have difficulty explaining domain-specific concepts to other folks; it’s not a flaw in anyone, folks just have different ways of assembling mental models.


Whenever someone has done that to me, it's clear they didn't read the ChatGPT output either and were sending it to me as some sort of "look someone else thinks you're wrong".


Again, is it possible you and the other party have (perhaps significantly) different mental models of the domain—or maybe different perspectives of the issues involved? I get that folks can be contrarian (sadly, contrariness is probably my defining trait) but it seems unlikely that someone would argue that you’re wrong by using output they didn’t read. I see impedance mismatches regularly yet folks seem often to assume laziness/apathy/stupidity/pride is the reason for the mismatch. Best advice I ever received is “Assume folks are acting rationally, with good intention, and with a willingness to understand others.” — which for some reason, in my contrarian mind, fits oddly nicely with Hanlon’s razor but I tend to make weird connections like that.


> is it possible you and the other party have (perhaps significantly) different mental models of the domain—or maybe different perspectives of the issues involved?

Yes; however, typically if that's the case they will respond with some variant of "ChatGPT mentioned xyz so I started poking in that direction, does that make sense?" There is a markedly different response when people are using ChatGPT to try to understand better, and that I have no issue with.

I get what you're suggesting, but I don't think people are being malicious; it's more that the discussion has gotten too deep and they're exhausted, so they'd rather opt out. In some cases, yes, it does mean the discussion could've been simplified, but sometimes, when it's a pretty deep, technical reason, that's hard to avoid.

A concrete example: we had to figure out a bug in some assembly code once and we were looking at a specific instruction. I didn't believe that instruction was wrong, and I pointed at the docs suggesting its behavior lined up with what we were observing it do. Someone responded with "I asked ChatGPT and here's what it said: ..." without even a subsequent opinion on the output. In fact, reading the output, it basically restated what I said, but said engineer used that as justification to rewrite the instruction to something else. And at that point I was like, y'know what, I just don't care enough.

Unsurprisingly, it didn't work, and the bug never got fixed because I lost interest in continuing the discussion too.

I think what you're describing does happen in good faith, but I think people also use the wall of text that ChatGPT produces as an indirect way to say "I don't care about your opinion on this matter anymore."


Definitely a possibility.

However, I have a very strong suspicion they also didn't understand the GPT output.

To flesh out the situation a bit further, this was a performance tuning problem with highly concurrent code. This engineer was initially tasked with the problem and they hadn't even bothered to run a profiler on the code. I did, shared my results with them, and the first action they took with my shared data was dumping a thread dump into GPT and asking it where the performance issues were.

Instead, they've simply been littering the code with timing logs in hopes that one of them will tell them what to do.
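
For what it's worth, the "profiler first" step doesn't have to be heavyweight. The code in question presumably wasn't Python, but as a minimal sketch of the idea: snapshot every thread's stack to see where concurrent code is actually waiting, instead of guessing with scattered timing logs. The worker/lock setup below is invented purely to have something contended to look at.

    import sys
    import threading
    import time
    import traceback

    def worker(lock):
        with lock:
            time.sleep(5)  # stand-in for a contended critical section

    lock = threading.Lock()
    for _ in range(4):
        threading.Thread(target=worker, args=(lock,), daemon=True).start()

    time.sleep(0.5)

    # Poor man's thread dump: sys._current_frames() maps thread id -> current
    # stack frame; printing each stack shows the workers blocked on the lock.
    for thread_id, frame in sys._current_frames().items():
        print(f"--- thread {thread_id} ---")
        traceback.print_stack(frame)

A real profiler or a JVM-style thread dump gives the same kind of "where is everyone stuck" picture without touching the code at all.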


I'm sorry, how is this a "senior engineer"? Is this a "they worked in the industry for 6 years and are now senior" type situation or are they an actual senior engineer? Because it seems like they're lacking the basics to work on what you yourself seem to consider senior engineer problems for your project.

Also, what is your history and position in the company? It seems odd that you'd get completely ignored by this supposed senior engineer (something that usually happens more often with overconfident juniors) if you have meaningful experience in the field and domain.


> how is this a "senior engineer"? Is this a "they worked in the industry for 6 years and are now senior" type situation...

Yeah, this is the situation exactly, though I've known a few seniors who were senior just because they'd hung around, not because of their experience.

> what is your history and position in the company? It seems odd that you'd get completely ignored by this supposed senior engineer

Been with the company for over a decade at this point. I think I have a pretty good reputation generally. Someone sent me a "This is why cogman10 is the GOAT" message for some of my technical interactions on large public team chats.

As for why I'm being ignored? I have a bunch of guesses but nothing I'm willing to share.


It sounds like the engineer may have little or no experience with concurrency; a lot of folks (myself included) sometimes struggle with how various systems handle concurrency/parallelism and their side effects. Perhaps this is an opportunity for you to “show, not tell” them how to do it.

But I think my point still holds—it’s not the tool that should be blamed; the engineer just needs to better understand the tool and how/when to use it appropriately.

Of course, our toolboxes just keep filling up with new tools which makes it difficult to remember how to use ‘em all.


I was hoping for a Desk Set reference; thank you.


This is one reason I see to be optimistic about some of the hype around LLMs—folks will have to learn how to write high quality specifications and documentation in order to get good results from a language model; society desperately needs better documentation!


> As for not merging the PR - why are you entitled to have a PR merged?

I didn’t get entitlement vibes from the comment; I think the author believes the PR could have wide benefit, and believes that others support his position, thus the post to HN.

I don’t mean to be preach-y; I’m learning to interpret others by using a kinder mental model of society. Wish me luck!

