To be fair, Google does it too. The product I work on was just renamed to Gemini Enterprise. Sure, we use Gemini, but it’s confusing because it’s not really an “enterprise” version of Gemini; it’s just a way to name-drop what it uses under the hood. This was our third rename in 4 years, so it will probably change again soon
I can't quite relate. Over the past few years I have been using facebook more and more. I use it almost solely for Marketplace and Groups. You can buy literally anything on marketplace for a fraction of the price new. You can sell things on marketplace for more than you bought them for at the store. It's actually quite amazing.
Groups are also really great. I have a lot of hobbies and you can join local groups where people trade stuff or just chat about things related to the topic. I have met some really cool people in real life from facebook groups. Into overlanding in your region? There is a group for that. Into rare Trichocereus or trading rare fig cuttings? There are groups for those. It feels much more personal than reddit because it's connected to a profile that actually has real information/photos associated with it.
Occasionally I end up scrolling videos on fb which appear to just be extensions of reels on Instagram. Doesn't appear to be any different, literally crossover comments even. OP is probably seeing the chum because facebook is going off of nothing.
I’m not sure everyone would agree with that statement. As a more senior engineer at a big tech company, I can say our execs still expect more code output at higher levels. Hell, they even measure and rate you on lines-of-code deltas.
I don’t agree with it or believe it’s smart, but it’s the world we live in
This year the family favorite everyone was fighting to play, including the adults, was the new Lite-Brite Touch https://amzn.to/3MROaJs
Really satisfying to click the buttons and see the super-bright lights as a young kid. Games like Mirror were easy yet technical, which had us all competing for high scores. Definitely well thought out
Everyone appreciated that it didn’t involve cleaning up little plastic pieces like the original Lite-Brite
No clue, but don’t some states require that you prove your age to view content? That would force you to share private information that could be leaked like this, which is even more worrisome.
I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.
I do believe that product leadership is shoehorning it into every nook and cranny of the world right now, and there are reasons to be annoyed by that, but there are also countless incredible, mind-blowing use cases that you can use it for every day.
I need to write about some absolutely life-changing scenarios, including:
- it drafted a legal letter quoting laws I knew nothing about and got me thousands of dollars
- it saved me countless hours troubleshooting an RV electrical problem
- it found bugs in my code that everyone around me had missed
- my wife was impressed with my seemingly custom week-long meal plan that fit her short-term no-soy/no-dairy allergy diet
- it helped me solve an issue with my house that a trained professional completely missed
- it designed and wrote the code for a Halloween robot decoration I had been trying to build for years
- it saves my wife, an audiobook narrator, hundreds of hours by summarizing the characters in her books so she doesn't have to read an entire book before narrating the voices
I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too. Today it's quite amazing to have these tools at our disposal and as we add them in smart ways to systems that exist today, things will only get better.
Call me glass half full... but maybe it's because I don't live in Seattle
It's not about the tech; the negativity is due to the mismatch between hype and reality. LLMs are incredibly useful for certain things, like the ones you have found. Others simply don't work.
Is it going to deliver on even 1% of the hype any time soon? Unlikely.
1% of what hype? AGI? Because other than AGI, I think it's delivered on most of the hype already.
I think our tooling is holding us back more than the actual models, and even if they never advance at all from here (unlikely), we'll still get years of improvement and innovation.
I'm mostly saying the hype is real on a lot of things today. Is it working perfectly for everything? Definitely not, but I'm of the opinion that if you give it another 10 years it just might be. I'm among the many working to make it better, and all I see is a million possibilities of what can be done; we have only worked through a few of the issues. Did it change EVERYTHING overnight? No, it was a big breakthrough; the rest is still catching up.
Hype along the lines of people not having work anymore and "AGI" being around the corner, etc., is real?
Yes, strong AI is always about 10 years off.
But yes any new tech takes time to work itself out. No question that LLMs are useful but they will wildly under-deliver by current hype standards. They have their own strengths and weaknesses like everything, but they can be very misleading, thus the hype.
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread
Yep.
I feel like actually, being negative on AI is the common view now, even though every other HN commenter thinks they’re the only contrarian in the world to see the light and surely the masses must be misguided for not seeing it their way.
The same way people love to think they’re cooler than the masses by hating [famous pop artist]. “But that’s not real music!” they cry.
And that’s fine. Frankly, most of my AI skeptic friends are missing out on a skill that’s helped me a fair bit in my day to day at work and casually. Their loss.
Like it or not, LLMs are here to stay. The same way social media boomed and was here to stay, the same way e-commerce boomed and was here to stay… there’s now a whole new vertical that didn’t exist before.
Of course there will be washouts over time as the hype subsides, but who cares? LLMs are still wicked cool to me.
I don’t even work in AI, I just think they’re fascinating. The same way it was fascinating to me when I made a computer say “Hello, world!” for the first time.
I think the disconnect for me is that I want AI to do a bunch of mundane stuff in my job where it is likely to be discouraged so I can focus on my work. My employer's CEO just implemented an Elon-style "top 5" bi-weekly report. Would they find it acceptable for me to submit AI-generated writing? I just had to do my annual self and peer reviews. Is AI writing valid here? A company wanted to put me, a senior engineer, through a five stage interview process, including a software-graded Leetcode style assessment. Should I be able to use AI to complete it?
These aren't meant to be gotcha rhetorical questions, just parts of my professional life where AI _isn't_ desirable by those in power, even if they're some of the only real world use cases where I'd want to use it. As someone said upthread, I want AI to do my dishes and laundry so I can focus on leisure and creative pursuits (or, in my job, writing code). I don't want AI doing creative stuff for me so I can do dishes and laundry.
> I feel like actually, being negative on AI is the common view now, even though every other HN commenter thinks they’re the only contrarian in the world to see the light and surely the masses must be misguided for not seeing it their way
I have mostly seen people on HN criticizing the few people in tech who have attached themselves to the hype and senselessly push it everywhere, not "the masses." The masses don't particularly like AI. It seems like it's only people hyping it that think everyone but Luddites are into it.
You're simultaneously painting anti-AI sentiment as a popular bandwagon people join to look cool, and as a fringe view while everyone else is actually loving AI. Which is it?
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.
What I feel is people are denouncing the problems and describing them as not being worth the tradeoff, not necessarily saying it has zero use cases. On the other end of the spectrum we have claims such as:
> countless incredible use cases that are mind blowing, that you can use it every day for.
Maybe those blow your mind, but not everyone’s mind is blown so easily.
For every one of your cases, I can give you a counter example where doing the same went horribly wrong. From cases being dismissed due to non-existent laws being quoted, to people being poisoned by following LLM instructions.
> I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too.
No, they are not! We can’t keep making climate change worse and fix it later. We can’t keep spreading misinformation at this rate and fix it later. We can’t keep increasing mass surveillance at this rate and fix it later. That “fix it later” attitude is frankly naive. You are falling for the narrative that got us into shit in the first place. Nothing will be “fixed later”, the powerful actors will just extract whatever they can and bolt.
> and as we add them in smart ways to systems that exist today, things will only get better.
No, they will not. Things are getting worse now, it’s absurd to think it’s inevitable they’ll get better.
Yea I do think you make a lot of valid points about the tradeoffs of the advances. I think anything we do to progress humanity technologically will have negative outcomes on everything else. I think as humans make things better for ourselves it will almost always rely on destroying something in nature in return. The capitalistic world we live in will almost always drive that to the extreme quickly.
As for the other points, are the LLMs wrong sometimes, yes. But so are humans so it's not really a novel thing to point out. The question is, are they more correct than humans? I have seen that they can be more accurate, less biased, etc., and we are driving toward higher accuracy and other ways to make them right.
And the fix-later attitude is not great toward everything; I was referring to the accuracy issues that people often point out as why AI is hype. The things you mention are side effects, and those should be controlled, because the cat is out of the bag. You can spend your time yelling at clouds or try to do something to make it better. I assure you, capitalism is a tough enemy. This is no different from the combustion engine, which has negative consequences for the environment in its own ways.
I'm not disagreeing with you... mostly just saying: the hype is warranted
> are the LLMs wrong sometimes, yes. But so are humans so it's not really a novel thing to point out.
The thing with humans is that you can build trust. I know exactly who to ask if I have a question about music, or medicine, or a myriad of other topics. I know those people will know the answers and be able to assess their level of confidence in them. If they don’t know, they can figure it out. If they are mistaken, they’ll come back and correct themselves without me having to do anything.
Comparing LLMs to random humans is the wrong methodology.
> This is no different from the combustion engine, which has negative consequences for the environment in its own ways.
Combustion engines don’t make it easy to spy on people, lie to them, and undermine trust in democracy.
I like this a lot better because you don’t have to visualize the way a number looks to remember the association, you say the word in your mind and just mentally say the number that rhymes. Seems faster to get the hang of
Yea, as someone who lives on a busy road with daily visibility into how many people flout the law, I basically did this to force the city to make changes to the street. There really isn’t much you can do about the folks who break the law and drive away, but high-def video of daily shenanigans is great ammo for other types of solutions that force drivers into making better decisions.
I work for a big tech company that was already hiring a ton in Canada; I have to imagine this is going to add massive amounts of fuel to the fire. Are they just going to accept that offshoring is the next best alternative? And by offshoring I mean immigrants moving to Canada and working for American companies because their work visas are better
I think it’s more that OpenAI has the name to throw around and a lot of credibility, but no products that are profitable. They are burning cash and need to show a curve where they can reach profitability. Getting 15 people with 15 ideas they can throw their weight behind is worth a lot
Yeah, more or less. Being in the application space as well as the inference space hedges a variety of risks, that inference margins will squeeze, that competition will continue to increase, etc etc.
Yea and if you look at all of the job openings they have right now, they are mostly in the “applied AI” space which is a very different thing from what they have been doing altogether. This is mostly generic enterprise development which is how they will try to become profitable