threethirtytwo's comments | Hacker News

The premise of the article is exactly the same as its title. You’re assuming that anyone pointing out these numbers is claiming men are being oppressed, and then arguing against that. There’s no need. Almost no one is making that claim.

The real issue is the opposite narrative: the idea that men broadly oppressed women across history, preventing them from participating in economic roles, and that modern outcomes are primarily the result of that dynamic being corrected.

There are a number of reasons why that premise doesn’t make much sense (without further context). Here are just five:

1. For most of history, labor was dictated by physical constraints and survival needs. Work like hunting, land clearing, construction, and early farming depended heavily on upper body strength and endurance, which skewed participation toward men. That’s a division of labor shaped by biology and environment, not a simple story of exclusion. As technology advanced, much of that physicality became unnecessary, which is a major driver of why participation has equalized in modern roles.

2. Women’s roles were not an absence from work, but a concentration in different forms of labor. Childbearing, childcare, food preparation, and household production were essential to survival, even if they don’t show up cleanly in modern employment statistics.

3. Many of the institutions we associate with modern economic life, including formalized science, engineering, and large-scale industry, originated disproportionately in contexts where men were the primary participants. That’s not a claim about superiority, but it does mean the structure of what we now recognize as “work” and “progress” was heavily shaped by that historical imbalance.

4. The concept of a formal job market is relatively recent. In pre-industrial societies, most people, including men, were not participating in anything resembling today’s labor market. Applying modern employment categories backward creates a distorted picture of inclusion and exclusion.

5. Modern workforce participation is strongly driven by changes in technology and incentives. As physical constraints decreased and the returns to education and careers increased, more women entered and competed in the workforce. That shift is not well explained by a single narrative of oppression being lifted, but by broader structural changes.


Bro, listen to the guy above you. You need the lowest-friction way to help users visualize what this is. By low friction I mean the exact way TikTok gets people to watch thousands of videos for hours. Only one click and zero brain power.

I wanted to buy this. I tried the demo, but then I hit a "no agent connected" wall, gave up, and came here looking for reviews on whether this is good or shit.


Yeah, about an hour before you sent this I updated the homepage with a video.

Unfortunately, that video does not explain anything at all. I now know that the product can be used with a mouse, that I can select things and set some properties. Who is it for? What does it do? Why should I use it?

Just because the path is bad doesn't mean it won't happen.

The other thing you're failing to look at is momentum and majority opinion. When you look at that... nothing's going to change; it's like asking an addict to stop using drugs. The end game of AI will play out; that is the most probable outcome. Better to prepare for the end game.

It's similar to global warming. Everyone gets pissed when I say this, but the end game for global warming will play out. Prevention or mitigation is still possible, but not enough people will change their behavior to stop it. Ironically, it's everyone thinking this way, and the impossibility of stopping everyone from thinking this way, that causes everyone to think and behave this way.


> The other thing you're failing to look at is momentum and majority opinion. When you look at that... nothing's going to change; it's like asking an addict to stop using drugs. The end game of AI will play out; that is the most probable outcome. Better to prepare for the end game.

Perhaps I didn't sound pessimistic enough lol? I completely agree with what you're saying here. This is happening whether we like it or not.

On global warming I also agree you're not going to get every nation to coordinate, but at least global warming has a forcing function somewhere down the line, since there's only a limited amount of fossil fuel in the ground that makes economic sense to extract. AI, on the other hand, really has no clear off-ramp; at every point along the way it makes sense to invest more in AI. I think at best all we can expect to do is slow progress, which might just be enough to ensure that our generation and the next have a somewhat normal life.

My p(doom) is near 99% for a reason... I think that AI progression is basically a certainty – like maybe a 1/200 chance that no significant progress is made from here over the next 50 years. And I also think that significant progress from here more or less guarantees a very bad outcome for humanity. That's a harder one to model, but I think along almost all axes you can assume there are about 50 very bad outcomes for every good outcome – no cancer cure without super viruses, no robotics revolution without killer drones, no mass automation without mass job loss, which results in destabilising the global order and democratic systems of governance...
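
To make the arithmetic behind that explicit, here's a rough sketch with my own assumed numbers (nothing rigorous, just how the two guesses compose):

    # Rough composition of the estimates above; both inputs are my own
    # assumptions, not measurements.
    p_progress = 1 - 1 / 200        # chance significant AI progress happens
    p_bad_given_progress = 50 / 51  # ~50 very bad outcomes per good one
    p_doom = p_progress * p_bad_given_progress
    print(f"p(doom) ~ {p_doom:.3f}")  # ~0.975, in the ballpark of "near 99%"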

I am prepping and have been for years at this point... I'm an OG AI doomer. I've been having literal nightmares about this moment for decades, and right now I'm having nightmares almost every night. It scares me because I know all I can do is delay my fate and that of those I love.


You would if there were one other company with a just-as-capable god-like AI. You’d undercut them by 500, which would make them undercut you. Do that a couple of times and boom: 20 dollars.

That's still assuming that they're competing as consumer tools, rather than competing to discover the next miracle drug or trading algorithm or whatever. The idea is that there'd be more profitable uses for a super-intelligent computer, even if there were more than one.

But would miracle drugs and trading algorithms be as profitable as AI research/chip design/energy research? Probably. If AI is by far the biggest source of growth in the economy, the majority of the AI's internal usage should (as incentivized by economics) in some way work toward making itself better.

I had the opposite reaction. The second half was garbage, but the first half was so good and original I'd recommend it just for that.

Same! I just finished the book a few days ago. The first half is really good, a cool premise and interesting story. The second half just got a bit too weird for me and by the final chapter I was happy it was finished lol.

I liked piecing the story together in the SCP wiki.

Later I read the first version of the book and it was okay, but the vibes were a bit lost.

The new version of the book I didn't even finish.


The first few chapters of that book are some of the coolest I've ever read. I agree it really drops off in the second half, but would still recommend it to people.

> the first half was good and that the second half felt clunky

> The second half was garbage, but the first half was so good

so you had the same reaction?


> To the point where i don’t recommend it to anyone

> but the first half was so good and original I'd recommend it just for that

Attention span so short you couldn't even make it to the second half of the sentence before dismissing it.


I think this comment is unnecessarily harsh.

To anyone confused (like me), the commenters above had opposite recommendations despite having similar opinions of the book.


They were being snarky about a comment when they literally didn't read the entire sentence they were being snarky about. No, I don't think I was unnecessarily harsh.

fair enough, i guess my brain got stuck on reconciling the first quote and i whiffed on the differing recommendations lol

I had the same issue. I figured it out, but if I hadn't I'd have commented similarly. I completely understand where the confusion came from, and I don't attribute it to your reading skills.

It's not pure hype. The linear trendline of AI over the last couple of years, from chatbot to autocomplete to agentic coding, does point to developer replacement in a couple of years.

Now, mind you, a trendline is a predictor, and trends don't always travel in a straight line. The future is unknown, but a trendline predicting that AI takes over in a couple of years is not unrealistic simply because of past progress. That is the most probable conclusion given the information we have.

Discarding that as just "hype" means you're not being very rational or logical, which is normal given we're on HN.


Much of human behavior is evolved, so we don't understand why we do it. For example, human morality is an evolved trait, but you wouldn't know it.

Please explain walking to me so that I can explain it to a person who forgot how to walk such that he can walk after the explanation.


It has visual artifacts during inference.

Less than 6 months ago, I would say, about 50% of HN was at the denial phase, saying it's just a next-token predictor and that it doesn't actually understand code.

To all of you I can only say, you were utterly wrong and I hope you realize how unreliable your judgements all are. Remember I'm saying this to roughly 50% of HN, an internet community that's supposedly more rational and intelligent than other places on the internet. For this community to be so wrong about something so obvious... that's saying something.


It doesn't actually understand anything...let alone code. And I think you are the one who is in denial.

If it doesn’t understand anything, why the fuck are we letting it write all our code when it doesn’t understand code at all? Does that make any sense to you? Does that align with common sense? You’re still in denial.

You gonna give some predictable answer about next-token prediction and probability, or some useless exposition on transformers, while completely avoiding the fact that we don’t understand the black-box emergent properties that make a next-token predictor have properties indistinguishable from intelligence?


I'm letting it write (type out) most (80-98%) of my code, but I see it as an idiot savant. If the idea is simple, I get 100 lines of solid Ruby. Good, saves me time. If the idea is complicated (e.g. a 400-LOC class that distills a certain functionality currently scattered across different methods and objects) and I ask 4 agents to come up with different solutions, I get 4 slightly flawed approaches that don't match how I'd personally architect the feature. And "how I'd personally architect the feature" is literally my expertise. My job isn't typing Ruby, it's making good decisions.

My conclusion is that at this point, LLMs are not capable of making good decisions supported by deep reasoning. They're capable of mimicking that, yes, and it takes some skill to see through them.


Follow the trendline. It went from autocomplete to agentic coding. What do you think will happen to your “good decision making” in a couple years?

As of right now, the one-shot complex solutions AI comes up with are actually frequently extremely good. It’s only gonna get better, and this happened in the last 6 months. You could be outdated on frontier model progress. That’s how quickly things are changing.


This is not an appeal to authority, but this video probably contains the answers to your questions if you are open-minded about it:

https://www.youtube.com/watch?v=qvNCVYkHKfg


What questions do I have? I didn’t even mention a single question and you hallucinated an assumption that I have questions.

I don’t have any questions about LLMs. At least not any more than, say, an LLM researcher at Anthropic working on model interpretability.


Can't you count? Are you an LLM?

No. I'm not an LLM, but you have intellectual issues. Counting? What does that have to do with anything?

Go count the number of questions in your comment.

They're called rhetorical questions. Look it up.

Oh, I thought you were genuinely wondering...

> To all of you I can only say, you were utterly wrong and I hope you realize how unreliable your judgements all are.

They weren't wrong though. It objectively is just a next-token predictor and doesn't understand code. That is how the thing works.


Not true. You’re a next-token predictor too, and clearly the tokens you predict indicate that the way you predict the next token is much, much more than simple probabilistic prediction. You’re a black box and so is the LLM, and the evidence is pointing at emergent properties we don’t completely understand but that are completely in line with what we understand as reasoning.

Don’t make me cite Geoffrey Hinton or other preeminent experts to show you how wrong you all are.

Use your brain. It is changing the industry from the ground up. It understands.


>Don’t make me cite Geoffrey Hinton or other preeminent experts to show you how wrong you all are.

https://www.youtube.com/watch?v=qvNCVYkHKfg


Yann LeCun was vocal about his stance against LLMs very early on and claimed they were a dead end. Well he's been proven fucking wrong. Completely.

Geoffrey Hinton was his mentor, and Hinton is the main godfather of AI, while Yann is more of a malfunctioning student still holding onto the stochastic-parrot moniker. Here's Hinton saying what you need to know:

https://www.reddit.com/r/agi/comments/1qwoee7/godfather_of_a...


> Well he's been proven fucking wrong. Completely.

How was he "proven" wrong?

> Yann is more of a malfunctioning student...

lol what?


He’s proven wrong by reality. Look at what LLMs are doing right now. It’s utterly obvious now that hallucinations are getting reduced and AI is extremely effective…

Yann is malfunctioning because he can’t reconcile his past statements with reality. He can’t admit he’s wrong. As time goes on, his past statements will look more and more absurd as progress on AI keeps moving forward.

At the same time we have Terence Tao using AI to develop new math, and Hinton saying the opposite of Yann, backed by actual evidence and the entire industry. Yann is a clown: https://www.reddit.com/r/singularity/comments/1piro45/people... and his opinions are not mainstream at all.


>now that hallucinations are getting reduced..

Actually that is done by bolting more "fact checking" layers on top. Even that does not fix it very well..

So at a fundamental level, LLMs have not really progressed. On a superficial level, they have, but that is only because marketing wanted to show the "progress" over a short amount of time, so that the "uninitiated" will extrapolate that to mean some god like AI in near future, raking in all the investor money...

Smart move though. It is working very well....


>Actually that is done by bolting more "fact checking" layers on top. Even that does not fix it very well..

Reinforcement training is done as well. And it fixed it quite well such that we use it on a daily basis now.

>So at a fundamental level, LLMs have not really progressed. On a superficial level, they have,

No, those fixes aren't superficial. They're the same fixes you have in your brain. You also fact-check; people also hallucinate, and people with brain damage hallucinate even more. You can bypass the mechanisms in your brain that prevent hallucination by taking drugs.

Essentially the brain is a big hallucination machine with mechanisms, both low-level and high-level, to prevent it. We even consciously fact-check ourselves and double-check our own work. Is that superficial? No.

You look at progress by seeing how LLMs are used. At first they were used as chatbots. Then it became autocomplete. Now basically most people don't code by hand anymore; they use it as an agent. That is the most disruptive thing to ever happen to programming. This isn't an investor thing. This is REALITY.

>So at a fundamental level, LLMs have not really progressed. On a superficial level, they have, but that is only because marketing wanted to show the "progress" over a short amount of time, so that the "uninitiated" will extrapolate that to mean some god like AI in near future, raking in all the investor money...

This is you hallucinating. Investor money is pouring in because they are closer than ever to creating AI that can replace developers, and companies will pay top dollar for that. That's why AI is making money. Very few people are speculating on making a god AI... but a few are, and those are the people throwing money at Yann's AMI venture, which is a huge gamble and could see that money end up in the trash.

But LLM technology? We use it every day. It's already a validated technology.

>Smart move though. It is working very well....

I can ask an LLM, "hey, human society is changing before our very eyes. Nobody programs directly anymore." The LLM is not so stupid as to say that's "superficial" progress. That's a smarter answer than a lot of the people here give.

I would say 6 months ago I would get like 5 or 6 detractors responding to one of my posts like this. Now I think this thread got 2, and a bunch of downvotes. People are realizing they're embarrassingly wrong. It'll hit you eventually, either in the next couple of months or the next couple of years, simply because humanity is pouring so much research into this area that there is no way it won't progress.

For a good analogy you just need to look at self-driving cars. HN used to be loaded with people saying it was a shit venture, totally useless, and that no progress had been made... well, now I regularly take Waymo cars everywhere. Investors were wrong about crypto, but they weren't wrong about self-driving.

I would say the HN crowd is just as stupid as investors, if not more so.


> And it fixed it quite well

Not really. LLMs will still happily hallucinate and even provide "sources" for their claims, and when you check, the sources often don't even exist.

So they will even hallucinate the sources to justify their hallucinated claims. LOL.


I never said it fixed it completely. But it fixed it well enough that we can use it for agentic coding. Fixed it well enough that we don’t type code anymore.

And it’s only going to get better.

It’s clear the LLM hallucinates and understands at the same time.


Yes, I do find it a little funny how the developer community got it all wrong and the non-technical people who thought AI was going to change everything in 2023 were the right ones. Maybe they know more than developers think.

They don't know more. Humanity mostly doesn't know how LLMs work because most of the properties just emerged from the soup of billions of weights whose sheer complexity is so high that understanding any of it holistically is impossible.

The difference is the arrogance. Developers think they know more. Developers think they're smart. And also there's an existential crisis where the LLM are poised to take over developer jobs first. So the developer calls every other layman an idiot and deludes himself into thinking his skills will always be superior to AI.


Whenever I come to HN I see a bunch of people say LLMs are just next-token predictors and that they completely understand LLMs. And almost every one of these people is so utterly self-assured, to the point of total confidence, because they read and understand what transformers do.

Then I watch videos like this, straight from the source, trying to understand LLMs as a black box and even considering the possibility that LLMs have emotions.

How does such a person reconcile being utterly wrong? I used to think HN was full of more intelligent people, but it’s becoming more and more obvious that HNers are pretty average or even below.


I'm kinda one of those who believe they 'completely' understand LLMs. But I've also developed my understanding of them such that the internal mechanisms of the transformer, or really any future development in the space based on neural networks and machine learning, are irrelevant.

1. A string of unicode characters is converted into an array of integer values (tokens) and fed into a black box of choice.

2. The black box takes in the input, does its magic, and returns an output as an array of integer values.

3. The returned output is converted into a string of unicode characters and given to the user, or inserted in a code file, or whatever. At no point does the black box "read" the input in any way analogous to how a human reads.

Where people get "The AIs have emotions!!!" from returning an array of integer values is beyond me. It's definitely more complicated than "next token predictor", but it really is as simple as "make words look like numbers, numbers go in, numbers come out, we make the numbers look like words."
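
Here's that three-step pipeline as a minimal Python sketch (the tokenizer and the black box are toy stand-ins, raw bytes and an echo function, just to make the shape concrete):

    def encode(text: str) -> list[int]:
        # Step 1: unicode string -> array of integers. Real tokenizers use
        # learned subword vocabularies (BPE etc.); bytes are a stand-in here.
        return list(text.encode("utf-8"))

    def black_box(tokens: list[int]) -> list[int]:
        # Step 2: numbers in, numbers out. In reality this is billions of
        # learned weights; here it just echoes to keep the sketch runnable.
        return tokens

    def decode(tokens: list[int]) -> str:
        # Step 3: array of integers -> unicode string shown to the user.
        return bytes(tokens).decode("utf-8")

    print(decode(black_box(encode("numbers in, numbers out"))))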


Yeah, nothing personal, but my claim here is you’re not smart. The next-token-predictor aspect is something anyone can understand… the transformer is not quantum physics.

Like, look at what you wrote. You called it black-box magic, and in the same post you claim you understand LLMs. How the heck can you understand it and call it a black box at the same time?

The level of mental gymnastics and stupidity is through the roof. Clearly the majority of the LLM’s utility lies within the whole section you just waved away as a “black box”.

> Where people get "The AIs have emotions!!!" from returning an array of integer values is beyond me

Let me spell it out for you. Those integers can be translated to the exact same language humans use when they feel identical emotions. So those people claim that the “black box” feels the emotions because what they observe is identical to what they observe in a human.

The LLM can claim it feels emotions just like a human can claim the same thing. We assume humans feel emotions based off of this evidence, but we don’t apply that logic to LLMs? The truth of the matter is we don’t actually know, and it’s equally dumb to claim that you know LLMs feel emotions as it is to claim that they don’t.

You have to be pretty stupid to not realize this is where they are coming from, so there’s an aspect of you lying to yourself here, because I don’t think you’re that stupid.


Of course LLMs display human emotions, if they have been trained on texts that record humans displaying those emotions.

With an input context that contains words that excite certain human emotions, the core LLM function will generate a token probability distribution that is representative of the human emotions displayed by humans in the training texts.

This is something expected and non-sensational. An LLM mimics the human behavior that was recorded in the training texts, much in the same way as a photographic image of a human face mimics the appearance of that human face.

A photographic image is designed to reproduce the light field created by a face reflecting the ambient light; an LLM is created to reproduce the typical conversational behavior that was recorded in the training texts.

Depending on how it was trained, one should expect an LLM to be affected by the choice of words used in the input in a similar way to how a human would be affected.

However, that does not mean that an LLM that shows signs of emotional distress feels some pain because of that. An LLM is designed for mimicry, and it does not feel more pain or more happiness than a photograph of a wound feels pain from the wound, or a photograph of a smiley face feels happiness.

The fact that the current LLMs do not actually feel the human emotions that they may be able to mimic accurately does not mean that you could not build a robot with built-in mechanisms for feeling pain and various emotions, which could be made to serve similar functions as in an animal, serving a functional purpose rather than mimicry. However, for now it does not make sense to attempt such a thing, because in a deterministic program there are better ways to ensure that a robot is "loyal" to its owner and acts in self-preservation when possible.


> Of course LLMs display human emotions

Yes, your entire exposé as to why this occurs is obvious. I agree, and I know this, and it wasn’t my point.

> The fact that the current LLMs do not actually feel the human emotions

This was my point, and what you’re saying here as fact is categorically wrong. We actually don’t know, and the “we don’t know” part is categorically true across industry and academia.

If you read carefully, a big part of my point was that we can’t even prove or confirm that the people around you feel emotions; your assumption that your family and friends feel the same emotions as you is as scientifically baseless as your assumption that LLMs don’t feel emotions.


As other posters have pointed out, the core of an LLM is a pure function, which computes a token probability distribution from an input context.

An automaton which can chat with you or write a program is built externally to the LLM function, by storing the context and making it change depending on the output of the LLM function.
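
As a rough Python sketch of that separation (the llm function here is a hypothetical stand-in that returns a uniform distribution; the point is only that the chat automaton is the loop around the pure function):

    import random

    VOCAB = list(range(256))  # toy vocabulary
    END = 0                   # toy end-of-sequence token

    def llm(context: list[int]) -> list[float]:
        # Stand-in for the pure function: context -> next-token
        # distribution. A real model computes this from learned weights.
        return [1.0 / len(VOCAB)] * len(VOCAB)

    def generate(prompt: list[int], max_tokens: int = 20) -> list[int]:
        context = list(prompt)            # state lives OUTSIDE the function
        for _ in range(max_tokens):
            probs = llm(context)          # pure function call
            token = random.choices(VOCAB, weights=probs)[0]
            if token == END:
                break
            context.append(token)         # feed the output back in
        return context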

However, the LLM pure function is exceedingly complex, so it is essentially unpredictable what it will produce for a given input context.

So one may have to treat the LLM function as a black box and explore the huge space of input contexts by varying them in various ways, including by using words that express human emotions, and monitor how the output of the function changes, i.e. how the LLM "reacts" to the expressed emotions.

A "reaction" similar to that of a human is to be expected, because human emotions were expressed in the training texts, followed by reactions of humans to those emotions, and the LLM function will change its output token probability function in a manner mimicking the behavior of the humans from the training texts.

Even functions that are many orders of magnitude simpler than LLMs are still too complex for anyone to understand how their output changes as you move through the space of possible input arguments.

The most essential part of cryptography is the existence of a class of functions which were named by Claude Shannon "good mixing transformations". All the important cryptographic primitives, e.g. block cipher functions or one-way hash functions, are built from such "good mixing transformations". The impossibility of breaking a cryptographic system with secret keys is based on the assumption that it is impossible to predict how the output of such a "good mixing transformation" changes when its input is changed.

All such "good mixing transformations" have the so-called avalanche property, which means that even if you change a single input bit, any of the output bits may change with a probability of exactly 50%, so it is unpredictable for any output bit whether it will change or not.

If such simple functions, e.g. with 128 input bits and 128 output bits, can have a completely unpredictable behavior, then it is not surprising that LLM functions that may have an input of up to a few million bits (the length of the context window) are completely unpredictable and you can just observe their behavior when given various kinds of contexts and search for empirical approximate rules describing the behavior.
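
One can see the avalanche property concretely with SHA-256 standing in for a "good mixing transformation" (a small illustrative script, assuming nothing beyond the Python standard library):

    import hashlib

    def digest_int(data: bytes) -> int:
        # Interpret the 256-bit SHA-256 digest as one big integer.
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

    msg = bytearray(b"hello world")
    before = digest_int(bytes(msg))
    msg[0] ^= 0x01                 # flip a single input bit
    after = digest_int(bytes(msg))

    flipped = bin(before ^ after).count("1")
    print(f"{flipped} of 256 output bits changed")  # typically ~128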


If you read carefully, my point is not about the external behavior of the LLM. It is the black-box aspect of the LLM. The sheer complexity of the pure function is not something we can understand; even though the high-level structure is a feed-forward network, the core algorithm is in actuality encoded by the weights.

Yes, there are complex functions besides LLMs that we don’t understand, but those functions usually aren’t compelling, because the LLM, unlike those other functions, has output that implicates reasoning and emotions. The problem is we can’t understand what’s going on under the hood, so we don’t know either way.

This is what I mean by stupidity. You completely missed the point, and you’re operating under the assumption that the human brain is not following a similar deterministic pathway. You hold humanity and biological intelligence in such high regard that you cannot even imagine that all of physics implies human intelligence is mechanical. The emotions you feel are under a black box, same as the LLM, and you apply your biased assumptions in a single direction, assuming your emotions are not deterministic and that LLM emotions are fake, but that reasoning has no basis.


I might not have explained it clearly, but my position is not what you have said.

I agree with you that in principle it will be possible to design an artificial automaton that has something equivalent to human emotions (though I do not believe that it makes sense to attempt to design such a system).

However, I do not believe that an LLM is such a thing, because the training algorithm just ensures that an LLM will mimic whatever is recorded in the training inputs, with or without human emotions in them. There is nothing in the structure of an LLM that can generate emotions by itself. If you train an LLM, for example, only on programs without comments or only on mathematical formulae, it will never display any kind of emotions.

Regarding human emotions, they are recorded in a static way in a book or in a movie, but we do not say that the book or the movie has human emotions itself.

With an LLM, the behavior is much more complex, because it does not just play a sequential recording of human emotions, but it can combine them in various ways, while responding to various stimuli that are similar to those that had elicited emotions in the training texts.

But regardless of this behavioral complexity, the human emotions are not generated somehow intrinsically by the LLM, but they correspond to those previously recorded in the texts used for training, so they just mimic humans.


>However, I do not believe that an LLM is such a thing, because the training algorithm just ensures that an LLM will mimic whatever is recorded in the training inputs, with or without human emotions in them.

This does not mean the underlying mechanism does not involve emotions. The logic does not follow. If you train a model to find a solution, it often in actuality becomes a model that finds the solution. It's not always the case that the model becomes one that merely mimics finding the solution.

It's the same thing with emotions. If you train it to output emotions, it is not necessarily the case that the output is just a mimicry of those emotions. We don't actually know.

>Regarding human emotions, they are recorded in a static way in a book or in a movie, but we do not say that the book or the movie has human emotions itself.

But the LLM is not a book. It is something 'else'... an alien intelligence that emerges from training on books. Your analogy does not follow.

>With an LLM, the behavior is much more complex, because it does not just play a sequential recording of human emotions, but it can combine them in various ways, while responding to various stimuli that are similar to those that had elicited emotions in the training texts.

You don't know this. It may feel the emotion in its own way. You're making a careless statement here without proof, knowledge, or evidence.

>But regardless of this behavioral complexity, the human emotions are not generated somehow intrinsically by the LLM, but they correspond to those previously recorded in the texts used for training, so they just mimic humans.

Again, you don't know this. You can't even formally define what a human emotion is, which is a flaw on top of the fact that the black-box nature of the LLM prevents you from understanding what an LLM is doing or "feeling".

Let's say human emotions produce a certain configuration of patterns of action potentials across the brain, and we have sufficient sophistication to categorize these patterns in the same way we can categorize all the complex possibilities of, say, rodents or fruit. Even if we had that, WE still wouldn't know if the LLM felt emotions, SIMPLY because it is a black box. It may be that the thing we trained in order to "mimic" human emotions actually produces the same configuration of numerical signal patterns flowing through the feed-forward network that fits in the "category" of an emotion.

One possible training outcome that meets the requirement of "mimicking" emotions is to actually produce the emotion itself in order to mimic it.


One day I realized I needed to make sure I'm voting on quality stories/comments. I wonder whether a call to vote substantively and often might change the SNR.

The guidelines encourage substantive comments, but maybe voters are part of the solution too. Kinda like having a strong reward model when training LLMs, to avoid reward hacking and other undesirable behavior.


If voters are stupid, then it doesn't really help.

I think what's happening is that reality is asserting itself so hard that people can't be so stupid anymore.

