RivieraKid's comments

> I don't see how this is the case if you're anything more than a junior engineer... it unlocks so many possibilities.

I really don't understand this way of thinking. Don't you think that AI could replace senior engineers? Sure, companies will be able to do bigger / better / more ambitious stuff - but without any software engineers.

> Why is it BS? I'm shocked that anyone with a love and passion for technology can feel this way. Have you not seen the long history of automation and what it has brought humanity?

I definitely think that AI will be a net benefit for society, but it could easily end up being bad for me.


there doesn't seem to be a limit in terms of the ceiling of what companies can do with software, probably the most elastic demand out of any industry ever

the swe role is going to change but problem solving systems thinkers with initiative won't go away


That's a possible outcome. Another possibility is that AI will handle all of the thinking and problem solving part. So the market value of thinking will drop. The bottleneck will still be humans, but their input will be (1) doing physical, real-world stuff (2) providing data that the AI doesn't have, e.g. information about a specific problem domain or how a user interface feels.

assuming no asi, the market value of thinking without accountability trends to zero, the bottleneck will be thinking + accountability, at least for knowledge work

if ai truly solves novel thinking then nothing is a barrier. the physical world is downstream from robotics which is downstream from software. it'll be able to persuade nation states to collect data for itself etc etc (insert sci fi ending)


So far AI doesn't seem even close to replacing senior engineers. Hell, it can't even replace junior engineers entirely.

I use AI agents every day at work and I'm happy with that, but it took over two years and billions of dollars in investment to deliver anything useful (Claude Code et al). The current models are amazing, but they still randomly make mistakes that even a junior wouldn't make.

There's another paradigm shift to be made, certainly, because currently it feels like we scaled up a big brain to spit out code. It works great for some problems, but it's not what software developers usually do at work.


If existing capital starts to generate excessive profits, more capital will be built, which will require human labor and will make the original capital less valuable.

In theory. In practice, the excessive capital of the incumbent allows them to price out or buy the budding competition, or the legislators, so as to protect their position.

The natural state of a capitalist system is the monopoly.


Usually yes.

Interesting, this shows growth in open positions: https://www.trueup.io/job-trend


True but there is also a massive proliferation of ghost jobs. Dirty secret for a bunch of Series A places


Something needs to be done about these bots; it is getting eerie. Yesterday a bot created an account named 100xLLM the moment I responded to it, just so it could respond back.


I see everyone say this but what is the point of making a fake job opening? Are companies just doing that to see if a unicorn applies?


There's a dozen different angles all coming out at once. I'll try to summarize some.

- really wants to hire an H1B candidate, but needs to pretend to interview others first for compliance. These usually have absurd requirements to make it viable to reject anyone.

- really wants to do an internal or referral hire or promotion, but needs to interview for HR compliance. These usually have such specific requirements that only the person they want qualifies.

- posts jobs because a company wants to look like it's growing, even when it's not.

- posts jobs either to signal to an employee that they are replaceable, or to reassure a stressed employee that more help is coming. Either way, it's a bluff.

- yes, sometimes you want to hold out for the perfect unicorn and are not in any way in a rush to find them. There's no distinction for this, but job posts are cheap so why not?

- outdated posts that still stay up because there's no rush to take it down.

- a technique used to lower compensation. They post a job, see how many applications it gets. If it's more than enough, they take it down (with no interviews) then put it up once more at a lower rate. Repeat until not enough people apply. This may or may not lead to interviews because the actual goal is market probing.

- purely to advertise the company instead of actually hiring. Usually done at career fairs where you talk and realize there's no actual open positions.
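The rate-probing technique a couple of bullets up is essentially a loop: repost at ever-lower pay until applications dry up. A purely illustrative sketch (the applicant model, numbers, and function names are all made up, not real market data):

```python
def probe_market_rate(start_rate, step, enough, count_applicants):
    """Repost at ever-lower rates until applications fall below
    'enough'; return the last rate that still drew enough applicants."""
    rate = start_rate
    while count_applicants(rate) >= enough:
        rate -= step
    # We overshot by one step: back up to the last rate that worked.
    return rate + step

# Toy applicant model: higher pay draws linearly more applications.
applicants = lambda hourly_rate: hourly_rate // 10

floor = probe_market_rate(start_rate=100, step=10, enough=5,
                          count_applicants=applicants)
print(floor)  # → 50: at 40/hr only 4 people apply, so 50 is the floor
```

As the comment notes, the goal is market probing, not hiring, which is why no interviews ever happen along the way.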


> outdated posts that still stay up because there's no rush to take it down.

Can also happen when it takes 3 months to get a job posting approved, so once you get one you just leave it up.


Thanks for that. I have seen internal hire before or even "we know who we want but legal makes us post it for 7 days".

The comp technique you mentioned though seems like a lot of work for price discovery, surely there are data sets out there?


It's probably not the most efficient means, no. Probably one of the cheapest methods, though. It's definitely not something you can get away with in a good job market.


As a hiring manager, I have to write the job description. HR is responsible for posting the damn thing where people can see it, then into the ATS you go. We also know recruiting posts can be a source of competitive intelligence and signal for investors. We don't want it used that way, but we're aware of it. Bit of a dirty secret. That means, alas, the only people hurt are the applicants looking for work. I'll work through the queue when I have a billet to fill, but otherwise... You're shouting into the void. Not sure who is responsible for reporting headcount increases to BLS, but I've actively looked and never found the person. So... I honestly have no idea how they get their numbers unless there is a pipeline from the major payroll processors, which feels kinda ick if you think about it.


https://www.bls.gov/k12/teachers/posters/pdf/how-bls-collect... This describes their statistics. I've seen unsourced articles saying they pull Unemployment Insurance numbers as part of it (which are part of the payroll process), but BLS itself seems to say sampling and surveys.


I know, I've just... never found someone who says, "Oh! The BLS survey? That's meeeee! I fill it out!" Ever. Admittedly I haven't necessarily hit up any other platforms, so maybe those people don't visit HN. Totally viable explanation. But... Still pretty weird when I've been periodically poking for years now but never seem to locate anyone who even claims to know of or have been annoyed by having to implement the process of responding. Might bump this search up my todo list for hahas.


I had an AI-generated answer for you, but then I realized something deeper: moral hazard.

> Moral hazard is when one party takes actions that impose costs on others because they don’t fully bear those costs themselves. With ghost jobs, employers get benefits (brand signaling, resume mining, internal optics) while job seekers eat the time, emotional, and sometimes financial cost of chasing something that never really existed.


AI training?

There's an IT careers site that was sold and, I believe, went through a re-branding. Now they also offer AI and "personal" resume reviews _and_ writing, cover letters, and they even have members do a 10-15 minute AI virtual interview that ostensibly could be shown to a hiring manager.

I was unemployed as a PM for about three months. I applied to on the order of 100 roles at this site, as well as applying on the other sites you'd expect, from LI to more niche ones.

I felt that this site was "underperforming". Jobs I'd applied to that I'd only really seen there, I never heard back about. I also saw jobs there that were advertised in other places.

What sealed it for me was that towards the end of the three months, I got an email from the site. "Your profile has been viewed". I opened it: "An employer is looking at your profile". I'd never seen this type of email from them before, and sure enough: "Your profile has been viewed 1 time in the last 90 days". That was it. No contacts, and only one employer had even looked at my profile on the site (and this is the kind of site where that'd be the only place they could look at your application). And that employer didn't even have positions open.

But the site does ask you questions to "submit to the employer" about "why you want to work here" "why you'd make a good fit", etc.

And I'm entirely convinced that only a (very small) fraction of the jobs they're advertising are "real" and ever reviewed by anyone at all (maybe the "promoted" jobs?), and that they're harvesting positions and jobs from other sites or employers (the positions do seem to exist, or at least the ads do)...

... and that their chief motivation for this is getting all your answers to train their models for their actual revenue generator - AI resume writing, cover letter writing, etc. All pre-seeded with other people's real answers to such questions.


Two reasons. One, they have already filled it internally but legally have to post the job. Two, they are gathering data on market trends and what salaries people will take, which is useful if they are considering firing people and rehiring with lower salaries.

I've applied for many jobs where I was perfectly qualified and got rejection notices immediately. I applied on a Sunday and got rejected on Sunday an hour later. No human reviewed that application I made, it was auto rejected, and if that's the case, what other explanation is there than "ghost jobs."


> and if that's the case, what other explanation is there than "ghost jobs."

You didn't pass some arbitrary ruleset given to an AI or machine learning algorithm.

Companies can be very selective now, and usually implement this selectivity fairly stupidly. There also is the problem of being genuinely swamped with bullshit applicants for positions, so the false positive rate is likely quite high at the moment.

I've found it extremely difficult to sort the wheat from the chaff right now. Finding competent people is more difficult than ever, but the sheer number of applicants is at least an order of magnitude higher. Botting has made applying to jobs exceedingly low friction, so there is very little downside for someone entirely unqualified to apply to 600 jobs a day and hope they get lucky.

We have positions that have been open for months that go unfilled simply due to lack of time to sort through applicants, and the few we do have time to interview usually are obviously unqualified within the first 5 minutes of talking to them.


I can't imagine applying to a job where I didn't already have some sort of personal connection. That was already true, and that's even more true now. Likewise, these days as a hiring manager I'd be unlikely to hire someone that came in via random application, for the same reason.


This is undeniably happening as well. Totally agree.

I've just had lots of rejections, including some where I was a good fit, so I don't think "AI auto rejection" is the only story. I have good credentials, several F500 experiences, no big career gaps.

The only real success I have had in the last few years is targeted emails (from who is hiring on HN) or through my network.

It's very different than at any other time and I believe it is a combination of a terrible market, AI rejections, and ghost jobs. And I'm sure there are more than a few ghost jobs.


If you don't have the time to sort them through, there's not much urgency to actually find someone, is there?

It might also point to a filtering mismatch on your end if you get a high false-positive rate.


Oh definitely. And our hiring practices are not exactly state of the art. I'll be the first to admit they need a giant amount of improvement.

Most of the good folks have come in via word of mouth and networks, as they typically do.

For those outstanding positions they are "very nice to haves" but obviously not critical. When the right candidate gets matched we'll jump on the opportunity, but it's not an existential problem for the moment.


> Two reasons. One, they have already filled it internally but legally have to post the job.

This scenario isn't a "fake job"; fake jobs are more akin to ghost/scam/non-existent openings.


I'm looking for a job (in Rust) now and it's absurd how many positions are for training LLMs -in Rust!- (yeah, let's help the people who wanna put everyone out of jobs)


Well, looking for a "programming job in $x" is going to be a problem going forward. Programming itself is going to be a commodity thanks to AI, and it's harder to stand out in a saturated market. I sell myself as someone who can get stuff done using technology and can lead larger initiatives.


What companies are hiring people to use rust?


Loads, I come across it frequently and I'm not actively looking for them.


If you don’t then someone else will… genie is out of the bottle


The issue is that before AI, 1% of the population was capable of creating 1 side project per year. After AI, 10% of the population is capable of creating 10 side projects per year. The competition grew by 100x. The pessimist in me thinks that the window of opportunity to create something successful is shrinking.


> The pessimist in me thinks that the window of opportunity to create something successful is shrinking.

Dunno man. Ideas alone aren't worth anything [0] and execution is everything [1], but good ideas and great execution will never go out of style regardless of how much competition is out there. I'm of the opinion that even if 10% of the population is now capable of creating a side project, there's still the same relatively-fixed amount of people capable of making a good side project, and even fewer who will see it through to a real product. Nothing has really changed in the aggregate. It's like architecture: there are always improvements in materials, tools and processes, and Claude and Codex can provide more laborers for almost free, but most people are still gonna be building uninspired McMansions instead of the Guggenheim.

[0] https://youtu.be/YYkj2yYaGtU?t=112

[1] https://youtu.be/YYkj2yYaGtU?t=160


Disagree. Ideas were a necessary component of the one project I had success with. BTW, the line between ideas and execution is blurred. Is coming up with innovative UI and features ideas or execution?


Ideas are obviously a prerequisite, but they aren't "worth anything" because there are so many of them, and without being executed well (or sometimes, executed at all), they don't really bring any value.

So really, they are comparatively cheap. I, for one, have hundreds of ideas, but have always lacked the time to execute even 5% of them.


Good ideas (and the ability to recognize them) are very valuable in my opinion. It also depends on what you mean by an idea:

- A todo app better than the existing ones

- A todo app with these 3 features

- A todo app with these 3 features, here's how the UI would look

I have tens of ideas, but maybe 1 - 3 that I believe have a meaningful chance to become successful and generate income ($20k annually or more) with great execution. I find it hard to come up with ideas that have a fairly clear path to success and can generate income.


I have hundreds of ideas which I think can generate revenue of the sort you describe, but they each need significant work (execution). Note that $20k annually is already a full annual salary in half of the world, too.


Can you share some out of curiosity? Yes, I know that $20k is a full salary in many countries.


A very simple one: an interview scheduling tool integrated with multiple calendars.

E.g. when interviewing, sometimes I have a pool of interviewers, and I want a pair to be offered to a candidate (who only sees time slots, obviously) with certain internal conditions (one expert from this pool, another from that pool; e.g. by tech stack or timezone), while loading all of them equally.

Sounds simple, but I could never find a tool that does it, and I believe companies might be interested, just like they get tools like Calendly for limited purposes.

iCal format is simple on the face of it, but companies have restrictions on their feeds, and accounting for recurring events and working hours is not trivial.


> I'm of the opinion that even if 10% of the population is now capable of creating a side project, there's still the same relatively-fixed amount of people capable of making a good side project, and even fewer who will see it through to a real product. Nothing has really changed in the aggregate.

What do you mean "nothing has changed"? Using your numbers, the SNR went off a cliff.

Use HN as an example - I used read the new stories all the time before they hit the frontpage, and upvote as needed.

But with 100s of slop submitted for every 1 actual good article, I can't do that anymore.

IOW, I have finite time. If 10% of the population is now able to vomit out side-projects, I am never going to find the one good one because it will be lost in a sea of rubbish.


Correct, but I was replying to the assertion that more slop == decreasing ability to create something good and successful. That's a common trope that people deploy with regards to everything: music, movies, books, social media accounts, brands, blogs, pizza shops, whatever, and it's consistently shown to be false. Plus, we don't live in a monoculture anymore, the SNR you're thinking of is proportional to the mainstream. Successful things nowadays are far more siloed, specific, and serve distinct niches.

And you're right that people still have limited, fixed bandwidth with regards to attention available to give to things.. and the same amount of things that break through doesn't change from what could break through and stick before (in the monoculture). But the amount of niches/verticals where you have the opportunity to break through inside of is significantly higher than ever. That gives you a better chance for success, because your audience is more targeted, more receptive, hungrier for authenticity, hungrier for quality, and desperate for connection to something they like.

TL;DR if you have a good, valuable idea that people want (or don't yet know that they want), execute it well, deliver something that is undeniable, promote it effectively, and stick it out for the long haul, you'll find success. There's no magic formula beyond that, and it doesn't matter if there are 10 or 10 million amateurs clogging the toilet bowl next to you.


> Correct, but I was replying to the assertion that more slop == decreasing ability to create something good and successful.

True; I misunderstood.

You are contesting the assertion "more slop == decreasing ability to create quality"; I am asserting that "more slop == lower overall quality".

FWIW, there's probably an argument to be made against your assertion as stated above, but it's probably going to be a long-winded and ultimately weak one. I'm not really in the mood to explore weak arguments, TBH.


No, why?

Why do you look at it that way? Why does anyone beside you have to care about what you do?

Just build something for yourself. You will always have things you'd like to build for yourself. You will be in competition with yourself only and your target audience will be yourself.

Market forces do not apply to side-projects, because that's what people do for fun.

Just because there are chess computers, doesn't mean that no one plays chess anymore at home.


Isn't it obvious? The reward that a personal project can generate for you is limited. It's not remotely close to what a successful project would give you - money, fulfillment, social capital, feeling good about yourself, etc.


Yes but you see, maybe all of that was wrong in the first place?

This is just a correction of something that managed to remain in an invalid state for an impressively long time.


It was wrong to write software you hoped others would use? The entire open source ecosystem works on this idea; otherwise there would be no point in sharing and we could move to closed software.


Yeah, but we've told ourselves that writing software was some kind of higher mathematics, when in reality it was mostly just plumbing that, surprise, a computer can do too.


> It was wrong to write software you hoped others would use?

Yes.

> The entire open source ecosystem works on this idea otherwise there would be no point in sharing and we can move to closed software.

No.

The _actual_ open source system consisted of hackers scratching their own itch and sharing the artifacts, because (it was assumed that) sharing is free. So if the work is already done and solved their problem, why not also share it as a gift.

This remains unchanged.

The driving force of FOSS is not "how can I fix someone else's problem". It never has been.

Well.. maybe on HN it was different, but that's not "the open source ecosystem". And, yes, maybe some corps have gaslit naive people into believing that they must donate their lives to said corps.


> The _actual_ open source system consisted of hackers scratching their own itch and sharing the artifacts, because (it was assumed that) sharing is free. So if the work is already done and solved their problem, why not also share it as a gift.

If you have the time to scratch your own itch and gift the results, it implies you have a source of income that gives you the time/lifestyle to do such a thing. You might be a tenured academic, or live in a society with a strong safety net. Or you might be able to do your day job in 1/2 the allotted time.

The problem is that those scenarios are eroding precipitously, leaving more people seeking compensation for their work output, whether it is closed or open source.


You think there won't be students or academics anymore? Arguably, most non-corporate-supported FOSS (when that became a thing) was created by students and academics.

So what is really changing?


> So what is really changing?

Higher education is less affordable and accessible to more families, and the value proposition is eroding. CS academics survive by joint ventures with corporations, not by their University salaries.

Escalating cost of living and reduction in institutional support systems push more people toward allocating their scarce spare time toward fundamental needs rather than contributing to the software commons.


I see your point, thanks — it definitely rings true!

I agree the scale will change, but most of the core FOSS we depend on today started off when software development was not as lucrative as it has been in the past 2+ decades — which means it can still happen. It does change the dynamics as you say.


I can’t speak for everyone but it seems to me to be a very human drive to want to be useful to others.

If you are good at something that you enjoy doing and that is valued by others, that’s the ideal scenario. And that’s what writing software looked like for many people for a long time.

That doesn’t mean you should do things just to please others. And it also doesn’t mean you can’t do something just because you enjoy doing it. But it means that these people now have a diminished ability to employ their unique skills to help others while doing something they love doing. That can sting, understandably.


Sure, AI replacing intelligence (simply speaking) is good for the society on average, but probably bad for me.


Not only that, I have a feeling a lot of people are gonna be disappointed now that they can implement their side projects in a week instead of 6 months. Finally - the thing is there, ready. And the likely outcome is

a) Almost no one but you cares and

b) Now that this has become trivial, there's not much joy in it. The struggle we had before AI was the real joy; prompting agents for a few days and getting what you want isn't that joyful.


Ironically I had a very smart and otherwise reasonable math professor who, shortly after Kasparov lost to Deep Blue, said in class that chess was no longer interesting.


In that sense, competitive running is no longer interesting, since I can just ride in a car


I mean I would too be concerned if such a major event _wouldn't_ make people question their assumptions/beliefs/ideas/visions.

If you have no reaction at all, you probably weren't paying attention.

Eventually though, people _should_ recover and return after having processed the changes. So maybe the professor was still recovering at the time?


It's possible. At that time people were talking about Go as the next frontier (that didn't last long). IMO, the game is the same, and for 99.9999% of folks who ever play it, whether a computer can beat the best human is irrelevant in how fun it is to play.


Maybe, but LLMs solve but one issue (maybe two). Take me, for example. I am highly proficient regarding software development in most aspects. Except for that tiny problem: I wouldn't even know what to build. And at least for me, LLMs could not help with that.

The whole side project or even private project thing doesn't just hinge on being able to produce software. There's a lot more.


It's like the business of selling electric drills. People don't really want drills, they want holes. But holes are difficult to sell, so selling the drills is a proxy for that.

In software it's the same thing. People don't really want software, they want data and data transformation. But traditionally the proxy for that has been selling the software (either as a desktop app or, later, as some kind of service).

You could argue that in either case the proxy is not what people want but yet because of the difficulty of selling the "actual" thing the proxy market has flourished.

We're now inventing a new tool that will completely disrupt that market and any software business that is predicated on the complexity required to create the software to transform the data is going to get severely disrupted. Software itself will be worthless.


Software is not becoming worthless.

The value of computers since their inception is that they're capable of transforming data very, very fast and autonomously. But someone has to input that data from the real world or capture it using some device, and someone has to write the rules.

What happened is that we created a whole world of information and the rules have become very complex. Now we have multiple layers stacked vertically and multiple domains spread horizontally. At one time, ASCII was enough; now we have to deal with Unicode.

Software becoming worthless would mean that everyone has learned the rules of the systems we created and is capable of creating systems of good enough quality. I don't see that happening anytime soon.


Software is just a means to an end. Data and data transformation are what people want. Software has a sellable dollar value only because creating the software to do the data transformation has had a real associated cost. I.e. anyone who wanted a particular data transformation had to pay to get the software that does it.

When you drive down that cost you drive down the potential value of the software products. Remember that what is a cost to one party is revenue to the other party. Without revenue there cannot be profit and without revenue software has no dollar value.

If anyone can create "photoshop" with minimal cost and there are thousands of said "photoshop" apps, what will be the retail value of those apps? Close to zero.

This same lifecycle already happened with games. Driving down the cost of producing games resulted in a proliferation of games that are mostly worthless that you can't even give away.


> Software is just a means to an end. Data and data transformation are what people want. Software has a sellable dollar value only because creating the software to do the data transformation has had a real associated cost. I.e. anyone who wanted a particular data transformation had to pay to get the software that does it.

I do agree with you on that point.

> If anyone can create "photoshop" with minimal cost and there are thousands of said "photoshop" apps what will be the retail sell value of those apps. Close to zero.

This is the point that I cannot agree with. Not just anyone can create Photoshop, because of the amount of knowledge you need about the data and the transformations that need to be applied to get a specific result, and then to make a coherent system around it. You can create isolated functions just fine, just like a lot of people know how to build a shed with planks and nails. But even when given all the materials and tools, only a few can build a skyscraper or a mansion.

That knowledge of how to create a coherent system that does something well is the real cost of software. Producing code isn't it.


You're right and I agree with you to an extent. Also as of now the tools aren't quite intelligent enough for one to produce software of that complexity without having someone competent at the helm.

That being said, what already exists was already enough to shatter the stock prices of many software companies, precisely because the fear is that their clients will just re-create the software themselves instead of buying it from someone else.

I guess we'll see how this will pan out in the next few years.


Yes, it's become much easier to fail fast and iterate, but a lot of these fail-fast projects are also trivial for anyone to implement themselves. Differentiating your project is going to be tougher too.

A lot of the moats are gone, but quality (and security) is in a nose dive. AI-built projects might be the Ikea furniture: good for the masses, but there's still a (much smaller) market for well-crafted applications and services. It's hard to say what it'll look like in a couple of years, though. Maybe even the crafting is eventually gone. /shrug


I think we need to change our perspective of what success is. I believe there will be a ton of small companies popping up instead of a few big ones that eat everyone's lunch, like Google, Microsoft and other giants have done until now.


The big ones are successful based on vendor lock-in, network effects, and regulatory capture. AI doesn’t change that dramatically.


But the total market size (in number of products) has also multiplied. For instance, as a relatively tiny example, I created a nutrition tracker. There are hundreds already out there, but they never met my specific desire for one. So I created one with Claude (took maybe 2 hours total over a few days) that completely matches my desire, plus I can tweak it as I want for my needs.

No one else will want this specific piece of software. But I love it.

Sure, there will be 100x the competition, but there will be also 100x the software needs. Now, if you want to get crazy rich building software, that does get tougher, but that's a good thing, I think.


Are most side projects in competition? I wouldn't think so.

Even if they were I disagree that 10x more ideas being produced means 10x more products in competition. You could leverage AI to execute but still have terrible ideas, leadership, product stewardship etc.

I think some clever people with a real and valuable insight will finally be able to turn that insight into a product. I also think the other 9 products will be get-rich-quick attempts by people with nothing to offer.


If the competition just grew by 100x, where's all the great, high-quality, AI-vibe coded side products? Something just isn't adding up here. Could it be that vibe coding on its own just isn't all that useful, and most of those 10% are wasting their time?


The counterpoint is that it's only 2 months since AI got really useful and it will presumably continue to improve. It takes a while until it spreads through the society.


I think the window of opportunity to create boring also-ran software is shrinking.

I think there's more opportunity to do something novel.

AI can't do it, and the humans with the skills to do it are rapidly disappearing.


I can relate. Sincerely debating whether I quit my well-paying and comfortable corporate job and just go full-time entrepreneur before the opportunities disappear.


The game is all about content now. Forget software. Games, movies, books, music, etc. Things that people will always want regardless of how much there already is. Look at the success of AI slop authors and YouTube channels. That's our future.


Regarding expanding role:

The scenario I'm somewhat worried about is that instead of 1 PM, 1 designer and 5 developers, there will be 1 PM, 1 designer and 1 developer. Even if tech employment stays stable or even slightly increases due to Jevons paradox, the share of software developers in tech employment will shrink.


I think more likely: no PM, no designer, one stressed-out mega PM-D-SWE


1 PM, 1 designer, 1 developer, and 10 SREs to clean up the mess.


Is making effective weapons evil?


Given the history of US military adventurism and that we’re about to start another completely unjustified war of aggression against Iran, yes. Absolutely yes.


If it wasn't for US military power, Russia would have already overrun Ukraine. And if the Iranian nuclear program is destroyed and the regime falls, it would be a good thing. For context, I'm from Czechia.


I'm from the US and strongly disagree that either of those things are a benefit to me as a US citizen. All it's doing is taking my money and putting me more at risk, and in the case of the attack on Iran: making me complicit in the most immoral acts imaginable.


As a US citizen you benefit from the status quo and global peace being maintained.


Whether it's justified or not depends on what you're trying to achieve. If your goal is to deny nukes from Iran, then the war is entirely justified.


The same admin that tore up the agreement for this we already had with Iran?


Not the same admin (that was Trump as the 45th), but I don't see the argument you're making.


A weapon is a tool.

Whether they are good or evil depends on the hands that hold it.

In good hands, weapons provide defense, deterrence, and protection.

In bad hands, weapons hurt the innocent, instill fear, and oppress.

The hands that wield them make all the difference.


What about all the weapons forbidden by the Geneva convention?


> What about all the weapons forbidden by the Geneva convention?

Some weapons are prohibited by the Geneva Conventions because they are designed to cause suffering or indiscriminately kill non-combatants:

"Weapons prohibited under the Geneva Convention and associated international humanitarian law (including the 1925 Protocol, CCW, and specific treaties) include chemical/biological agents (mustard gas, sarin), blinding lasers, expanding bullets, and non-detectable fragments. Also banned are anti-personnel landmines and cluster munitions.

Key prohibited and restricted weapons include:

Chemical and Biological Weapons: The 1925 Geneva Protocol and subsequent conventions (1972, 1993) banned the use, development, and stockpiling of asphyxiating, poisonous, or other gases, including nerve agents and biological weapons.

Blinding Laser Weapons: Specifically designed to cause permanent blindness (Protocol IV of the CCW).

Non-detectable Fragments: Weapons designed to injure by fragments not detectable in the human body by X-rays (Protocol I of the CCW).

Incendiary Weapons: Restrictions on using fire-based weapons (like flamethrowers) against civilian populations (Protocol III of the CCW).

Anti-personnel Landmines: Banned under the Ottawa Treaty (1997) due to risks to civilians.

Cluster Munitions: Prohibited due to their indiscriminate nature.

These treaties aim to protect civilians and combatants from unnecessary suffering and long-term danger."

Would "good hands" choose weapons that are designed to cause suffering or that kill indiscriminately?

No, they would not.


That’s a simplistic framing (obviously)


What does effective weapons mean in this particular instance?


Depends what the customers of anthropic and OpenAI think.


Yeah


"You need me on that wall!"


This guy sounds like he ordered a code red.


Yes?


It's extremely slow; it takes several minutes to generate an image.


AFAIK chess has been "solved" for a few years, in the sense that Stockfish running on a modern laptop with 1 minute per move is unbeatable from the starting position.


This is not true. Stockfish is not unbeatable by another engine, or another copy of Stockfish.

Chess engines have been impossible for humans to beat for well over a decade.

But a position in chess being solved is a specific thing, which is still very far from having happened for the starting position. Chess has been solved up to 7 pieces. Solving basically amounts to some absolutely massive tables that have every variation accounted for, so that you know whether a given position will end in a draw, black win or white win. (https://syzygy-tables.info)


The parent is using a different definition, so they put "solved" in quotes. What word would you suggest to describe the situation where the starting position with 32 pieces always ends in either a draw or win for white, regardless of the compute and creativity available to black?

I haven't verified OP's claim attributed to 'someone on the Stockfish discord', but if true, that's fascinating. There would be nothing left for the engine developers to do but improve efficiency and perhaps increase the win-to-draw ratio.


Yea that's true, it's a pretty overloaded word. From what I remember though, even the top players thought that there wasn't anywhere left to go with chess engines, before AlphaZero basically ripped the roof off with a completely different play style back in 2017, beating Stockfish.

And the play style of AlphaZero wasn't different in a way that needs super-trained chess intuition to see; it's outrageously different if you take a look at the games.

I guess my point is that even if the current situation is basically a 'deadlock', it's been proven that it's not some sort of eternal knowledge of the game as of yet. There's still the possibility that a new type of approach could blow the current top engines out of the water, with a completely different take on the game.


However, it is true that Elo gain on "balanced books" has stalled somewhat since Stockfish 16 in 2023, which is also reflected on the CCRL rating lists.

IMO AlphaZero was partially a result of the fact that using more compute also works. Stockfish 10 running on 4x as many CPUs would beat Stockfish 8 by a larger margin than AlphaZero did. To this day, nobody has determined what a "fair" GPU to CPU comparison is.


It's a strange definition of "solved".

War was "solved" when someone made a weapon capable of killing all the enemy soldiers, until someone made a weapon capable of disabling the first weapon.


Do you have a source? I remember asking on the Stockfish Discord and being told that Stockfish on a modern laptop with 1 min per move will never lose against Stockfish with 1000 min per move from the starting position.

But I'm not sure whether that guy was guessing or confident about that claim.


There's the TCEC [0] which is a big thing in some circles. Stockfish does lose every now and then against top engines. [1] Usually it's two different engines playing against one another, though. Like Leela Chess Zero [2] vs. Stockfish.

In that hypothetical of running 2 instances of Stockfish against one another on a modern laptop, with the key difference being minutes of compute time, it'd probably be very close to 100% of draws. Depending on how many games you run. So, if you run a million games, there's probably some outliers. If you run a hundred, maybe not.

When it comes to actually solved positions, the 7-piece tables take around 1TB of RAM to even run. These tablebases are used by Stockfish when you actually want to run it at peak strength. [3]

[0]: https://tcec-chess.com [1]: https://lichess.org/broadcast/tcec-s28-leagues--superfinal/m... [2]: https://lczero.org [3]: https://github.com/syzygy1/tb


doesn't TCEC use opening book?

I remember hearing that the starting position is so draw-ish that it's not practical anymore


TCEC does force different openings yes. Engines play both sides.


Here's a game from a month ago where Stockfish loses to Lc0, played during the TCEC Cup. https://lichess.org/S9AwOvWn

Chess is a finite two-player game of perfect information, so by Zermelo's theorem either one side always wins with optimal play or it's a draw with optimal play. The argument from the Discord person simply says that Stockfish computationally can't come up with a way to beat itself. Whether this is true (and it really sounds like a question about search depth) is separate from whether the game itself is solved, and it very much is not.

Solving chess would be a table that simply lists out the optimal strategy at every node in the game tree. Since this is computationally infeasible, we will certainly never solve chess absent some as yet unknown advance in computation.
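For a sense of what that table-at-every-node definition means, here's a minimal sketch (my illustration, not from the comment) that actually "solves" a toy game, one-pile Nim, by backward induction per Zermelo's theorem. A player may take 1-3 stones; taking the last stone wins:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move wins with optimal play."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # A position is winning iff some move leads to a losing position
    # for the opponent (backward induction over the game tree).
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

# The full "tablebase" for piles up to 20 stones: a value for EVERY position.
table = {n: wins(n) for n in range(21)}

# Known result for this game: the player to move loses iff stones % 4 == 0.
assert all(table[n] == (n % 4 != 0) for n in table)
```

Solving chess would be this exact computation, except the game tree is astronomically larger, which is why the tables only exist for endgames with up to 7 pieces.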


What I meant by "solved" is "never loses from the starting position against Stockfish that has infinite time per move".

In the TCEC game, I see "2. f4?!", so I'm guessing Stockfish was forced to play some specific opening, i.e. it was forced to make a mistake.


That means that Stockfish's parameters are already optimized as far as practically possible for Rapid chess and Slow chess, not that chess itself is solved, or even that Stockfish is fully optimized for Blitz and Bullet.


Surely it is apparent to you that the first few moves are not independently chosen by the engine, but rather intentionally chosen by the TCEC bookmakers to create a position on the edge between a draw and a decisive result.

For what it's worth, Stockfish wins the rematch also. https://tcec-chess.com/#game=13&round=fl&season=cup16


Yes, engines would almost certainly never play 2. f4. That's a different question than whether chess is solved, for which the question of interest would be "given optimal play after 1. e4 e5 2. f4 is the result a win for one side or a draw?"

It's also almost certainly the case, though I don't know why you would do it, that Stockfish given the black pieces and extensive pondering would be meaningfully better than Stockfish with a time-capped move order. Most games are going to be draws, so practically it would take a while to determine this.

I'm of the view that the actual answer for chess is "It's a draw with optimal play."


That just means that Stockfish doesn't get stronger with more than 1 minute per move on a modern computer. It doesn't say anything about other engines.


Stockfish with 1000 minutes per move is an approximation of a perfect chess player. So if Stockfish with 1 minute per move will never lose against a perfect player, it is unbeatable by any chess engine.


> a perfect chess player

How could we possibly know this?

> it is unbeatable by any chess engine

So its engine is finished? There's no further development? No new algorithms?


> How could we possibly know this?

Isn't it obvious that increasing time per move will make the engine better and at some point perfect?

> So its engine is finished? There's no further development? No new algorithms?

No.


Hypothetically, what reward would be worth the cost for you to attempt to beat Stockfish 18, 100 million nodes/move, from the starting position?


“Solved” is a term of art. Defining it in some other way is not really wrong (since it is a definition) but it seems… unnecessary.


You can run Stockfish single threaded in a deterministic manner by specifying nodes searched instead of time, so in principle it is possible to set some kind of bounty for beating Stockfish X at Y nodes per move from the start position, but I haven't seen anyone willing to actually do so.
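As a concrete sketch of what that setup looks like (a hypothetical UCI session, not something from the comment), the determinism comes from pinning the search to one thread and a fixed node count instead of wall-clock time:

```
uci
setoption name Threads value 1
ucinewgame
position startpos
go nodes 1000000
```

With `Threads` set to 1 and `go nodes` instead of `go movetime`, the same Stockfish build should return the same move every run, which is what makes a reproducible bounty possible.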


Even by a Stockfish running on a modern laptop with 2 minutes per move (provided it goes second)?!


Yes, that's what "unbeatable from the starting position" means.


Can you link to the proof? It seems so implausible that chess has been 'solved'... How do we know that an even longer search time will not work?


There's no proof, only strong evidence.


A system for personally owned vehicles has been on the roadmap for a long time.

There's a partnership with Toyota related to this: https://waymo.com/blog/2025/04/waymo-and-toyota-outline-stra...

