In the 1930s, when mechanical calculating machines became common in offices, there was a widespread belief that accounting as a career was finished. Instead, the opposite happened. Accounting as a profession grew, becoming far more analytical and strategic than it had been previously.
You are correct that these models primarily address problems that have already been solved. However, that has always been the case for the majority of technical challenges. Before LLMs, we would often spend days searching Stack Overflow to find and adapt the right solution.
Another way to look at this is through the lens of problem decomposition. If a complex problem is a collection of sub-problems, receiving immediate solutions for those components accelerates the path to the final result.
For example, I was recently struggling with a UI feature where I wanted cards to follow a fan-like arc. I couldn't quite get the implementation right until I gave it to Gemini. It didn't solve the entire problem for me, but it suggested an approach involving polar coordinates and sine/cosine values. I was able to take that foundational logic and turn it into the feature I wanted.
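If it helps, here is roughly the kind of math it pointed me toward, reconstructed from memory rather than Gemini's verbatim output (the card count, spread angle, and radius below are made up):

```python
import math

def fan_layout(num_cards, spread_deg=60.0, radius=400.0):
    """Place cards along a fan-shaped arc.

    Returns (x, y, rotation_deg) per card, relative to a pivot point
    below the hand (the center of the circle the cards sit on).
    """
    positions = []
    for i in range(num_cards):
        # Spread the card angles evenly across the fan, centered on vertical.
        t = i / (num_cards - 1) if num_cards > 1 else 0.5
        angle_deg = -spread_deg / 2 + t * spread_deg
        a = math.radians(angle_deg)
        # Polar -> Cartesian: each card sits on a circle around the pivot.
        x = radius * math.sin(a)
        y = -radius * math.cos(a)  # negative y = "up" in screen coordinates
        positions.append((x, y, angle_deg))
    return positions

for x, y, rot in fan_layout(5):
    print(f"x={x:7.1f}  y={y:7.1f}  rotate={rot:6.1f} deg")
```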
Was it a 100x productivity gain? No. But it was easily a 2x gain, because it replaced hours of searching and waiting for a mental breakthrough with immediate direction.
There was also a relevant thread on Hacker News recently regarding "vibe coding":
https://news.ycombinator.com/item?id=45205232
The developer created a unique game using scroll behavior as the primary input. While the technical aspects of scroll events are certainly "solved" problems, the creative application was novel.
It doesn’t have to be, really. Even if it could replace 30% of the documentation and SO scrounging, that’s pretty valuable. Especially since you can offload that and go grab a coffee.
It’s better in the sense that it’s much faster. Bikes and cars don’t theoretically get you to different places than walking, but open up whole categories of what’s practically reachable.
I think the 'better than googling' part is less about the final code and more about the friction.
For example, consider this game:
The game randomly generates a target on the screen and has a player at the middle of the screen who needs to hit it. When a key is pressed, the player swings a rope attached to a metal ball in circles above its head, at a certain rotational velocity. Upon key release, the player lets go of the rope and the ball travels tangentially from the point of release. Each time you hit the target you score.
Now, if I’m trying to calculate the tangential velocity of a projectile leaving a circular path, I could dig up the trig formulas on Stack Overflow. But with an LLM, I can describe the 'vibe' of the game mechanic and get the math scaffolded in seconds.
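To make that concrete, here is a minimal sketch of the release math for the game above (the variable names and the counterclockwise convention are my own):

```python
import math

def release_velocity(omega, radius, theta):
    """Velocity of the ball the instant the rope is released.

    omega:  rotational velocity in rad/s (positive = counterclockwise)
    radius: rope length
    theta:  angle of the ball on the circle at release, in radians

    Speed is omega * radius; direction is tangent to the circle,
    i.e. perpendicular to the rope at the point of release.
    """
    speed = omega * radius
    # Tangent direction for counterclockwise motion at angle theta.
    vx = -speed * math.sin(theta)
    vy = speed * math.cos(theta)
    return vx, vy

# Ball released at the "3 o'clock" position (theta = 0) while spinning
# at 4 rad/s on a 1.5 m rope: it flies straight "up" the y axis.
print(release_velocity(omega=4.0, radius=1.5, theta=0.0))  # (-0.0, 6.0)
```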
It's that shift from searching for syntax to architecting the logic that feels like the real win.
The downside is that you miss the chance to brush up on your math skills, skills that could help you understand and express more complicated requirements.
...This may still be worth it. In any case it will stop being a problem once the human is completely out of the loop.
edit: but personally I hate missing out on the chance to learn something.
That would indeed be the case if one has never learned the stuff. And I am all in for not using AI/LLM for homework/assignments. I don't know about others, but when I was in school, they didn't let us use calculators in exams.
Today, I know very well how to multiply 98123948 and 109823593 by hand. That doesn't mean I will do it by hand if I have a calculator handy.
Also, ancient scholars, most notably Socrates via Plato, opposed writing because they believed it would weaken human memory, create false wisdom, and stifle interactive dialogue. But hey, turns out you learn better if you write and practice.
In later classes in school, the calculator itself didn't help. If you didn't know the material well enough, you didn't know what to put into the calculator.
That's only true in classical electrodynamics, as it happens. If you're in a very strong B-field like you might find near a compact object you'll get nonlinear QED effects.
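For scale, a back-of-envelope calculation of the QED critical field, with constants rounded by me:

```python
# The QED critical ("Schwinger") magnetic field, above which nonlinear
# effects such as photon splitting become important: B_c = m_e^2 c^2 / (e hbar)
m_e  = 9.109e-31   # electron mass, kg
c    = 2.998e8     # speed of light, m/s
e    = 1.602e-19   # elementary charge, C
hbar = 1.055e-34   # reduced Planck constant, J*s

B_crit = m_e**2 * c**2 / (e * hbar)
print(f"B_crit ~ {B_crit:.2e} T")  # ~4.4e9 T (4.4e13 G)

# Magnetar surface fields run roughly 1e9 to 1e11 T, i.e. at or well
# above B_crit, which is why compact objects are the natural laboratory.
```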
The repeaters we typically use on long-distance lines (EDFAs, erbium-doped fiber amplifiers) amplify the signal but do not clean up noise (so across the oceans, you are very much bound by SNR). And you need one of them every 80 km or so in typical fiber.
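To put rough numbers on it, here is the standard back-of-envelope OSNR rule of thumb for a chain of identical spans (the launch power, span loss, and noise figure below are assumptions, not measurements):

```python
import math

def osnr_db(launch_dbm, span_loss_db, nf_db, n_spans):
    """Rule-of-thumb OSNR (0.1 nm reference bandwidth, ~1550 nm) after a
    chain of identical EDFA-amplified spans. The 58 dB constant folds in
    the photon-energy/reference-bandwidth term; each amplifier adds ASE
    noise that is never removed, hence the 10*log10(N) penalty.
    """
    return 58.0 + launch_dbm - span_loss_db - nf_db - 10 * math.log10(n_spans)

# A ~6000 km transoceanic route at 80 km per span is ~75 amplifiers.
# Assumed: 0 dBm launch power, 16 dB span loss, 5 dB noise figure.
print(f"OSNR ~ {osnr_db(0, 16, 5, 75):.1f} dB")  # ~18 dB
```

Every doubling of the span count costs another 3 dB of OSNR, which is why the amplifiers-cannot-clean-noise point matters so much over oceans.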
Versity is really promising. I got a chance to meet Ben recently at the Supercomputing conference in St. Louis and he was super chill about stuff. Big shout out to him.
He also mentioned that the minio-to-versity migration is a straightforward process. Apparently, you just read the data from minio's shadow filesystem and set it as an extended attribute in your file.
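I haven't run the migration myself, but mechanically it sounds like something along these lines; the source path and attribute name here are placeholders I made up, not Versity's actual tooling:

```python
import os

# Hypothetical sketch of the idea as described to me: walk the objects in
# MinIO's backing ("shadow") filesystem and stamp each file with an
# extended attribute that the Versity side can pick up later.
SRC = "/mnt/minio-data"            # assumed location of MinIO's backing files
XATTR_NAME = b"user.objectmeta"    # made-up attribute name

for dirpath, _dirs, files in os.walk(SRC):
    for name in files:
        path = os.path.join(dirpath, name)
        with open(path, "rb") as f:
            meta = f.read()                          # object data/metadata blob
        os.setxattr(path, XATTR_NAME, meta[:64_000])  # xattrs are size-limited
```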
You make a good point, and I mostly agree with it, i.e. that it is more fluid than categorical. However, I don't think the point is being made in good faith. I found the article highly insightful because it provides a solid starting point for those who haven't started negotiating yet, or don't know much about how negotiations happen, and it's safe to assume there are plenty of people in that position. It is also true that the more frameworks one reads and learns about, the more one realizes that each of them has gaps, and that it is indeed fluid, not categorical, thereby reaching the same conclusion.
Asking because I don't know. How is enrichment governed? Say, for instance, if a country is only using it for energy vs. defense/offense. And are there materials that can be used specifically for energy vs. otherwise? Last I remember, having access to enriched uranium was grounds for one country to bomb another.
The only way to ensure that a civil uranium enrichment program remains strictly civil is via transparency and monitoring. A country that has mastered uranium enrichment technology for fueling civil power reactors could use the same technology to produce bomb-grade uranium. It actually takes more work to enrich natural uranium into fuel for power reactors than it takes to further enrich power reactor fuel into bomb material.
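A quick separative work (SWU) calculation makes this concrete; the value function and mass balance are textbook, while the tails assays are my assumptions:

```python
import math

def V(x):
    """Value function for separative work calculations."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu(x_prod, x_feed, x_tails):
    """kg-SWU needed per kg of product at assay x_prod."""
    feed = (x_prod - x_tails) / (x_feed - x_tails)  # kg feed per kg product
    return V(x_prod) + (feed - 1) * V(x_tails) - feed * V(x_feed)

nat = 0.00711                          # natural uranium: 0.711% U-235
to_leu = swu(0.045, nat, 0.0025)       # natural -> 4.5% reactor fuel
to_heu = swu(0.90, 0.045, nat)         # 4.5% fuel -> 90% weapons-grade
feed_leu = (0.90 - nat) / (0.045 - nat)  # kg of LEU consumed per kg of HEU

print(f"natural -> 4.5% LEU : {to_leu:5.1f} SWU per kg of LEU")     # ~6.9
print(f"4.5% -> 90% HEU     : {to_heu:5.1f} SWU per kg of HEU")     # ~46
print(f"per kg of HEU, the LEU step alone cost {feed_leu * to_leu:5.1f} SWU")
```

With those assumptions, getting natural uranium up to reactor grade accounts for roughly three quarters of the total separative work on the way to 90%.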
This is scary. So the extra effort to move from, say, 20% to 85% is relatively small compared with the effort to get up to 20% in the first place. Might as well build a feature into the reactor so that it only works with <=20%.
I think two reasons. 1. If reactors can't function above 20%, a country having access to a >20% enriched payload is a clear violation, versus "60% enrichment is still for clean energy, my reactor works with it." 2. If you are only buying the payload and not enriching it yourself, you can't do anything with >20%. It's like mixing methyl alcohol into lab ethyl alcohol to deter lab techs from diluting it with water and having a rager.
You should read the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) as it addresses several of those issues. Possession of highly enriched uranium isn't necessarily an act of war by itself.
Natural uranium on earth is currently about 0.7% U-235; civilian power reactors typically need low-enriched uranium which is 3% to 5% U-235.
The critical mass required for a weapon shrinks as enrichment increases; implosion designs would require an infinite mass at or below 5.4% enrichment (see https://en.wikipedia.org/wiki/Enriched_uranium).
Weapons-grade uranium is more like 85%+ U-235. Enrichment above around 20% is what really raises red flags.
Modern weapons use plutonium rather than uranium, but uranium weapons can be constructed.
All it takes is the enrichment to produce the fissile material for a weapon.
As far as I know, countries have agreed not to build weapons, with the exception of those that already have them. There is an international body that monitors enrichment sites, but checks are voluntary: a country can choose not to accept inspections and/or build additional secret enrichment sites.
The fissile material is not sufficient for a weapon, though; as I understand it, there is quite a bit of science that goes into making a bomb.
Additionally, first-generation weapons are large and unwieldy, i.e. it takes a bomber to deploy a single weapon with a very small yield.
Miniaturization, i.e. building a weapon small and light enough to put on a missile, is a significant problem that took the current powers years to solve.
But that's about it: if you can figure out how to make a small bomb of variable yield, you can make bombs small enough to fit in a large backpack, as well as thermonuclear weapons that fit on a ballistic missile.
IAEA inspections verify your claimed inventory and enrichment facilities; they are trying to detect whether any nuclear material is being skimmed or diverted. As for weapons: nuclear fuel is very low enrichment (usually under 5%). Iran went past 60%, which has no peaceful use, so that is why it was said they were pursuing weapons.
Imo that's a pretty complicated topic. On one side, if you just build LWRs, you don't need very highly enriched uranium or plutonium, so possession of those is a red flag. On the other side, fast breeder reactors are the ones able to produce the least harmful waste. But fast breeders and closed fuel cycles produce and handle plutonium, which in turn can be used for bad things.
You don't actually need enriched uranium for nuclear power. The design is easier if you enrich it, but there are reactor designs (heavy-water reactors like CANDU, for example) that work on unenriched uranium.
The solution to these issues is just to manage the enrichment supply chain. If a country wants nuclear power but can't be trusted, supply them with at-cost uranium.
I was trying to figure out what's actually new in this version, or in fact in any version after the iPhone 10/X (I don't know). They all look the same to me.
I personally think that Apple and other smartphone companies need to do minor and major version releases like you do with software. Every 3-5 years, do a major release. This way you create significant hardware/software features every major version, hype that is well backed up, while still working, improving, and making money through the minor versions. Plus, you don't have to rely on planned obsolescence, since people gravitate toward the major release naturally.
>I personally think that Apple and other smartphone companies need to do minor and major version releases like you do with software. Every 3-5 years, do a major release.
That's basically what they've been doing. That's why people whine when they aren't blown away every year now.
You don't have to buy a new phone every year, and indeed most people don't. The changes are incremental, but if you buy a new phone after 3+ years, the new one will probably be noticeably better than your old one, enough to make a difference to you.
The late 2000s to early 2010s were exciting because there was a lot of room for improvement. It's not so easy now, and that's fine, because if you buy a new iPhone, it'll last longer and be overall better than what you got a decade or more ago.
My take on the Fed's unaccountable power is that you can't and shouldn't do anything about it; keep it the way it is. If the Fed does a good job, the economy keeps floating along just fine. If it doesn't, it's just chaos and catastrophe, which benefits no one, including the Fed. So in essence, the job is to prevent disaster. Personally, I think it is a super shitty job.
The normal (voting) person is oblivious to both what the Fed does and what would happen if the Fed didn't do it. All that normal (voting) people care about are the numbers on price tags, and the fact that those numbers aren't as low as they used to be.
I have been using Python's built-in HTTP server to get this working. Go to the directory containing your index.html, start the server with python3 -m http.server, and voila, everything is now importable and locally accessible.
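In case it helps, the whole workflow is something like this (the port number is arbitrary; 8000 is the default):

```
cd /path/to/project          # the directory containing index.html
python3 -m http.server 8000
# then open http://localhost:8000/ in the browser
```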
Not true. If XSS is used to compromise an admin user, the damage can be far greater than what a seemingly harmless SQL injection that just reads extra columns from a table can do.
This particular comment feels more like an over-concentration on trivialities than a refutation or critique of the opinion.