
AI is good at the first 80% but terrible at the last 20% of producing good code. And you need to go through that first 80% to really understand what the code is scaffolded to do, an understanding that writing it yourself vastly improves. And typing speed has never been the bottleneck for coding.

Even worse, a whole generation of devs is being trained to not care or learn about that last 20% because the AI does it """all""" for them. That last bit is an unknown unknown for the neo-developer, né prompter.


Your team is creating code you don't really grok just to "get stuff out the door". Guaranteed, a month or a year from now this is going to bite you in the ass, hard.

This will stay useless for editing personal pictures so long as virtually every prompt with a person in it is met with "I can't edit images of some people". For whatever reason, they've made the celeb detection so ultra-aggressive that almost everyone is detected as a (lookalike) celeb.

It's only like that for Europe; you should try a US VPN or, in the worst case, use it via Vertex AI, which lets you generate anyone.

RAM isn't a critical security category like 5G base stations.

Also, I don't think you've seen true consumer rage until the opposition in the EU starts pointing out that the current parties are making the smartphones, laptops, TVs and whatnot that consumers want to buy much more expensive (or much crappier). Large parts of the EU are currently being crushed by one of the worst housing crises in the world, the economy seems to be wavering for young people especially, and cheap tech / gadgets were one of the few rays of light left.


Youth unemployment is actually somewhat low in the EU at the moment. It's at around 15%, which is about the same level as in 2008, before the Great Recession.

Raw unemployment numbers are pretty meaningless on their own. Governments have ways of counting unemployment to get a desired number, for example only counting those registered as seeking work through the government agency. If you're doing some school or training, BAM, you're not counted as unemployed. If you've been unemployed for too long, you're counted as "long-term state welfare" and not as unemployed. If you refuse shitty hard-labor jobs from the unemployment office, you're cut off from benefits and no longer counted either. And other such tricks.

Plus, even taking low unemployment numbers at face value, job quality has fallen a lot, with a lot of people still technically employed but not in great jobs, instead in shitty jobs they do for survival, like fast-food delivery.

The reality is that mass layoffs and SME bankruptcies are a regular occurrence in many EU countries right now.


> RAM isn't a critical security category like 5G base stations.

Those base stations are only security critical because mobile networks are deliberately insecure to enable government surveillance.

And I can imagine backdooring RAM. At least the controller part.


> Large parts of the EU are currently being crushed by one of the worst housing crises in the world, the economy seems to be wavering for young people especially, and tech / gadgets being cheap was one of the sole rays of light left.

Huh?


Prices will return to normal, probably 2-3 years from now. SK Hynix is making absolutely monstrous investments in memory fabs, and CXMT will be entering the market in force more and more.

The biggest problem is that the industry wants HBM, whereas consumers want DRAM. Until the need for HBM has been sufficiently satisfied, fabs will prefer being tooled for HBM because businesses can be squeezed much harder than consumers.

Then again, as a consumer you don't really need DDR5 or even DDR4 so long as you aren't using an iGPU. It's all about being around CL15 timings.
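A quick back-of-the-envelope check of why timings matter more than the generation label: first-word latency in nanoseconds is the CAS latency (in cycles) divided by the memory clock, and the memory clock is half the DDR data rate. The kits below are common retail configurations, picked for illustration.

```python
def first_word_latency_ns(data_rate_mts: float, cas_latency: int) -> float:
    """CAS latency (cycles) divided by the memory clock (half the data rate), in ns."""
    memory_clock_mhz = data_rate_mts / 2
    return cas_latency / memory_clock_mhz * 1000

kits = [
    ("DDR3-1866 CL9",  1866, 9),
    ("DDR4-3200 CL16", 3200, 16),
    ("DDR5-6000 CL30", 6000, 30),
]

for name, rate, cl in kits:
    print(f"{name}: {first_word_latency_ns(rate, cl):.1f} ns")
```

Run it and you get roughly 9.6 ns, 10.0 ns and 10.0 ns: true latency has barely moved across three DDR generations, which is why tight timings on an older generation can feel just as snappy.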


This, pretty much.

The ideal setup is having a separate VLAN for your IoT things that has no internet access. You then bridge specific hubs into it, so the hubs can control the devices and update their firmware.

If you have IoT devices that are unsafe but cannot be updated any other way, you can temporarily bridge the IoT VLAN to WAN.

Honestly, what IoT needs is something similar to LVFS. Make it so all the hubs can grab updates from there and can update any IoT device that supports Matter. It would also serve as a crapware filter, because only brands that care about their products would upload firmware.


Go to the Adguard GitHub (or use the extension) and report it. And get all your friends to switch to Adguard extension and Adguard Home (Pi Hole alternative) as blockers.

Easylist and its sublists are notorious for being poorly maintained and for ignoring issues opened against them. Adguard is much more active in maintaining its lists. Adguard's language blocklists in particular have much, much less breakage and far fewer missed ads than Easylist.


>> And get all your friends to switch to Adguard extension and Adguard Home (Pi Hole alternative) as blockers.

Nice of you to slip this "easy" step into your advice. Give me a break!


..?

If you know how to run a Pi Hole, you know how to run Adguard Home. And installing Chromium / Firefox / Safari extensions isn't exactly rocket science.


The crux is in this sentence of yours:

>...all your friends to switch to ...<

:-))


That Economist stat often gets misunderstood. It is "net contribution to public finances" (= how much tax they pay), not "net contribution to the economy". This is because they are overrepresented in low-wage jobs, or indeed on long-term welfare. People in the lowest tax brackets pay very little tax.

I do agree that there needs to be an honest conversation about what (economic) immigrants offer vs. what they cost, but it needs to be done properly.

We will need immigrants because we are below 2.1 in Total Fertility Rate. But, the EU doesn't need to be the comfy life raft of the world as it has been for the past 2-3 decades.


Hehe, yeah, there are some terms that are just linguistically unintuitive.

"Skill floor" is another one. People generally interpret that one as "must be at least this tall to ride", but it actually means "amount of effort that translates to result". Something that has a high skill floor (if you write "high floor of skill" it makes more sense) means that with very little input you can gain a lot of result. Whereas a low skill floor means something behaves more linearly, where very little input only gains very little result.

Even though it's just the antonym, "skill ceiling" is much more intuitive in that regard.


Are you sure about skill floor? I've only ever heard it used to describe the skill required to get into something, and skill ceiling describes the highest level of mastery. I've never heard your interpretation, and it doesn't make sense to me.

Yes, I am very sure. And it isn't that difficult to understand: it is skill input graphed against effectiveness output. A higher floor just means that with 1 skill, you are guaranteed at least X (say, 20) effectiveness output.

https://imgur.com/tOHltkx

The confusion comes from people using "skill floor" for "learning curve" instead of "effectiveness".
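In code, that reading of floor and ceiling can be sketched as a simple clamp of skill input to effectiveness output (all numbers here are made up for illustration, echoing the AR-vs-sniper example elsewhere in the thread):

```python
def effectiveness(skill: float, floor: float, ceiling: float) -> float:
    """Map skill input to effectiveness output, clamped to [floor, ceiling]."""
    return max(floor, min(ceiling, skill))

# "High skill floor, low ceiling": minimal skill already yields decent output,
# but extra skill stops paying off early.
assault_rifle = lambda s: effectiveness(s, floor=20, ceiling=60)
# "Low skill floor, high ceiling": output tracks input nearly all the way up.
sniper_rifle = lambda s: effectiveness(s, floor=0, ceiling=100)

print(assault_rifle(1))    # 20: decent results with almost no skill
print(assault_rifle(90))   # 60: capped, more skill gains nothing
print(sniper_rifle(1))     # 1: near-useless in unskilled hands
print(sniper_rifle(90))    # 90: keeps scaling with skill
```

Under this model, the floor is the guaranteed minimum output and the ceiling is where further skill stops mattering, which matches the graph linked above.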

But this is a thing where definitions have shifted over time. Like jealousy. People use "jealousy" when they really mean "envy", but correcting someone on it will usually just get you scorn and ridicule, because like I mentioned, language is fluid.


If the skill floor is high and therefore "effectiveness" is the same for a wide range of skill levels, isn't that the same as having a high barrier to entry? It seems that any activity or game where it takes a lot of skill before you can differentiate yourself from other players would be described that way.

No, a high skill floor is the opposite. It means that anyone can pick up the thing and immediately do decently.

To put it simply, think assault rifle vs. sniper rifle. Anyone can pick up the AR, spray and pray, and do pretty okay. You can't do that with the sniper rifle. So the AR has a high skill floor (high minimum effectiveness) whereas the sniper rifle has a low skill floor (low minimum effectiveness). But the AR also has a low skill ceiling: a point past which you can put in endless amounts of skill and see no improvement in effectiveness. The sniper, being an infinite-range OHKO, can scale to the end given aim skill and map knowledge.

Another example would be Reinhardt in Overwatch. You can tell a noob to "look in that direction and deploy shield" and they will contribute to the team. You can't put a noob on Widowmaker and have them contribute (as) significantly.


I've also never heard that use of "skill floor" before. The "floor/ceiling" descriptors imply min/max constraints.

Yeah, sounds like they're confusing AVX2 for AVX512. AVX2 has been common for a decade at least and greatly accelerates performance.

AVX512 is so kludgy that it usually leads to a detriment in performance due to the extreme power requirements triggering thermal throttling.


AMD's implementation very much doesn't have that issue: it throttles slightly, maybe, but it's still a net benefit. The problem with Intel's implementation was that the throttling was immediate on any AVX-512 instruction, and it then took noticeable time to settle and actually start processing again. So the "occasional" AVX-512 instruction (in autovectorized code, or something like an occasional optimized memcpy) was a net negative in performance. That meant AVX-512 only benefitted large chunks of AVX-512-heavy code, where the switching penalty was overcome.

But there's plenty in AVX-512 that really helps real algorithms outside the 512-bit-wide registers. I think it would have been perceived very differently if it had initially shipped as the new instructions on the same 256-bit registers (i.e. AVX10) and had only been extended to 512 bits as the transistor/power budgets allowed. AVX-512 just tied too many things together too early, rather than arriving as incremental extensions.


See this correct comment above: https://news.ycombinator.com/item?id=47061696

AVX-512 leading to thermal throttling is a common myth that, from what I can tell, traces its origins to a blog post about clock throttling on a particular set of low-TDP SKUs from the first generation of Xeon CPUs that supported it (Skylake-X), released over a decade ago: https://blog.cloudflare.com/on-the-dangers-of-intels-frequen...

The results were disputed shortly after by well-known SIMD authors who were unable to duplicate them: https://lemire.me/blog/2018/08/25/avx-512-throttling-heavy-i...

In practice, this has not been an issue for a long time, if ever; clock frequency scaling for AVX modes has been continually improved in subsequent Intel CPU generations (and even more so in AMD Zen 4/5 once AVX512 support was added).


That was true only for the 14 nm Intel Skylake derivatives, which had very bad management of clock frequency and supply voltage, so they scaled down the clock prophylactically, for fear that they would otherwise not be able to prevent overheating fast enough.

All AMD Zen 4 and Zen 5 CPUs, and all Intel CPUs since Ice Lake that support AVX-512, benefit greatly from using it in any application.

Moreover, the AMD Zen CPUs have demonstrated clearly that for vector operations the instruction-set architecture really matters a lot. Unlike the Intel CPUs, the AMD CPUs use exactly the same execution units regardless of whether they execute AVX2 or AVX-512 instructions. Despite this, their speed increases a lot when executing programs compiled for AVX-512 (partly from eliminating bottlenecks in instruction fetching and decoding, and partly because the AVX-512 instruction set is better designed, not just wider).


I think that's slightly old information as well, AVX512 works well on Zen5.

Agree. It's only recently, with modern architectures in the server space, that AVX-512 has shown some benefit. But AVX2 is legit and has been for a long time.

In gamedev it takes 7-10 years before you can require new tech without getting a major backlash. AMD came out with AVX2 support in 2015. And the (vocal-minority) petitions to get AVX2 requirements removed from major games and VR systems are only now starting to quiet down.

So, in order to make use of users' new fancy hardware without abandoning other users' old and busted hardware, you have to support multiple back-ends. Same as it ever was.
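The multiple-back-ends approach boils down to a dispatch table keyed on detected CPU features. A minimal sketch follows; the detect_features() stub and the backend names are hypothetical placeholders, since real code would query cpuid (e.g. via __builtin_cpu_supports in C) and plug in actual vectorized kernels.

```python
def detect_features() -> set:
    """Stand-in for a real CPU-feature probe (cpuid, /proc/cpuinfo, etc.)."""
    return {"sse2", "avx2"}  # placeholder result for illustration

def sum_scalar(xs):  return sum(xs)   # portable fallback
def sum_avx2(xs):    return sum(xs)   # imagine a vectorized kernel here
def sum_avx512(xs):  return sum(xs)   # ditto

# Ordered from most to least demanding; the first supported backend wins.
BACKENDS = [
    ("avx512", sum_avx512),
    ("avx2",   sum_avx2),
    (None,     sum_scalar),   # unconditional fallback
]

def pick_backend(features):
    for required, fn in BACKENDS:
        if required is None or required in features:
            return fn

best = pick_backend(detect_features())
print(best.__name__)   # picks sum_avx2 with the placeholder feature set
```

The selection happens once at startup, so old hardware silently gets the scalar path while new hardware gets the wide one, which is exactly the pattern games shipping optional AVX2 paths use.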

Actually, it's a lot easier today than it ever was. Doom 3 famously required Carmack to implement the rendering 6 times to get the same results out of 6 different styles of GPU that were popular at the time:

ARB (basic fallback, R100): multi-pass, minimal effects, no specular.
NV10 (GeForce 2 / 4 MX): 5 passes, used register combiners.
NV20 (GeForce 3 / 4 Ti): 2-3 passes, vertex programs + combiners.
R200 (Radeon 8500-9200): 1 pass, used ATI_fragment_shader.
NV30 (GeForce FX series): 1 pass, precision optimizations (FP16).
ARB2 (Radeon 9500+ / GeForce 6+): 1 pass, standard high-end GLSL-like assembly.

https://community.khronos.org/t/doom-3/37313

