Hacker News

Here's what I learned working at Samsung:

At the end of the day, the stakes are way too high to allow everyone a chance to play cowboy with the tools. You want your best possible champions to bring a design to bear. Then, the other 99% of the army is responsible for implementing the design in as repeatable & standardized a fashion as physically possible so that there is even the remotest chance of yield+profit. Software is the antithesis of this: the cost of playing around with your tools is not even worth accounting for. Due to certain cultural effects, you probably don't want to do software in the semiconductor industry unless you really, really like the problem domain.

Even if you don't get to play around with the billion dollar tools, you still get to help troubleshoot some of the most intensely complicated problems on earth. Solving these riddles is very rewarding and the experience will stick with you forever. Hard to package those 2 sentences up into a PR campaign for the young generation, but I'm sure we can spin it if the DoD can still find ways to recruit.



>> 99% of the army is responsible for implementing the design in as repeatable & standardized a fashion as physically possible so that there is even the remotest chance of yield+profit.

That sounds like a mature and competitive field. I am suspicious of companies and fields where people aren't working like this. Look at the car industry, or energy, or farming, or shipping, or even aerospace. Only one of every thousand engineers at Boeing will ever decide the shape of an aircraft wing. The rest are there to implement and optimize its construction. Any company not spending 99% of its energies on optimization does not operate in a competitive environment and is therefore very likely on borrowed time. Eventually a competitor will appear, or an IP monopoly will expire, and the easy times will be over.


> Any company not spending 99% of its energies on optimization does not operate in a competitive environment

That’s how a company stagnates and eventually gets replaced. Yes, operational efficiency and optimization are important, but you only focus on that if you have a big moat and not much growth ahead of you. You need to spend at least 30% on growth and innovation so you can stay relevant. Higher if you are the one trying to replace the 99% companies.


A "mature and competitive field" is going to already have all low-hanging fruit taken and implemented. The moat is operational efficiency. The barrier to entry is the high costs of investment and general lack of access to know-how.

Not every industry is like software/tech. And the tech industry is already moving towards a "mature and competitive field" model. Otherwise, why are Google and Meta cutting all their unnecessary spending and touting efficiency over growth (besides AI tech, I suppose)? Why all the layoffs?


Hah, having worked in automotive, things are dysfunctional as shit. Oh, they ship things, sure, but mostly as a consequence of sheer will: plowing man-hours into things until they kinda work.

Curious what your experience was operating in those fields.


And here I was wondering how manufacturers can take a thing that works and make it not work in a new engine version.

Shit like "we did chain-driven timing and it worked, but in the new engine it prematurely wears", or "someone decided it was fine for belt-driven timing to take a dip in oil; oh, surprise, we now have parts of the belt landing in various parts of the engine".

Or my favourite: "the metal, belt-driven water pump is almost never replaced through the lifetime of the engine and it's cheap; let's make the impeller plastic and also drive it from a separate electric motor".


Forget chains. Gear-driven cams are best. My motorcycle had them, but Honda switched to chains in the newer models because cam gears made a small high-pitched noise.


For RPM, for sure, although I'm not really a fan of the noise.


(Or it's a very young industry)


In the grand scheme of things, in a universe that's billions of years old, humanity is 200k+ years old. Civilization is 6k years old.

Computers? 100 years old.

What part of any "modern" industry isn't still very, very young? Even industries like construction that have been around for thousands of years are still relatively young.

Computers and computing? Infinitesimal.


By "very young", I'm talking about industry that is still figuring out the right direction. Monoplanes made biplanes obsolete, no matter how perfectly the biplane's wing was designed. That kind of dramatic and rapid change tends to happen during the infancy of an industry.

See the early shifts from vacuum tubes to semiconductors, from expertly crafted BJTs to crude CMOS, or the rapid march of good-enough architectures on the latest process node clobbering beautiful architectures on older nodes. As the industry has matured and its course has stabilized, perfecting the design has grown in importance.


Old by how close the industry's products are to physical limits.

For example, the newest gas turbine designs already exceed 50% of the maximum theoretically possible efficiency allowed by thermodynamics. So it doesn't matter how many thousands or millions more years gas turbines are optimized, by us, aliens, future descendants, etc...

No future gas turbine industry in this universe can possibly double the thermodynamic efficiency of the finished product.


>Old by how close the industry's products are to physical limits.

I was just listening to an AI podcast and they were discussing going from the 1 second, 1 minute, 1 day [unit of response - I can't recall the name of the measurement] -- but I assume that's the "Moore's law" of AI right now?

And as we get closer to the physical limits of chip production scale/die/etc., I assume they will be scaled horizontally while the GPT-X capabilities will scale volumetrically.

Is this a sound assumption?


Not OP, but even GPT-X and ML reach limits due to lack of compute and/or datasets.

For example, CNNs were largely known and understood by the 1990s-2000s, but the compute simply didn't exist until GPU manufacturing became commoditized.

OpenAI's massive quantum leap is thanks to the massive corpora they were able to leverage, which, until the past 5 years, simply didn't exist.

This is why we've seen massive jumps in Mandarin Machine Translation and Computer Vision from the PRC due to their massive corpora/dataset of English+Mandarin language news from CCTV/CGTN and local surveillance camera data respectively.

This compute limit is a big reason why the US Federal Govt has been working on the Exascale Computing Project for example.


>"reaches limits due to lack of compute and/or datasets.

Yes! That's what I mean by scaling VOLUMETRICALLY -- scaling horizontally and vertically are now replaced by scaling volumetrically. (coining a term?)

It's going to be an obloid-warping spheroid.

That's what I see down the pipe.

Thoughts, anyone?


The technical terms are Scale Up vs Scale Out.

Even for training on a massive dataset, you are still limited by compute and processing time (good ole Computational Complexity), which is why HPC projects like the Exascale Compute Project were created in 2015 along with additional funding+research in efficient and alternative data structures.

I highly recommend going down the rabbit hole of High Performance/Accelerated Machine Learning.


Scale 'out' has been around forever.

The eminence of AI is more 'volumetric' than 'out'

Out scales "up and out like a hill or a lift"

- volumetric is spherical - it presses into the future AND the past (it already has been harvesting history, but created a fire-hosed spigot-interface for the future as well) -- and draws it into its center for eval...

They are different.

Always on the positive expressions of XYZ axis - but had ignored the negative of each, where AGI will go in all dimensions...


I don't think this aligns with the Chinchilla scaling law. There is indeed a point at which you can oversaturate a model with data, as well as such a thing as not giving it enough. Compute is the constraining factor, and it scales more or less linearly in both directions.
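For intuition, the Chinchilla result is often boiled down to a rough heuristic: training compute C ≈ 6·N·D FLOPs for N parameters and D tokens, with the compute-optimal point near D ≈ 20·N. A toy sketch of that relationship (the constants are approximate fits from the paper, not exact laws):

```python
def chinchilla_optimal(compute_flops):
    # With C = 6 * N * D and the compute-optimal D = 20 * N,
    # C = 120 * N^2, so solve for N and then D.
    params = (compute_flops / 120) ** 0.5
    tokens = 20 * params
    return params, tokens

# Chinchilla itself used roughly 5.76e23 training FLOPs; this recovers
# approximately its reported ~70B parameters and ~1.4T tokens.
n, d = chinchilla_optimal(5.76e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```

Note the "linear in both directions" point: at fixed N, compute grows linearly in D (and vice versa); it's only the optimal N-D pair that moves along a square-root curve in compute.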


Thank you - what is your recommendation for the best info on that law?


>Solving these riddles is very rewarding and the experience will stick with you forever.

That was also my experience working in the semi field (in Europe). People working there weren't in it for obscene compensation; they were in it for the hands-on puzzle solving of uniquely complex HW problems with cool, rare, and expensive machinery. Some were quite significantly underpaid, and while they knew it, they never felt the need to complain too badly about it. I guess it's kind of like the arts, which is a shame, because here some publicly traded corporation is abusing you for your passion.

Also, the fact that most of Europe doesn't have many FAANGs and big SW companies paying orders of magnitude more to unbalance the job market surely helps to not discourage people from this industry.


I'm in digital physical design and part of that 99% of the army. The schedule is king. If we make a mistake it is another $30 million for mask costs and 4 months in the fab to get a new chip to test. Contrast with software where you can change 1 line of code and recompile and test in seconds. We do so much to minimize risk by reusing existing blocks, licensing third party IP that has already been validated in silicon, and armies of verification engineers.


> recompile and test in seconds

i wish


Compare that to the lead time of electronics, though. I just started in PCB design, which is leagues more forgiving time-wise than silicon design and fab, and even then, if a non-generic component I'm designing for goes out of stock, the lead time for a new batch is anywhere from a month to a few years, on average around 4-6 months.

Thinking about the design process, testing and revisions, all the way to fab and then market, we're probably talking years just to see a single design reach your test bench (I'm speculating). Oh and millions, because the major chip fabs only do things in large batches.

Compare this to software. It is, indeed, seconds in comparison.

I'd love for semiconductor fabrication to get fragmented out like it sounds like it will.


> I'd love for semiconductor fabrication to get fragmented out like it sounds like it will.

Why does it matter for lead times?

If 2 fabs with a lead time of X turn into 5 with a lead time of X... you're still waiting.


Lead time isn't because of some magical bottleneck. It's because of demand and batch size. If university students can somehow decimate the batch size needed per design, and enough groups do this (or find new ways to reduce the process size in order for more firms to do this), and enough of them do it at once to help demand, then the lead time goes down, and so do costs.


Do you understand what lead times are?

For example, 9 women can't have a baby in 1 month, because there is a series of necessary processes that have to be undergone and can't be accelerated.


You seem to have missed the GP's point. The bottleneck in the lead time of semiconductors is not physics at all.


The months-long lead time is not purely a scheduling/capacity issue, though. Producing the masks for these modern processes is very complex and slow. Adding more fabs won't help because it's a latency issue, not a throughput issue.


The OP is talking about years-long lead times.

AFAIK (though I never studied the latest processes), making a mask takes weeks. That means the minimum lead time for making masks is weeks long, because there is no interdependence between them. But I'm sure this is one of the steps that adds up to years in practice, because of production limitations.

Actually making the chips has a higher floor, because every step is interdependent. With a hundred steps, each one taking half a day, we are talking about 2 months here. There are probably more than a hundred, and a few take more than half a day (but many take less), so yeah, I'd easily expect a 4-month minimum. Which doesn't compare to years at all.
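A back-of-the-envelope version of that floor (all numbers here are the assumptions from the comment, not real fab data):

```python
# Minimum wafer-processing time if every step is strictly sequential.
# Masks are excluded since they can be made in parallel (weeks each).
steps = 100             # assumed number of sequential process steps
days_per_step = 0.5     # assumed average duration per step
floor_days = steps * days_per_step
print(floor_days, "days, roughly", round(floor_days / 30, 1), "months")
```

With more than a hundred steps and a few slower ones, that ~50-day floor stretches toward the 4-month figure, still well short of years.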


Those are called supply chain shortages.

I'm fairly certain the actual lead times to procure semi-custom chips are indeed 1 month+. And a fully custom chip is 1 year+.


And you don't need to compile in many cases.


Yeah, but that speed also means we can go from idea to broad release quickly, which means both quicker time to market to deliver value and that a screw-up scales quickly too.


I'm a hobbyist in this area, and I think the field is somewhat early in its development. MCUs and cyber-physical systems were like this until the Arduino happened. Arduino may not have been the first to do exactly that, but it was just good enough to cause a (re?)explosion in electronics as a hobby. During its heyday, I would say Arduino was a core element of the Maker movement.

So what needs to happen to make this a reality for semicon? First off, we need cheap, cheap fabrication. I actually looked at public funding in Canada and how it was going to the big-name universities that had their own in-house fab labs (at older process nodes). The cost for someone not on the inside was nuts. The actual cost should be in the 100s of dollars to fabricate a design (considering the marginal costs).

There are people who do this at home, but that doesn't work either, due to the chemicals being pretty dangerous and the need for a bunch of equipment. I bet for the money the EU spent on its first metaverse town hall (or whatever it was called .. the thing very few people attended), or a tiny fraction of what Canada wastes on silly things promoting youth culture or whatever, they could fund a lab that is actually open to the public, with the express mission of promoting hobbyists and education. This will NEVER happen because (a) it needs a professor who is on the inside with a kid-like passion for this tech and a commitment to bringing it to the masses (I see some profs like this at schools like MIT, but it is so rare at large, competitive schools like the big ones in Canada), and (b) it does not have an instant payoff for the govt. They don't want dabblers and vague educational outcomes. They want workers with degrees.

I am convinced that before I am dead, advances in robotics and fabrication will simplify the process (or use home equipment such as future laser printers for printing stencils). I'd love to spend my retirement fabricating my own CPUs :D

Edit:

Let me add: I don't mean the cutting-edge process node. I mean the kind of process node that was used to make the very first chips (but less toxic, repeatable, with cheaper equipment). If it is possible for synthetic biology, it must be doable for semicon :D


I'm a 25 year professional in the industry so I've never really thought of the hobbyist side. I use software from commercial Cadence and Synopsys that has a list price of over $1 million for a single physical design tool license and we use about 200 of those licenses simultaneously to tape out a chip. Then we spend about $30 million in mask costs. If we make a mistake it is another $20-30 million for new masks and another 4 months in the fab for a new chip.

Google / SkyWater / eFabless have this program for 90/130nm chips. That is really old technology, from the 2002-2006 time range, but it is still useful for a lot of types of chips.

https://opensource.googleblog.com/2022/07/SkyWater-and-Googl...

https://www.skywatertechnology.com/technology-and-design-ena...

https://efabless.com/open_shuttle_program

I am curious what kind of hobbyist chips you want to make that can't be done in an FPGA? You can't do custom analog in an FPGA but these days you can find FPGAs with multiple PCIE / USB / HDMI serdes links, DAC, ADC, etc.


The problem with FPGAs isn't the hardware; it's quite complete and capable. It's the software stack you're forced to access most of them through that makes them a non-starter for many/most hobbyists (cost is a major issue, but hardly the only one). My guess is that as low-cost producers who have embraced the existing open source FPGA software start producing higher-end parts (more recent process nodes, larger LUT counts, etc.), that's when you'll really start to see them take off in the hobbyist world.

I think the OP isn't even talking about 2006-level capability, but rather something akin to 3D printing for semis that approaches even early (i.e. somewhere in the 1960s-1980s range) fab capability for hobbyists. Obviously it doesn't exist currently, but that's the dream for some.


The cost of the FPGA design tools isn't really the biggest barrier. The big barrier is that the FPGA design tools from the FPGA companies are full of bugs that you spend an inordinate amount of time trying to work around. Some open source tools like Yosys have emerged. So far they're limited to the iCE40 FPGAs from Lattice and a few older Xilinx parts. I've used them with the iCE40 and they do work well. Hopefully FPGA companies will come to realize the value of opening up so that 3rd-party tools can be developed for them.


> The big barrier is that the FPGA design tools from the FPGA companies are full of bugs that you spend an inordinate amount of time trying to work around.

Do they not have older, more stable, releases?


You cannot fathom how bad e.g. Xilinx's FPGA tool suite is. Beyond the fact that it's like some kind of Frankenstein monster, because they bought up a bunch of smaller tools and glued them together with fucking Tcl, any individual piece is worse than your worst open source compiler/IDE/whatever tool you use to build software. And yet people say that it's 10x better than the competitors' offerings. Hell on earth is being an FPGA dev without a direct line to a Xilinx employee for help debugging (which is how things really work professionally).

Just to lend some credence to myself here: Vitis, their supposed HLS offering that turns C++ into RTL by magic and genius and oracles, today embeds LLVM 7.0 (released 19 Sep 2018). Gee, I wonder how many perf and other improvements have landed in LLVM since 7, given that LLVM is approaching 17 today.

i could tell you more but it would just spike my blood pressure after 6 months of not having to deal with that mess.


Can confirm. I used Xilinx Vivado to compile for their FPGAs in Amazon F1 servers a few years ago. There came a point when each time I edited a single small Verilog file in my small project, the GUI took 40 minutes of 100% CPU just to update the file list in the GUI. That's before you could do anything useful like tell it to compile or simulate. The GUI wasn't useful until it finished this silly update.

I knew FPGA compilation could be slow, but this wasn't compilation, this was a ridiculously basic part of the GUI. I knew it had to read my file and analyse the few module dependencies, but seriously, 40 minutes? At that time I just wanted to run simple simulations of small circuits, which shouldn't be slow.

In (open source) Verilator, the same simulations ran from the same source files in just a few milliseconds with nothing precomputed or cached.

I looked into what Xilinx Vivado was really doing and found a log indicating that it was re-running a Verilog read on the changed file several thousand times, each time taking a second or so.

That was such a ridiculous bug for software Xilinx said they had spent over $500M developing. If there were good parts of the software I didn't get to see them due to this sort of nonsense. I think it was fixed in a later version, but due to (yay) enforced licensing constraints I couldn't use a fixed version targeting these FPGAs. That's when I abandoned Vivado and Xilinx (and Amazon) for the project as I didn't have that much spare time to waste on every edit.

I am under the impression the current crop of open source FPGA and especially open source ASIC tools are much better designed.


> I looked into what Xilinx Vivado was really doing and found a log indicating that it was re-running a Verilog read on the changed file several thousand times, each time taking a second or so.

Just reading that made me nauseous, to think this forced so many trillions of CPU cycles to be spent on nothing, and nobody at Xilinx corrected it until some time later.


I'll second this, but for Intel / Altera.

I started working with a Stratix 10 GX dev kit last year, and the Intel Quartus Prime Pro software. It's been a nightmare.

1. Unless you have a direct line to an experienced support engineer inside Intel, technical support amounts to pleading with new-hire support engineers who can't do more than try to look something up and regurgitate it back to you.

2. Sample designs don't work with the tools you have. A design might be for version 18 of Quartus and you have version 23. You can try to auto-upgrade the IP, and maybe 3/4 of the time it works. The other 1/4 of the time it's some obscure error you can't track down, so you're back on the "Community Site" begging for help.

3. Doing something like programming your design into flash on the S10 board involves lots and lots of research, including watching Intel produced YouTube videos.

I could go on, but the whole process is like running in wet cement.


> I could go on, but the whole process is like running in wet cement.

Just quit. Seriously, it's not worth it. No job you have (not even low-latency work in HFT) will compensate you enough to make investing time, blood, sweat, hair, and sleep into this work worth it. There is a better, more rewarding software job, with an actually transferable set of skills, waiting for you somewhere out there.


> i could tell you more but it would just spike my blood pressure after 6 months of not having to deal with that mess.

There was a time when I could've transitioned to FPGA development, and I was pretty keen to do so. But after using (fighting with) Xilinx's tools and watching other people who were deeper into the FPGA side wage similar wars, I decided it just wasn't worth it; that it would be better just to stay on the software side of things, where the tools are mostly open source and so much better. In FPGA development you just don't have any control over the backend tools. Fortunately there are a lot of front-end simulation tools like Verilator that are open source, and some limited backend tools like Yosys are open source too. Hopefully open source will continue to make inroads, but it's slow going.


No.


I get paid to do this professionally so I don't follow the hobbyist side but I've heard of people using this for FPGAs. I have no idea how it compares to commercial tools.

https://github.com/YosysHQ/oss-cad-suite-build

OSS CAD Suite is a binary software distribution for a number of open source software used in digital logic design. You will find tools for RTL synthesis, formal hardware verification, place & route, FPGA programming, and testing with support for HDLs like Verilog, Migen and Amaranth.
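For a sense of what that flow looks like end to end, a minimal iCE40 build with those tools might be something like this (the file names, top module name, and the UP5K part are illustrative assumptions, not from the comment):

```shell
# Synthesize Verilog to a JSON netlist with Yosys
yosys -p "synth_ice40 -top top -json top.json" top.v

# Place and route for a Lattice iCE40 UP5K with nextpnr
nextpnr-ice40 --up5k --json top.json --pcf top.pcf --asc top.asc

# Pack into a bitstream and program the board (Project IceStorm tools)
icepack top.asc top.bin
iceprog top.bin
```

All four tools ship in the OSS CAD Suite binaries, which is much of its appeal: one download covers synthesis, place & route, and programming.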


> I use software from commercial Cadence and Synopsys that has a list price of over $1 million for a single physical design tool license and we use about 200 of those licenses simultaneously to tape out a chip. Then we spend about $30 million in mask costs. If we make a mistake it is another $20-30 million for new masks and another 4 months in the fab for a new chip.

I've thought for a long time now that this is an area ripe for disruption. But it's very difficult to disrupt - it hasn't been yet. EDA software is probably the easier part to disrupt. Some open source EDA tools are out there, but not so much on the physical design side.


I've read there's actually some excellent ASIC open source design flows now, thanks to some open PDKs (notably Skywater) as well as recent funded research. I haven't tried the toolchains myself but they are readily available and you have courses like Zero to ASIC using them.

They don't target the advanced nodes where masks cost as much as the GP's prices (though $20-30M sounds higher than I expected, even at the leading edge), but they are working their way forward, and the space in general is being disrupted at last.


I've taped out 4 chips in 5nm, and our managers and VPs have said about $30 million for a full set of masks (base layers plus metal layers).

The $20 million is for metal layers only for a respin.

We are moving to 3nm right now which will be even more expensive.

The open source tools can't handle the latest process nodes. Maybe in the future but this is a highly specialized area with tons of NDAs from the fabs for PDKs and DRC decks.


> MCUs and cyber-physical systems was like this until the Arduino happened

I'd say it technically happened when cheap flash-based microcontrollers became available.

Before, you had to have *at the very least* an EPROM programmer and eraser (or a CRT TV, I guess) to even put your code onto a microcontroller. Flash-based microcontrollers meant all you needed was a parallel port and some wires to start programming microcontrollers, dropping the barrier to entry by orders of magnitude if you already had a PC.

Then we had the first wave, with the BASIC Stamp and similar, making it even easier.

Then it was Arduino that exploded it, and there were other factors in that too.

>So what needs to happen to make this a reality for semi-con? First off, we need cheap, cheap fabrication. I actually looked at public funding in Canada and how that was going to the big name Universities who had their own in-house fab labs (at older process nodes). The costs of someone not in the inside was nuts. The actual cost should be in the 100s of dollars to fabricate a design (considering the marginal costs).

Like that. Before, making a PCB meant either doing it at home, which took a bunch of time (and not everyone wanted to play with chemicals), or paying a lot to prototype one.

Now we have extremely cheap low-volume PCBs and even well-priced low-volume manufacturing options. Same thing with 3D printing: it went from massively expensive to affordable.

The problem really is that it's still a bit profitable for the companies doing it, while doing the same for chip-making would be oh so much harder, unless someone put some SERIOUS R&D into making an "industrial chip printer" where each wafer could have a different set of chips (packaging, I guess, could be handled by requiring each submitted chip to have connectors in the same place). So no mask, but some kind of DLP or similar projector to do the lithography (dunno if that's even possible for anything in the hundreds-of-nm range, just guessing).


https://developers.google.com/silicon might be of interest to you.


I don't believe Google is sponsoring any more free shuttles. Your best bet nowadays is ponying up $10k to get a tiny chip on sky130 (so small you really can't hold even mildly complex in-order RISC-V cores). Sky130 is also generally so old that you can probably get better performance (let alone area) on a modern FPGA for a fraction of the price. Efabless's sky130 MPW is nice for semiconductor research insofar as it makes it more realistic to actually fab designs but it's not particularly useful for hobbyists beyond just the novelty of holding a chip you designed in your hands.


There is the libresilicon project: https://libresilicon.com/

But I don't know how usable it is.


I think it's simpler than that. Semiconductor manufacturing is such an advanced field, and there are so few openings for engineers, that they can be very selective in who they hire. We don't need to conclude that semiconductor engineering is particularly difficult, it's sufficient to conclude that they are simply picking people who will work lots of unpaid overtime, simply for the thrill of working in such a cutting-edge field.


You missed the part where they do it for a fraction of the salary.

I suggest there's a kind of 'cultural imperative' in Korea that is so different from the US that it doesn't even begin to factor into our equations.

My Korean grad school colleagues went to intern at Samsung for serf pay; the only way that could work is if there was some kind of culturally implied merit.

The US also needs a 'different kind of ethos' to compete in semiconductor, Cali Surfer Vibes won't cut it, neither will NY banker vibes, neither will SF Social Media AI Allbirds vibes, nor Cambridge High Intellectual Healthcare Conference vibes, nor Texas Energy Industry or DC Government Consultant etc..

And that will be impossible without a fair number of migrants who will be mostly Asian - so where is this going to happen?

Maybe they need a 'New South Valley' about 50 km south of San Jose: a cross between Orange County, Silicon Valley, Cambridge, and Seoul (aka Cali, but slightly more formally regimented), a bit like the actual old-school Silicon Valley, which was at the time, away from the buzz, a bit of a sleepy burb where people focused on hard stuff and economics mattered.


> The US also needs a 'different kind of ethos' to compete in semiconductor, Cali Surfer Vibes won't cut it, neither will NY banker vibes, neither will SF Social Media AI Allbirds vibes, nor Cambridge High Intellectual Healthcare Conference vibes, nor Texas Energy Industry or DC Government Consultant etc..

If someone like Netflix/HBO did a proper documentary with Samsung about their LSI fab in Texas, HR would have more applicants than they would know what to do with. This 'ethos' can be created. I saw what it could be like when I worked there. I really believe seeing the whole thing in action is the primary selling point. The draconian security makes sense once you walk that catwalk and witness with your own 2 eyeballs what is at stake. Capturing this sensation for the masses is the challenge.


They don't pay enough; that's the point. Really smart and competent people don't work for half wages unless they are caught up in some kind of system like in Korea, where it's a culturally backed norm and frankly they can't really escape.

Americans will not work for fractional pay unless it's a kind of a 'wartime' / 'emergency' situation.


> The US also needs a 'different kind of ethos' to compete in semiconductor, Cali Surfer Vibes won't cut it,

Wait till you hear about the history of the semiconductor industry!


> the stakes are way too high to allow everyone a chance to play cowboy with the tools

This is really what pushed me into just doing digital design as a hobby and making my money elsewhere. It's understandable that chip companies are so risk-averse (a bad tapeout or, worse, a major escape can cost the company billions), but it is a really miserable experience to have so much passion and ambition and never be able to do anything other than extremely conservative incremental changes over a decades-long career.



