Intel Launches 10nm Atom Embedded CPUs: Elkhart Lake Now Available (anandtech.com)
84 points by rbanffy on Sept 23, 2020 | 104 comments


There is a buried lede: the Atom chips have an on-chip ARM core.

"This is a dedicated ARM processor, specifically an Arm Cortex M7, that supports real-time functionality, network synchronization, time sensitive networking, and low compute requirement workloads without needing to fire up the bigger cores."

It's interesting to see Intel recognizing this need and this limitation in this way.


Intel has a long, storied history with ARM, dating back to the late 90s. StrongARM was a great implementation of ARM.

https://en.wikipedia.org/wiki/StrongARM


I thought for a second you were joking about Intel strongARMing its OEMs against AMD...

They also had Merrifield and Moorefield back in 2014 for the smartphone market. Not as good as Snapdragons, but not terrible. Makes you wonder what would have happened if Intel hadn't pulled the plug.


Intel putting an ARM core on their SoC is like Windows putting a Linux Kernel in their OS ... wait a second.


This is a great idea. I have a BeagleBone Black that has an SoC with an ARM CPU for running Linux and a Texas Instruments proprietary real-time processor for controlling timing-sensitive applications.

This Intel solution has the potential to be more useful than the TI one because the ARM core has better compiler support than the custom chip TI made.


Hmmm...

Arm Cortex-M7 is completely different from Arm Cortex-A53. The A53 I see as a proper Intel Atom competitor; the Cortex-M7 is something I'd expect to find (typically) in a very low-end setting, like the controller for a mouse or keyboard.

Atom clearly shrinks down to Arm Cortex-A53 levels, but to pretend that Arm Cortex-A53 and M7 are similar is to be disingenuous.

----------

EDIT: Case in point, here's the first Cortex-M7 on Digikey that I found: https://www.digikey.com/product-detail/en/stmicroelectronics...

There's a HUGE difference between Arm-M (like the M7 in the Intel chip) and ARM-A cores (like the A76 or A53 in your phone).


> There's a HUGE difference between Arm-M (like the M7 in the Intel chip) and ARM-A cores (like the A76 or A53 in your phone).

The M7 core is for real-time code, power and sleep management, and offloading small processes that would waste too much power waking up the main cores. It's not meant to be powerful; it's meant to be ultra low power.

I don't know why you'd expect it to be an A53 or A76 core. Those are much more complicated, power hungry, and are made redundant by the x64 cores.


> I don't know why you'd expect it to be an A53 or A76 core.

Your reading of my words seems mistaken. At no point do I allege that Intel should be using a Cortex-A core. I'm simply pointing out that Cortex-M and Cortex-A are grossly different.

Read the thread again: many people here are confusing M and A cores, as if all ARM chips were the same or something.


Intel used to have the Quark chip to compete with these embedded ARM cores.


And before that, Intel had XScale ARMs.

Intel decided to leave the low-margin microcontroller business a long time ago. ARM may have gobbled up the market, but there's an entire graveyard of companies who were unable to make it in that highly competitive market.

The Cortex-M world is dominated by peripherals, more so than core performance. A faster or more-accurate 12-bit ADC is what makes your company live or die (or STM's "op-amps on board", which reduce the need for external op-amps: a single op-amp plus a Cortex-M3 chip is all you need). Integration is key, not so much performance or even power-efficiency.
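To make the "integration is key" point concrete, here's a minimal sketch of reading one of those on-chip 12-bit ADCs on an STM32 Cortex-M part through ST's HAL (assuming the usual CubeMX-generated init code has already configured the hadc1 handle):

    /* Read one sample from the on-chip 12-bit ADC via the ST HAL.
       Assumes generated init code has configured hadc1. */
    #include "stm32f3xx_hal.h"

    extern ADC_HandleTypeDef hadc1;   /* set up by the generated init code */

    uint16_t read_adc_once(void)
    {
        HAL_ADC_Start(&hadc1);
        /* Wait up to 10 ms for the conversion to finish. */
        if (HAL_ADC_PollForConversion(&hadc1, 10) == HAL_OK)
            return (uint16_t)HAL_ADC_GetValue(&hadc1);  /* 0..4095 */
        return 0;
    }

The core barely matters here; the ADC behind those three calls is the product.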

--------

In any case, I stand by my primary claim: ARM Cortex-M chips do NOT compete against Intel in any capacity. They're a completely different market. Intel doesn't have the technical expertise needed to make ADCs, timers, or op-amps like, say, ST Micro or TI (lower-power op-amps, higher frequency, lower input current, lower output impedance, compatibility with a wider variety of voltages from 1V to 5V, etc.).


They also have the Puma line of DOCSIS cable modem SoCs, but those are kind of infamous right now for some nasty bugs.


I think the issue here is that 'workloads' in the original piece is not meant to refer to user workloads - and no comparison is made with the A53 in the article.

Incidentally, it's interesting that Intel can use a low-margin Arm core to enhance its offering that competes against higher-margin Arm cores. It will be interesting to see if this sort of thing survives the Nvidia takeover.


Seems really weird.

How is an OS supposed to support that? Seamlessly transition processes to run in qemu-x86 on the ARM core? Compile some processes as ARM binaries and pin them to the core? Require some sort of data-layout-preserving dual-architecture compilation for all binaries?

Seems much more reasonable to add a low-power in-order x86-64 core instead.


A core like the M7 would be invisible to the OS entirely.

The M7 core is likely involved in the bootup process. Modern CPUs are so complicated that you need another microprocessor for assistance to boot the darn thing.

Things like DDR4 initialization, PCIe initialization, SATA initialization (woops, this computer doesn't have any SATA drives, time to turn the attached M.2 drive into a SATA drive... wait, no M.2 drive either. I guess the motherboard wants to boot through PXE, which requires the network controller to be initialized). Etc. etc.

Even something like reading from NAND Flash requires a complicated initialization dance, where a microcontroller would be useful.

I admit I'm mostly ignorant on the bootup process of modern chips: but I understand that they're very complicated beasts now.


Spot on. Even the power supplies providing all the different power domains have to be brought up in a precise sequence. Companies like Marvell usually sell a suite of power management chips just to deal with that - and extract more money out of customers, because no one can be bothered to stray too far from the reference design.


The ARM M7 is a microcontroller-class processor. It's a high-end one, enough to comfortably run Python for example, but it's not an application-class processor.


> Seamlessly transition processes to run in qemu-x86 on the ARM core?

No, this thing runs its own firmware. It's a Baseboard Management Controller on-die, basically.


> How is an OS supposed to support that?

Theoretically, if you can share memory between cores of different architectures and are careful to compile everything so endianness is the same and padding lines up, shared state means you could hand control over to the other core.
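"Careful to compile everything so endianness is the same and padding lines up" in practice means pinning the layout by hand. A sketch of what that shared state could look like (field names are made up):

    /* Layout-pinned shared state for a cross-architecture handoff:
       fixed-width types, explicit alignment, and a compile-time size
       check so the x86 build and the ARM build agree on every offset. */
    #include <stdint.h>

    struct shared_state {
        uint32_t magic;         /* version/sanity marker */
        uint32_t seq;           /* writer bumps it; readers poll it */
        uint64_t timestamp_ns;  /* naturally 8-byte aligned, no hidden padding */
        uint8_t  payload[240];
    } __attribute__((packed, aligned(8)));

    /* Both compilers must agree on this, or the handoff is unsafe. */
    _Static_assert(sizeof(struct shared_state) == 256, "layout mismatch");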

Realistically, I bet this is more like the PS1 chip in the PS2, just on the same die.


This is basically the use case for the Programmable Real-Time Units (PRUs) in the AM3359. You write bare-metal code to run on those processors and then use shared memory to communicate with host processes running in Linux on the application processor. The PRU lets you control peripherals without the timing jitter of a non-realtime OS like Linux.
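For a flavor of the host side: one common route on the BeagleBone is not raw shared memory but the rpmsg channel that TI's PRU support package sets up, which makes the Linux half plain file I/O. A rough sketch, assuming the stock rpmsg_pru driver and a PRU firmware that answers on /dev/rpmsg_pru30 (as TI's examples do):

    /* Host-side sketch: talk to PRU firmware over rpmsg. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/rpmsg_pru30", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        write(fd, "ping", 4);                   /* whatever command the firmware expects */

        char buf[512];
        ssize_t n = read(fd, buf, sizeof buf);  /* blocks until the PRU answers */
        if (n > 0)
            printf("PRU replied: %.*s\n", (int)n, buf);

        close(fd);
        return 0;
    }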


This is targeted at embedded so anything for that core will only run on that core. Thus the OS needs to know about it because it needs to start that core and stay out of memory that core uses.
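(On ARM Linux systems, the "stay out of that memory" part is typically declared up front in the device tree; a sketch, with made-up names, addresses, and sizes:)

    /* Reserve RAM for a coprocessor so Linux never touches it. */
    reserved-memory {
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;

        rtcore_mem: rtcore@9f000000 {
            reg = <0x9f000000 0x100000>;  /* 1 MiB for the RT core */
            no-map;
        };
    };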

I have often wished my embedded system had separate CPUs for the embedded control (real-time requirements, bad things happen if it crashes) and the user interface (needs to be pretty with icons, but who cares if it crashes).


I’ve worked with systems which did this with two processors talking over SPI. The difficulty with integrating the real-time processor via shared memory is ensuring the real time processor has exclusive memory access to the peripherals it’s using. The control registers might be helpful here provided the bootloader can be trusted to configure them and the rest of the OS kernel doesn’t mess with them at startup. The client PRU is entirely at the mercy of the main OS. So it’s hard to argue that you have perfect separation here like you can when you have separate chips talking over a bus. You may also run out of pins to mux peripherals through as display functions and IO can take a lot of pins.


These cores - and any modern computer has quite a few - run their own OSes and software that's usually packaged as blobs the OS loads during driver initialization.


It just means that you don't need an external microcontroller. It's more about having a fully integrated solution than some ARM vs. x86 thing. Nobody is using x86-based microcontrollers anyway.


Adding a low-power ARM core to an ARM SoC is the norm. See all these "big.LITTLE" configurations.


This isn't a big.LITTLE situation. The controller is there to manage board state without waking up the CPU, as extremely commonly found in server- and embedded-class deployments. They've probably even got a document describing how to drop in your favorite IPMI software (be it Dell's DRAC or HP's iLO, etc.).


> ARM core to an ARM SoC

The surprise is something else


The bigger question is what's the use of the x86 cores there, if Atoms are well known to be steamrolled by just about any high-end ARM core.


It's a microcontroller.


I mean, if they already went through the pain of integrating ARM-ecosystem SoC components, they could've put in proper ARM CPU cores too.


I wonder if Intel is aware that the Atom brand is pretty sullied. I'm surprised they haven't rebranded it. They've had some decent ones, like the C3000, but people do associate the name with underwhelming performance.


I use an Atom C2558 as a file server. It runs under 15 watts, has no fan, has 32 GB of ECC memory and quad gigabit Ethernet ports, and runs ESXi hosting a Debian ZFS file server.

It’s perfectly balanced. It can read and write at full gigabit speed, with ZFS encryption, and the CPU is ~90% utilized.
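(For anyone wanting to try the same setup: native encryption is a one-liner on OpenZFS 0.8+. Pool and dataset names here are just examples.)

    # Create an encrypted dataset; you'll be prompted for the passphrase.
    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/files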


I think it's the early versions that tarnished Atom's name.

I have an ancient Atom 330 still chugging away as my Asterisk box. It's on an old Intel Mini-ITX board, the only Intel-branded motherboard I've ever had that didn't die from capacitor plague.

It's hilarious to realize that a Raspberry Pi 4 will run circles around it. The 330 can't saturate a GbE port, and even the SSD pokes along at the 1.5 Gbps SATA speed. If I didn't have a PCI telephony card I wanted to keep, I'd replace it with a Pi.


> It's hilarious to realize that a Raspberry Pi 4 will run circles around it.

Is it?

The Atom 330 was released in 2008 (as one of the cheapest, weakest, lowest-power cores Intel would produce at the time). The Raspberry Pi 4 was released 11 years later, on 24 June 2019.

I bet you that the Intel Atom 330 ran circles around any chip from 1997.


I have a "netbook" (I mean, it is what it is) with a Celeron N4000. For email, light browsing, and watching online videos it's more than enough, and even manages to pull 10 hours of battery life, fanless and barely feels warm to the touch if "stressed" for a long while.

That Celeron is a Goldmont Plus, which is a "hybrid" of previous Atom designs with a few rehashed things from Skylake. Not a bad processor at all, though it lacks AVX (so SSE4.1 is the strongest SIMD it has) and absolutely crawls if you try to do anything number-crunchy on it.


The N4000 at least is out-of-order. I'm pretty sure the 330 was the early core that was (to my understanding) more like a P5 than a P6 in design: in-order, but with HT.

A lot of low-power cheapie CPUs from the era of the N4000 have similar trade-offs, if it makes you feel any better. I had a 4-core AMD Jaguar-based thing from that era and it was similar, except that while number crunching was decent, the single memory channel kneecapped anything that taxed the integrated video card or CPU too much.

I will still never understand that decision for Jaguar on mobile. If it was a cost concern, why not bin? Or was the worry that with 2 channels it would make Bulldozer look worse than it already did? (My personal theory, based on my experience with a 2-core 3GHz 'dozer part...)


I am hoping QNAP will release a new NAS with the new Atom: 2.5Gbps Ethernet, ZFS support from the new OS.

Hopefully they update their NAS design a bit so it doesn't look as bad.

I would be interested to know how much these new Atoms cost.

But now I need to sit down and think about why Intel decided to use 10nm SuperFin for Atom, a low-margin product, at this stage of the cycle. My guess would be that yield is great, but that ignores the server parts and the rest of the 10nm roadmap.


I’m pretty sure an Intel Atom cannot reach gigabit speeds with ZFS encryption.


That's part of the confusion. The C2XXX and C3XXX aren't typical Atoms. They use the brand for an odd range of chips.


I always thought the C series should have been branded as something like Xeon E1 to indicate that they're server chips. And now Intel has changed it up again, with Snow Ridge branded as P5900 instead of C4XXX.


Based on?

Have you ever used one? They have hardware AES, and have had since 2013.


I'm not making it up: https://gist.github.com/jeffmccune/0f381a69e0a8111c3cac88291...

WRITE: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=10.8GiB (11.6GB), run=60067-60067msec

Correction from above, though: it's an Atom C2758.
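(Those numbers are consistent with a 60-second sequential-write run; something along these lines should approximate it, though the exact job parameters are in the gist.)

    # Rough reproduction of the benchmark above; exact flags are an assumption.
    fio --name=seqwrite --rw=write --bs=1M --size=16G \
        --runtime=60 --time_based --directory=/tank/test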


I agree that the Atom brand has been sullied, but I've often wondered if they weren't "bad" per se but were used in the wrong kinds of products. I got the impression they were better suited for an ATM or a vending machine that Tweets or something -- but not great for a full Windows environment, even if the user is not doing anything resource-heavy. I feel similarly about Celeron. Marketing != practical use for these CPUs, IMO.


> I got the impression they were better suited for an ATM or a vending machine that Tweets or something

Gambling machines too. There's a market for used small Atom-based motherboards previously employed in betting and slot machines. They're often industrial-grade, really good quality hardware with plenty of serial ports (even where not needed), and if you're lucky enough to find one with enough SATA ports, they also make excellent NAS boards. Linux support is usually complete since they're mostly used with that operating system. I had no surprises using one of them with the BSD-based NAS4Free.


They're definitely more suited to headless operation. I have two old Atom netbooks, and both were much better running Ubuntu Server. One of them -- one of the earlier Atoms that only had hyperthreading and originally ran Windows XP -- was a print server for several years (until the fan became grindy and the keyboard stopped registering input due to 24/7 operation with the lid closed).

With Windows -- either XP Home or 7 Starter -- they both barely chugged along, not to mention battery life.


Yeah, they have used it all over, including the C2XXX and C3XXX for server platforms.


I used to have an Atom-based server used as an SSH bastion for a rack-internal IPMI/LOM network, and it worked great. But that’s literally all it was used for.

Sometimes it’s nice to have a small server for a dedicated purpose. But would I have considered an Atom for anything else? No way.

Then again, I also didn’t consider anything but an Atom for this job.


For me, it's getting hard to consider anything other than Pi for these kinds of tasks. Yes, I could install an ancient, low-spec machine, but it'd use more power and be more bother.

The biggest annoyance is that the Pi has only one onboard Ethernet port.


Now that you can boot a Pi from USB, I could see this point of view. I’ve had enough SDs die on me in a Pi that I wouldn’t trust an SD-driven Pi for anything where I’d need to rely on it for remote access. I wish there was something in between Pi and Atom for this use case.


> a vending machine that Tweets

A vending machine where you let it post sponsored content to your twitter in exchange for a "free" soda.


In my experience they also work quite well in Chromebooks.


Among whom? Not the people to whom they market these parts. People who are designing embedded systems with 24 low-power cores and integrated programmable 100Gbps NICs don't care if the people on some overclocker forum don't like the Atom brand.


I think the defect that causes Atom C2000 processors to fail after 18 months comes to mind, both for businesses that bought the equipment and for the equipment manufacturers that used them.


This batch of chips looks like it was designed with a specific customer in mind who was going to buy a bunch of them. They then tossed it over the wall and said others can get it too. Probably some sort of IoT-type situation where power is a concern, with an occasional burst of work. Something where you are 95% idle, and in the other 5% you want a bit of performance within a particular power budget. Something like a TV, a cable box, or a wake-up, sample, send-data, go-to-sleep situation.


They are probably referring to pre-Silvermont (pre-2013) Atoms, which were in-order designs.


Yes, that's why they brand some of the top SKUs Pentium and Celeron. But these are for embedded uses where branding is less important than actual features/performance.


In the target market "Atom" means "low cost" which is exactly what Intel wants.


This isn't right, either. In Intel's lineup the Atom brand is the most expensive brand using these cores. The Pentium and Celeron branded parts are the cheaper and cheapest ones, respectively. "Atom" is basically the "Xeon" of its family.


I see it more as "really low power".

I've got a quad-core Atom mini PC running Plex and general file-server duties, and the thing is not only passively cooled but sips energy.


On the other hand, it could be a form of price anchoring for their more expensive chips. In which case, a slightly tarnished reputation is actually a plus.


Perhaps. It's confusing, though, that there are Atom server chips that are complete garbage, then some that are actually pretty decent.


And awful Linux support. Many people with x86 tablets hate them.


I can't speak to your tablet use-case, but as a general statement Atom is absolutely fine, even great, under Linux.

Intel has actually long had some of the better Linux support in the market in general, especially when compared to the early Zen hardware, which needed a few years to build up critical mass (and of course it dumps all over the ARM-based experience; having a non-x86 processor is several notches upwards in complexity/difficulty). In particular they were an early mover on the open-source graphics driver front (several years ahead of AMD), and their Linux driver has been the gold standard for stability/spec compliance for a long time. The chips underneath aren't the fastest, but the driver is clean and sensible.


The era where it was bad, though, it was phenomenally bad. Before the present era of Intel-designed graphics cores in Atom chips, there was a range of Atom chips in some netbooks and tablets that used PowerVR graphics, and those machines were absolute horror shows.


Fair enough, I guess that's the difference then. I've always used the ones with Intel graphics.


Linux support for x86 tablets has improved a lot as of late. There are a few problematic chipsets, e.g. with PowerVR based graphics, but they're rare.


The PowerVR issue was much bigger when netbooks were all the rage. They were usually sold with XP Home running on 256-512MB RAM, which was barely enough to run the base OS, let alone a single program. The first thought was to toss Linux on the system, but you were immediately thwarted by missing graphics drivers. So you had to run in VESA mode, resulting in worse performance than Windows XP. A shame, because a friend had a nice Toshiba that was completely useless from the day he bought it and wound up tossing it after just a year. So many useless netbooks...


This. I don't think someone designing an ATM cares if the CPU can run Crysis.


The Atom is like the 737 MAX.

American car manufacturers have shown no interest at all in selling small cars but have been forced to by regulators. Thus the Nova, Chevette, Cobalt, Sonic, Neon, Gremlin and other names you don't remember.

Honda and Toyota could not stop making the Civic and Corolla if they tried -- if they did stop for a year they would see people buying 3 year old cars at new prices and be shocked at the money they are leaving on the table. (Honda in particular has been trying to phase out the Fit for the CR-V but they may not have the discipline to be able to do it.)

Atom is a Chevy Nova that is being marketed as if it were a Toyota Corolla. Crap like that should be forgotten in six months, not waved around to remind you how bad it is.


> Thus the Nova, Chevette, Cobalt, Sonic, Neon, Gremlin and other names you don't remember.

Three of these models have seen refreshes and a resurrection in the market in the last generation. Dunno if your intent was to suggest they don't exist anymore and were forever lost to history, but they most definitely were not.


They are also forced to by market realities (which you bring up). They can't just ratchet up prices to where they would get the same margin as on an SUV; many people can't afford that.

The big 3 made a conscious decision to get rid of their cars because they constantly had to offer incentives to make them sell. Compare and contrast with the Japanese makers, who tend to be less likely to do so. IMO they weren't selling as many Fits as they wanted, and my gut also tells me the new trade agreement may have had an impact (I will admit I haven't dug into it and checked the foreign content and whether it would be out of range).

Heck, Subaru actually moved their US Impreza production to be domestic. If a few players stick to the car market for a while I feel like they will be rewarded for doing so. Not everyone wants the extra cost of an SUV.


Really?

When I go to a "car" dealer it seems the cars are all sold out but they have a long line of SUVs on deep discounts.

My dad would go to American car dealerships in the 1970s and they would refuse to sell him a small car. Every time I've been in an American car dealership since, it has been the same story, except for a small window after 2008 when you might find a small car.

Last time I went to a Honda dealership the Fits were all sold out because the factory washed out but they had plenty of CR-Vs made in the same factory.


US manufacturers can’t make small cars profitably with their labor costs, but are required to sell them because of CAFE. They are only going to make enough to reduce their CAFE costs, no more.


Right.

So I am sick and tired of the media repeating that "Americans don't want to buy small cars" when the fact is that "American car manufacturers don't want to sell small cars."


The problem is the big three have massive pension costs inflating their assembly costs. For them, it makes perfect sense to focus on larger cars, SUVs, and trucks, where the cost of assembly is a much lower proportion of the sale price.


It's probably time to reevaluate that opinion. Apollo Lake/Gemini Lake are extremely competent little chips. The latest uarch puts them at around Core 2 Quad performance, perfectly acceptable for light desktop usage scenarios.

I've been using one all summer for web browsing/Citrix/etc. so I don't have to run my big gaming rig in the summer heat, and it's been great. There's a seller on Amazon who has been clearing out the NUC7PJYH (J5005-based) off and on for the last year or so for $125, and I've picked up a couple. Despite the official spec they do support 16GB of memory, and they also have the new media block with HDMI 2.0b support and HEVC 10-bit decode; they are super great for the price.


Funny you said that about the Chevy Nova...

https://en.wikipedia.org/wiki/Chevrolet_Chevy_II_/_Nova#Fift...


It is the genius of GM that they sold rebadged Toyotas, and also that they did a joint venture to bring their carmaking up to the Japanese level.


I remember the Chevette being a very popular car that was partly responsible for bridging GM out of a sales slump in the 70s. Two oil shocks did a real number on the market for heavy cars.


I'm not really old enough to remember, but as I recall the Chevette was sold at a slight loss because it got the fleet fuel mileage numbers up, allowing GM to sell more expensive gas guzzlers. Of course, profit is weird and it is hard to account for everything.


The Chevette was hugely popular in Europe as marketed by Opel and Vauxhall, part of the GM empire. Effectively it was an Opel sold in the US rather than an American car sold in Europe.


I think part of the problem is that so many North American Chevettes (and their Pontiac equivalents, the 1000 and Acadian) were sold with a 3-speed automatic transmission that made them unbelievably slow:

https://www.youtube.com/watch?v=oMsXLYFU0pU&feature=youtu.be...

I generally don't care too much about performance. I was completely happy with the performance of my '85 Toyota Tercel with 65hp. But I don't think I could handle 0-60 in 30 seconds...


Yes, that's just too slow. The same car with a stick shift did just fine.


The Sonic is also an Opel designed platform. https://en.wikipedia.org/wiki/GM_Gamma_platform


I wonder how much of a difference it would make if Intel made and sold boards that were identical in size and price to the Raspberry Pi and were full systems for hacking and/or IoT development. I think that would be a wildly popular item.


Did you hear of the Edison board? It sank like a stone. And, as with many things, Intel lost interest.

Intel is culturally incapable of doing long-attention-span products. They only succeed with fire-and-forget products. Sustained effort over the long haul is just not in their DNA.


It was more expensive, though. Edison and Galileo were overpriced and feature-poor - you needed to add your own serial console, which, compared to an RPi (and the majority of these SBCs), is a major nuisance.


> It was more expensive, though

Exactly Intel's problem with these things. They are always too expensive compared to the competition. It's the #1 reason they failed in mobile, too. Their similar-performance Atom chips cost 2x more than the highest-end Arm chips.


It's so much this. Intel showed up with a proprietary 70-pin fine-pitch SMT connector and a 1.8V power requirement and sold the Edison to high-level designers, when the market they were after (the Rasp-Pi folks) had extremely easy-to-integrate fat header pins that are 5V-tolerant and beginner-friendly. You needed a breakout board and hours of tinkering to get the thing to a state that you get out of the box from the ARM offering.

Galileo was as close as they came to a home run, though. It really was their run at a Rasp-Pi-alike, but it was almost twice as expensive, wasn't compatible with the Rasp-Pi headers (though it oddly was with Arduino's, which seemed like a real design impedance mismatch), and was lacking numerous peripherals like video out.

They had a similar problem with that whole Maker-targeted series of chips - they couldn't seem to decide if they wanted to capture the ease of use of Arduino or the deep integrator capabilities of these target-specific MCUs and really failed to split the difference.

They also failed to take into account that they were very much the minority player and any sort of entry into those markets would be a war of attrition and not just overnight success... and the billion dollar juggernaut did what billion dollar juggernauts do to side projects that aren't overnight successes - they killed it before anyone could even develop competence working with them.

Both Edison and Quark really had niches they could have well serviced, if they had given it a real try... but it was very apparent that management wasn't interested and the designers didn't understand who they were targeting.


They don't fire-and-forget their established products (desktop, laptop, server); they really take care with the platform's evolution and sustainability compared to AMD.

Strangely, it seems they can't do that in newer fields.


They priced it like any other dev board (which is maybe how they thought about it?). It was comparatively unaffordable for hobbyists. They also didn't advertise it very well, as far as I know. I only knew about it because of DigiKey.


If they wanted to stem the ARM tide they would do exactly this. Unfortunately they don't seem to be able to see the value in anything beyond direct profit ventures. The Pi foundation has deep ties to Broadcom, so Intel would have to start something from the ground up. I still think it would be immensely worth it.


I don't think the Pi was the major misstep for Intel. It was the low-power device revolution that led up to cellphones. Silicon has become a commodity business. Companies can pretty easily get a contract set up with one of the major foundries and begin cranking out chips in mere months. While Intel was focused on larger and more complex chips aimed at high-performance compute, others were focused on battery-powered small-form-factor devices. Once ARM became the de facto standard for these, it was only a matter of time before Intel's building its own foundries became a liability.

History doesn't always repeat, but it sure does rhyme. There are a lot of parallels here to Microsoft and the PC industry.


They just need a reference design that can be manufactured and sold for profit in the $20-30 range.

And make sure their fabs can deliver at volume without breaking them, which is iffy - every Atom wafer sold is one less Xeon wafer to sell.


The Intel Edison, Galileo, Curie, and Joule boards were roughly similar to what you propose. They were all discontinued in 2017.


Those boards were the right form. The problem they did not realize was that their price was 5x what it needed to be to compete.


And, IIRC, the Quark X chip in the Galileo board sold for like $5 in quantities.

I've been thinking of how to build educational clusters with chips like these (the current candidate is the Octavo SoM that has a TI Sitara core, RAM, some flash, an SD controller, and two Ethernet ports) that could have a couple dozen nodes, connected by one or two Ethernet switches, all assembled on a single PCB. A bit like the Pine64 SOPine clusters, but with more, simpler nodes. The built-in flash can be used to network-boot the node, eliminating a large cost in external flash.


Exactly. They shouldn't be designed to be a major profit product. They should be sold at cost to support the ecosystem.


None. No, seriously.

The biggest advantage of RaspPi is the Linux support. Install and play.

Atom? Only recently has it started to run without hopelessly freezing or a whole lot of patches on top of the mainline kernel.


Every time I see an Intel 10nm announcement headline I get very excited, followed very quickly by getting very disappointed.

When will they finally launch some high-performance 10nm?


The next desktop CPU family is Rocket Lake. It is rumored to be released at the end of 2020 or in the first half of 2021, and it'll use the 14nm process. After that it'll be Alder Lake, which is expected to launch in the second half of 2021 and will use the 10nm process.

Considering all the delays and overall state of the world, I wouldn't be surprised to see further delays.


Absolutely agree. I've wondered if Intel was just being lazy about prioritizing improvements to their silicon process and got snuck up on by AMD, thinking they effectively had no competition.


It's really TSMC doing the advancing and AMD (and pretty much everyone else in the world aside from Samsung) just along for the ride!

I just started reading more about the real differences between Intel 14nm and TSMC 7nm and found this article comparing the actual sizes of the transistors.

https://hexus.net/tech/news/cpu/145645-intel-14nm-amdtsmc-7n...

Interesting how similar they actually are, though TSMC's 7nm is clearly smaller - just not by as much as the numbers would suggest.


They had no competition for a long time. After Opteron, and before maybe Zen 2, AMD was not competitive.


I've been spoiled by the last few years of AMD CPU launches. This 'news' harkens back to the pre-Ryzen era. Freshly painted walls at the DMV.



