Hacker News

Seems really weird.

How is an OS supposed to support that? Seamlessly transition processes to run in qemu-x86 on the ARM core? Compile some processes as ARM binaries and pin them to the core? Require some sort of data-layout-preserving dual-architecture compilation for all binaries?

Seems much more reasonable to add a low-power in-order x86-64 core instead.



A core like the M7 would be invisible to the OS entirely.

The M7 core is likely involved in the bootup process. Modern CPUs are so complicated that you need another microprocessor for assistance to boot the darn thing.

Things like DDR4 initialization, PCIe initialization, SATA initialization (whoops, this computer doesn't have any SATA drives, time to turn the attached M.2 drive into a SATA drive... wait, no M.2 drive either. I guess the motherboard wants to boot through PXE, which requires the network controller to be initialized). Etc. etc.

Even something like reading from NAND Flash requires a complicated initialization dance, where a microcontroller would be useful.

I admit I'm mostly ignorant on the bootup process of modern chips: but I understand that they're very complicated beasts now.


Spot on. Even the power supplies providing all the different power domains have to be brought up in a precise sequence. Companies like Marvell usually sell a suite of power management chips just to deal with that, and to extract more money out of customers, because no one can be bothered to stray too far from the reference design.


The ARM M7 is a microcontroller-class processor. It's a high-end one, enough to comfortably run Python for example, but it's not an application-class processor.


> Seamlessly transition processes to run in qemu-x86 on the ARM core?

No, this thing runs its own firmware. It's a Baseboard Management Controller on-die, basically.


> How is an OS supposed to support that?

Theoretically, if you can share memory between cores of different architectures and are careful to compile everything so endianness is the same and padding lines up, shared state means you could hand control over to the other core.

Realistically, I bet this is more like the PS1 chip in the PS2, just on the same die.


This is basically the use case for the Programmable Real Time units in the AM3359. You write bare metal code to run on those processors and then use shared memory to communicate with host processes running in Linux on the application processor. The PRU lets you control peripherals without the timing jitter from a non-realtime OS like Linux.


This is targeted at embedded use, so anything built for that core will only run on that core. The OS therefore needs to know about it: it has to start the core and stay out of the memory that core uses.

I have often wished my embedded systems had separate CPUs for the embedded control (real-time requirements, bad things happen if it crashes) and the user interface (needs to be pretty with icons, but who cares if it crashes).


I’ve worked with systems that did this with two processors talking over SPI. The difficulty with integrating the real-time processor via shared memory is ensuring the real-time processor has exclusive access to the peripherals it’s using. The control registers might help here, provided the bootloader can be trusted to configure them and the rest of the OS kernel doesn’t mess with them at startup. But the client PRU is entirely at the mercy of the main OS, so it’s hard to argue you have the clean separation you get with separate chips talking over a bus. You may also run out of pins to mux peripherals through, since display functions and IO can take a lot of pins.


These cores, and any modern computer has quite a few, run their own OSes and software that's usually packaged as blobs the main OS loads during driver initialization.



