As an example, take a divide instruction. A machine without an FPU can emulate a machine that has one. It will legitimately have to run hundreds or thousands of instructions to emulate a single divide instruction, so it will certainly take longer.
That's OK; it just means the emulation is slower there than for something like add, where the host has a native instruction. In 'emulator time' you still only ran one instruction. That world is still consistent.
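The many-host-instructions-per-guest-instruction point can be sketched with a software divide. This is an illustrative shift-and-subtract (restoring division) loop, not any real emulator's code; the function name is made up:

```python
def soft_divide(dividend: int, divisor: int) -> tuple[int, int]:
    """Unsigned divide using only shifts, compares, and subtracts,
    the way a host without a divide instruction might emulate one.
    One guest 'div' becomes a loop of many host operations."""
    assert divisor != 0
    quotient, remainder = 0, 0
    # Walk the dividend's bits from most to least significant.
    for bit in reversed(range(dividend.bit_length())):
        remainder = (remainder << 1) | ((dividend >> bit) & 1)
        if remainder >= divisor:
            remainder -= divisor
            quotient |= 1 << bit
    return quotient, remainder
```

A single call does dozens of Python-level operations, yet from the caller's point of view it's still "one divide", which is the consistency the comment is describing.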
? That's not how Windows on ARM emulation works. It uses dynamic JIT translation from x86 to ARM. When the translator sees, e.g., `lock add [mem], reg`, presumably it'll emit an `ldadd`, but that has different semantics if the operand is misaligned.
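The semantic gap can be modeled in a few lines. This is a toy model, not translator code: x86 `lock add` stays atomic even on a misaligned address (a split lock), while ARMv8.1 `LDADD` requires natural alignment and raises an alignment fault otherwise. Both function names and the byte-array "memory" are made up for illustration:

```python
def x86_lock_add(mem: bytearray, addr: int, val: int, width: int = 4) -> None:
    # x86 `lock add` works (atomically, via a split lock) at any alignment.
    cur = int.from_bytes(mem[addr:addr + width], "little")
    cur = (cur + val) & (2 ** (8 * width) - 1)
    mem[addr:addr + width] = cur.to_bytes(width, "little")

def arm_ldadd(mem: bytearray, addr: int, val: int, width: int = 4) -> None:
    # ARMv8.1 LDADD requires natural alignment; a misaligned address faults.
    if addr % width != 0:
        raise RuntimeError("alignment fault")
    x86_lock_add(mem, addr, val, width)
```

A naive one-to-one translation of `lock add` to `ldadd` would turn a working (if slow) misaligned atomic on x86 into a fault on ARM, which is why the translator can't just pattern-match instructions.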
It’s funny that even cloud hosting and such didn’t drive more responsible resource usage.
We could easily get 10x-100x higher utilization out of cloud servers if the software were leaner.
Imagine a single server hosting 10,000 VDI instances concurrently with high performance. Sounds insane, but Windows, Word, and Excel were usable on a 16MHz 386 with 2MB RAM and 20-40MB storage.
Systems today are literally 10,000x as powerful (without even getting into CPU architecture and cache improvements).
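A rough back-of-envelope supports the 10,000x figure. The modern-server numbers here (clock, core count, RAM) are my own assumptions, not anything from the thread, and the arithmetic deliberately ignores IPC, cache, and SIMD gains, which only widen the gap:

```python
# 16MHz 386 with 2MB RAM (from the comment above).
old = {"clock_hz": 16e6, "ram_bytes": 2 * 1024**2}
# Assumed modern server: 4GHz, 64 cores, 512GB RAM.
new = {"clock_hz": 4e9, "ram_bytes": 512 * 1024**3}
cores = 64

clock_ratio = new["clock_hz"] / old["clock_hz"]  # 250x per core
compute_ratio = clock_ratio * cores              # 16,000x, clock * cores alone
ram_ratio = new["ram_bytes"] / old["ram_bytes"]  # 262,144x
```

Even this crude count lands above 10,000x on compute, and RAM has grown by a factor of a quarter million, so 10,000 frugal VDI instances per box is at least arithmetically plausible.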
The second-to-last sentence I copied over talks about after 10 years: basically, they have to provide the know-how to third-party tool makers and repair technicians, and this settlement makes that more certain (as I read it).