A tragic incident shook France recently: an unsupervised 6-year-old entered a NICU, picked up a premature baby and dropped her on the floor. She died of her injuries a few hours later.
The same questions are being asked: how come anyone can enter a NICU? How could the parents let an unsupervised child roam the hospital? How come no one intervened? The worst part is that other parents had complained about the unsupervised child the day before.
Failures all along... that's often how accidents happen.
I wish there were a solid way to balance the weight of a tragedy (sans the kneejerk human emotional reaction) against the proposed solution.
Freak accidents will always happen, and if mitigation is simple and cheap, we should do it. But as soon as we get into the territory of "NICU doors need to be locked with keycard access" (causing every doctor and nurse to do a badge scan 40-50 times a day) then I think it's ok to have 1 infant death every 50 years globally because of it.
My rule of thumb for any big organization (like a hospital) is that nothing changes until there's a body to explain away.
Yeah, sometimes enough fractional close calls add up (usually to a big lawsuit) and policy changes without a death, but don't bet on it.
But, on the other end of the spectrum, having all sorts of absurd policy and procedure because someone might die so rarely that we can't even quantify the risk is terrible too.
People have always thought they could do anything. If you think this is crazy you should see some of the stuff people have been doing with cars and motorcycles for the last 5 decades.
I don't get how in the world someone can just enter the room when the device is on. Trusting people to read signs and follow the rules is borderline insane. A simple lock mechanism could have saved a life here.
>> I don't get how in the world someone can just enter the room when the device is on.
The magnet is always on. His wife was in the room. Unless you're previously aware of the dangers of an MRI machine it looks like any other exam room with some equipment in it. It's up to the staff to inform and keep people out and enforce that. IMHO he should not have even been in the outer room wearing a chain like that.
This article[1] has a good overview of safety procedures already in use at other facilities:
> Melonie Longacre, VP of Operations at Northwell Health, explained MRI safety protocols, emphasizing the importance of multizone procedures to ensure safety around the powerful magnet.
> "Zone I is just for awareness that there’s an MRI in the vicinity, Zone II is the patient screening zone where they get screened. Zone III is the post-screening zone, and Zone IV is the actual magnet room," she said. "It’s important to be educated and safe."
It's unclear if Nassau Open MRI (where this incident took place) had similar safety protocols. I'm guessing not.
Dude, exactly what I was thinking. Even if the staff weren’t telling me to remove it I would instinctively do the math:
big fat metal chain +
big fat powerful magnet
= disaster.
In fact, whenever I hear MRI I instantly think dental fillings. You’d think the patients and their handlers would instinctively think about all the metal they carry. How could a big fat metal chain on the neck not come to mind?
Great book! I already use Python for some simple projects and your book is at the perfect level of practicality for what I need.
Thank you!
Suggestion: create an epub version as well. It would be awesome to read it on a kindle or other e-ink devices.
I used to donate to wikipedia, but after seeing how wikimedia spends the money I stopped.
Just like Mozilla, they have plenty of money for their core product but spend a ton on other projects that I don't care for.
More emails and more meetings means workdays increased? Are they adjusting for the fact that in person you don't need as many emails and meetings and can just talk face to face?
I always felt that the neural engine was wasted silicon; they could add more GPU cores in that die space and redirect the neural-processing API to the GPU as needed. But I'm no expert, so if anyone here has a different opinion I'd love to learn from it.
I'm not an ML guy, but when I needed to train a NN I thought my Mac's ANE would help. In practice, despite it being way easier to set up tensorflow + metal + M1 on a Mac than tensorflow + cuda + nvidia on Linux, the neural engine cores are not used. Not even for classification, which is their main purpose. I wouldn't say they're wasted silicon, but they're way less useful than you'd expect.
Eyeballing 3rd party annotated die shots [1], it’s about the size of two GPU cores, but achieves 15.8 tflops. Which is more than the reported 14.7 tflops of the 32-core GPU in the binned M4 Max.
Not really. That's 15.8 fp16 ops compared to 14.7 fp32 ops (that are actually useful outside AI). It would be interesting to see if you can configure the ANE to recover fp32 precision at lower throughput [1].
It seems intuitive that if they design hardware very specifically for these applications (beyond just fast matmuls on a GPU), they could squeeze out more performance.
I was trying to figure the same thing out a couple months ago, and didn't find much information.
It looked like even ANEMLL provides limited low level access to specifically direct processing toward the Apple Neural Engine, because Core ML still acts as the orchestrator. Instead, flags during conversion of a PyTorch or TensorFlow model can specify ANE-optimized operations, quantization, and parameters hinting at compute targets or optimization strategies. For example `MLModelConfiguration.computeUnits = .cpuAndNeuralEngine` during conversion would disfavor the GPU cores.
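To make the conversion-time hinting above concrete, here's a minimal sketch using the `coremltools` Python API. This assumes a PyTorch model already traced with `torch.jit.trace`; the variable `traced_model`, the input name, and the shape are hypothetical placeholders, not anything from the original comment.

```python
# Sketch: converting a traced PyTorch model to Core ML while hinting
# that execution should prefer the CPU and Apple Neural Engine.
# Assumes coremltools is installed and `traced_model` is a
# torch.jit.trace'd model (placeholder here).
import coremltools as ct

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="x", shape=(1, 3, 224, 224))],
    # Disfavor the GPU: Core ML will schedule supported ops on the ANE
    # and fall back to the CPU for the rest.
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)
mlmodel.save("model.mlpackage")
```

Even with this hint, Core ML remains the orchestrator: whether an op actually lands on the ANE depends on whether it has an ANE-compatible implementation.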
Anyway, I didn't actually experiment with this, but at the time I thought maybe there could be a strategy of creating a speculative execution framework, with a small ANE-compatible model to act as the draft model paired with a larger target model running on GPU cores. The idea being that the ANE's low latency and high efficiency could accelerate results.
However, I would be interested to hear the perspective of people who actually know something about the subject.
If you did that, you'd stumble into the Apple GPU's lack of tensor-acceleration hardware. For an Nvidia-like experience you'd have to re-architect the GPU to subsume the NPU's role, and if that were easy, everyone would have done it by now.