You shouldn’t need any dedicated RAM. A decent microcontroller should be able to handle transcoding the output from the camera to the display and provide infotainment software that talks to the CAN bus or Ethernet.
And the bare minimum is probably just a camera and a display.
Even buffering a full HD frame would only require a few megabytes.
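For scale, a quick back-of-the-envelope (a sketch in Rust, assuming uncompressed 1080p at 2 bytes per pixel for YUV422):

    // Back-of-the-envelope: memory and bandwidth for uncompressed 1080p.
    fn main() {
        let (w, h) = (1920u64, 1080u64);
        let bytes_per_px = 2u64; // YUV422; RGB888 would be 3
        let frame = w * h * bytes_per_px;
        println!("one frame: {:.1} MB", frame as f64 / 1e6); // ~4.1 MB
        println!("at 30 fps: {:.2} Gbps", (frame * 30 * 8) as f64 / 1e9); // ~1 Gbps
    }

Even double-buffering that is two orders of magnitude under a gigabyte.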
Pretty sure the law doesn’t require an Electron app running a VLM (yet) that would justify anything approaching gigabytes of RAM.
I just went on Amazon and a 1 GB stick of DDR3 RAM is about 30% cheaper than a 128 MB stick of RAM. Why would any RAM company make tiny RAM chips when they can make standard-sized chips that work for every application that needs less?
I really feel like a lot of the people objecting in this thread are people who have just written web apps in Python whose closest experience with the audio-visual space is WebRTC.
Tech for cars is “standard-sized”. Not everything revolves around datacenters and tech; the car industry easily predates the computer industry and operates on much tighter margins under much stricter regulations.
So having a smaller, simpler chip that ultimately costs less physical resources at scale and is simpler to test is better when you’re planning on selling millions of units and you need to prove that it isn’t going to fail and kill somebody. Or, if it does fail and kill somebody, it’s simpler to analyze to figure out why that happened. You’ve also got to worry about failure rates for things like a separate RAM module not being seated properly at the factory and slipping out of the socket someday when the car is moving around.
Now - yes, modern cars have gotten more complex, and are more likely to run some software using Linux rather than an RTOS or an ASIC. But the original complaint was that a backup camera adds non-negligible complexity / cost.
For a budget car where that would even make sense, that means you’re expecting to sell at high volume and basically nothing else requires electronics. So sourcing 1GB RAM chips and a motherboard you can slot them into would be complete overkill and probably a regulatory nightmare, when you could just buy an off-the-shelf industrial-grade microcontroller package that gets fabbed en masse, dozens or hundreds of units to a single silicon wafer.
I simply refuse to believe the cost difference between a CPU with hundreds of megs of DRAM and the same chip with a gig of RAM is big enough to make the smaller part the appealing choice. We're not talking about a disposable vape with 3 KB of RAM; this is a car that needs to power a camera and sensors and satellite radio and matrix headlights or whatever. If it's got gigahertz of compute, there's no reason it's still got RAM sized for a computer from 30 years ago.
The original comment was complaining about backup cameras seemingly adding significant electronics requirements.
In practice, you’re not going to tie intimate knowledge of the matrix headlights into the infotainment system; that’s just bad engineering. At most it would know how to switch them on and off, maybe a few coarse settings like brightness or color or some kind of frequency adjustment, not worrying about every single LED, but I can’t imagine a budget car ever exposing all that to the end user. Even if you did, it would take a legendarily bad implementation to require a gigabyte of RAM to manage dozens of LEDs. Like, is it launching a separate Node instance exposing a separate HTTPS port for every LED at that point?
Ditto for the satellite radio. That can be, and probably is, a separate module, and it’s more of a radio / AV domain piece of tech that’s going to operate in a world that historically hasn’t had the luxury of gigabytes of RAM.
Sensors - if this is a self-driving car with 3D LIDAR and 360-degree image sensors, the backup camera requirement is obviously utterly negligible.
Remember, we had TV for most of the 20th century, before integrated circuits even existed, let alone computers and RAM. We didn’t magically lose the ability to send video around without the luxury of storing hundreds of frames’ worth of data.
Yeah, at some point it makes more sense to make or grab a chip with slightly more RAM so it has more market reach, but cars are manufactured at a scale where they actually are drivers of microcontroller technology. We are talking about a few dollars for a chip in a car being sold for thousands of dollars used, or tens of thousands of dollars new.
There is just no way that adding a backup camera is an existential issue for product lines.
Back in the mists of time, we used to do realtime video from camera to display with entirely analog components. Not that I'm eager to have a CRT in my dashboard, but live video from a local camera is a pretty low bar to clear.
Yeah, I cannot understand why people are thinking a gigabyte of RAM in this context save for their context being imagining what this would take with a Python HTTPS server streaming video via WebRTC to an Electron GUI running out of local Docker containers or something. Because that ought to be enough memory for an hour of compressed video.
It’s like saying your family of four is going to take a vacation, so you might need to reserve an entire Hyatt for a week, rather than a single room in a Motel 6.
> I cannot understand why people are thinking a gigabyte of RAM in this context save for their context being imagining what this would take with
Who's people? It isn't me, I was rounding to the nearest positive integer. And bastawhiz is arguing in the abstract about RAM prices so I don't see how they fit this complaint either.
> It’s like saying your family of four is going to take a vacation, so you might need to reserve an entire Hyatt for a week, rather than a single room in a Motel 6.
From my point of view, it's more like each room only holds one person so you can't just say "a room" (megabyte), and renting a whole hotel would only be 0.1% of the total vacation budget, so I simplify it and just say "rent a hotel" (gigabyte). It doesn't mean I think it's necessary, it means I'm pointing out how cheap it is and don't need to go deeper.
I tried to think of a wording that wouldn't get this response; I guess I failed. RAM is generally bought in gigabytes, and "1 or less" is as low as numbers go without getting overly detailed.
So what microcontroller do you have in mind that can drive a 1-2 megapixel screen from internal memory? I would have guessed that a separate RAM chip would be cheaper.
But mostly it’s the fundamental problem space from an A/V perspective. You don’t need iPhone-grade image processing - you just need to convert the raw signal from the CMOS chip to some flavor of YUV or RGB, and get that over to the screen via whatever interface it exposes.
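As a sketch of how small that per-pixel math is (this uses the common BT.601-style full-range approximation; the real raw-sensor path adds demosaicing, and exact coefficients depend on the standard):

    // Per-pixel YUV -> RGB (BT.601-style, full-range approximation).
    // This is the flavor of the whole "image processing" step for a
    // bare-bones camera-to-display path; no frame-sized state needed.
    fn yuv_to_rgb(y: u8, u: u8, v: u8) -> (u8, u8, u8) {
        let (y, u, v) = (y as f32, u as f32 - 128.0, v as f32 - 128.0);
        let r = y + 1.402 * v;
        let g = y - 0.344 * u - 0.714 * v;
        let b = y + 1.772 * u;
        (r.clamp(0.0, 255.0) as u8,
         g.clamp(0.0, 255.0) as u8,
         b.clamp(0.0, 255.0) as u8)
    }

    fn main() {
        // Mid-gray with neutral chroma stays gray.
        println!("{:?}", yuv_to_rgb(128, 128, 128)); // (128, 128, 128)
    }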
Broadcast HD (ATSC) was designed to be compatible with pretty stateless one-way transmission over the air. And that was a follow-on to analog encodings that were laid down based on the timing of the scanning CRT gun, derived from dividing the power line frequency, in an era where 1GB of RAM would be sci-fi. We still use 29.97 / 59.94 fps because of how the color signal was shimmed into 30 fps B&W when color TV was introduced in the mid-1900s; that’s how tight this domain is.
That board has a DDR3 chip on it. Is there one with HDMI that doesn't?
> But mostly it’s the fundamental problem space from an A/V perspective. You don’t need iPhone-grade image processing - you just need to convert the raw signal from the CMOS chip to some flavor of YUV or RGB, and get that over to the screen via whatever interface it exposes.
> Broadcast HD (ATSC) was designed to be compatible with pretty stateless one-way transmission over the air. And that was a follow-on to analog encodings that were laid down based on the timing of the scanning CRT gun, derived from dividing the power line frequency, in an era where 1GB of RAM would be sci-fi. We still use 29.97 / 59.94 fps because of how the color signal was shimmed into 30 fps B&W when color TV was introduced in the mid-1900s; that’s how tight this domain is.
If you're getting a signal that's already uncompressed and TV-like, then you probably don't need a processor at all. But I didn't want to assume you're getting that: a multi-Gbps signal running over a wire in a very hostile environment.
The more generic solution needs the ability to hold a couple frames in memory. Which probably means a RAM chip. Please don't focus so hard on the way I rounded the number. The point was that it's a negligible number of dollars. And you can use a much smaller chip than a gigabyte, but that doesn't save a proportional amount of money and the conclusion is the same: a negligible amount of dollars.
I guess I could have said "gigabit". Anything that got into specific numbers of megabytes would have been pointless detail. And it's megabytes minimum if there's a frame buffer.
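For what it's worth, here's that rounding spelled out (a sketch assuming 1080p RGB888):

    // Even triple-buffered 1080p RGB888 is a sliver of a 1 GB part.
    fn main() {
        let frame = 1920u64 * 1080 * 3; // RGB888, ~6.2 MB per frame
        for n in [2u64, 3] {
            let total = n * frame;
            println!("{} frames: {:.1} MB ({:.1}% of 1 GB)",
                     n, total as f64 / 1e6, total as f64 / 1e7);
        }
    }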
There was a library for Rust called “faster” which worked similarly to Rayon, but for SIMD.
The simpleminded way to do what you’re saying would be to have the compiler create separate PTX and native versions of a Rayon structure, and then choose which to invoke at runtime.
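For the runtime-choice half of that (leaving the PTX generation aside), the usual pattern in Rust looks something like this sketch; the function names are made up for illustration:

    // Compile two versions of a kernel, pick one at startup based on
    // what the hardware supports. A compiler-generated PTX path would
    // slot in as another variant chosen the same way.
    fn sum_scalar(data: &[f32]) -> f32 {
        data.iter().sum()
    }

    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "avx2")]
    unsafe fn sum_avx2(data: &[f32]) -> f32 {
        // A real version would use core::arch intrinsics; a plain loop
        // keeps the sketch short (the compiler may auto-vectorize it).
        data.iter().sum()
    }

    #[cfg(target_arch = "x86_64")]
    fn sum(data: &[f32]) -> f32 {
        if is_x86_feature_detected!("avx2") {
            // Safe: we just verified AVX2 is available on this CPU.
            return unsafe { sum_avx2(data) };
        }
        sum_scalar(data)
    }

    #[cfg(not(target_arch = "x86_64"))]
    fn sum(data: &[f32]) -> f32 {
        sum_scalar(data)
    }

    fn main() {
        let v: Vec<f32> = (0..8).map(|i| i as f32).collect();
        println!("sum = {}", sum(&v)); // 28
    }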
2. In practice, the risk of introducing a breakage probably makes upstream averse to refactoring for aesthetics alone; you’d need to prove that there’s a functional bug. But of course, you’re less likely to notice a functional bug if the aesthetic is so bad you can’t follow the code. And when people need a new feature, that will get shoehorned in while changing as little code as possible, because nobody fully understands why everything is there. Especially when execution speed is a potential attack vector.
So maybe shades of the trolley problem too - people would rather passively let multiple bugs exist, than be actively responsible for introducing one.
It reminds me of Google Dart, which was originally pitched as an alternate language that enabled web programming in the style Google likes (strong types etc.). There was a loud cry of scope creep from implementors and undo market influence in places like Hacker News. It was so poorly received that Google rescinded the proposal to make it a peer language to JavaScript.
Granted, the interests point in different directions for security software vs. a mainstream platform. Still, audiences are quick to question the motives of companies that have the scale to invest in something like making a net-new security runtime.
Pointless nitpick, but you want "undue market influence." "Undo market influence" is what the FTC orders when they decide there's monopolistic practices going on.
> It is the first model to get partial-credit on an LLM image test I have. Which is counting the legs of a dog. Specifically, a dog with 5 legs. This is a wild test, because LLMs get really pushy and insistent that the dog only has 4 legs.
I wonder if “How many legs do you see?” is close enough to “How many lights do you see?” that the LLMs are responding based on the memes surrounding the Star Trek episode “Chain of Command”.
I started with desktop applications, so my go-to for GUI has been Qt, especially QML. It works on Windows / macOS / Linux as well as iOS and Android. I think there’s now a way to compile QML to WebAssembly as well. It also has a ton of support classes that are loosely analogous to the various *Kit things supplied on iOS and Android.
The downside is that the core of Qt is in C++, so it’s mostly seen in (or used for?) embedded contexts.
I recently used Slint as well, which isn’t anywhere near as mature, but is at least written in Rust and has some type-safety benefits.
SwiftUI is pretty good too, and I wish I got to work on Apple platforms more.
To me, the simplicity of creating a “Button” when you want a button makes more sense, instead of a React component that’s a div styled by layers of CSS and brought to life by JavaScript.
But I’m kind of bummed that I started with that route (well, and writing partial UI systems for game / media engines a few times) because most people learned web apps and the DOM, and it’s made it harder to get the kind of work I identify with.
So it’s hard for me to recommend Qt due to the career implications…but at the same time, for the projects I’ve worked on, it’s made a smaller amount of work go a longer way with a more native feel than Electron apps seem to have.
Yes. And everyone is glossing over the benefit of unified memory for LLM applications. Apple may not have the models, but it has customer goodwill, a platform, and the logistical infrastructure to roll them out. It probably even has the cash to buy some AI companies outright; maybe not the big ones (for a reasonable amount, anyway) but small to midsize ones with domain-specific models that could be combined.
Not to mention the “default browser” leverage it has with iPhones, iPods, and watches.
Unified memory, and examples like the M1 Ultra still being able to hold its own years later, might be one of those things that not all Mac users, let alone non-Mac users, have experienced.
It's nice to see 16 GB becoming the minimum; to me it should have been 32 for a long time.
Slint does not use a browser. Instead, it has its own runtime written in Rust and uses a custom DSL to describe the UI.
It has APIs for different programming languages.
For JavaScript, it uses Node or Deno for the application logic, and then spawns the UI with its own runtime, without the browser.
In a way it is the opposite of the approach that took the JS runtime out of Electron and replaced it with a Rust API: Slint keeps the JS runtime but swaps the browser for its own runtime (from the JS dev's point of view).
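For a concrete picture, here's a minimal sketch of the Rust side (assuming the slint crate as a dependency; the UI DSL is embedded via the slint::slint! macro):

    // Minimal Slint app: the UI is described in Slint's DSL and drawn
    // by Slint's own runtime -- no browser or webview involved.
    slint::slint! {
        export component MainWindow inherits Window {
            Text { text: "Hello from Slint"; }
        }
    }

    fn main() -> Result<(), slint::PlatformError> {
        let window = MainWindow::new()?;
        window.run()
    }

The JS bindings wrap that same runtime, just with Node or Deno driving the application logic instead of Rust.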