Since the goal is to create an illusion of real physics, I wonder when they'll just cut to the chase and start using actual physics engines.
Reason I mention it is neither this nor bezier curves deal with the target changing mid-animation very well. CSS just starts over from the current position, which breaks the illusion. A physics engine would maintain the illusion, and could be simpler to specify:
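A damped spring is the simplest version of such a physics simulation. A minimal sketch (all constants and names here are illustrative, not any real engine's API) of how a physics-driven animation handles retargeting:

```typescript
// Semi-implicit Euler integration of a damped spring, the kind of
// simulation JS animation libraries run per frame. Constants are made up.
interface SpringState { position: number; velocity: number; }

function springStep(
  s: SpringState,
  target: number,    // can change mid-animation; the spring just adapts
  stiffness: number,
  damping: number,
  dt: number,
): SpringState {
  const force = stiffness * (target - s.position) - damping * s.velocity;
  const velocity = s.velocity + force * dt;
  return { position: s.position + velocity * dt, velocity };
}

// Retargeting mid-flight keeps both position AND velocity continuous,
// so there's no visible "restart" the way a retriggered CSS transition has.
let s: SpringState = { position: 0, velocity: 0 };
for (let i = 0; i < 60; i++) s = springStep(s, 100, 170, 26, 1 / 60);  // head to 100
for (let i = 0; i < 300; i++) s = springStep(s, 40, 170, 26, 1 / 60);  // retarget to 40
console.log(Math.round(s.position)); // settles at the new target, 40
```

The spec surface is just stiffness and damping (or mass/tension/friction), rather than a duration plus an easing curve that has to be restarted on every target change.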
The article mentions spring animations, which is essentially a basic physics simulation. They’re very commonly used in higher level JavaScript animation frameworks, but not supported in native CSS.
What's messing with people's heads when it comes to predicting solar is the perfectly rational tendency to base projections on past and current data. I think it qualifies as a black swan event for this reason.
Ditto with batteries. I very frequently see people talk about batteries with notions from the 2000s about expense and manufacturing capability.
Batteries have become dirt cheap to make and we are very quickly getting to the point where the power electronics are actually the more expensive part of a battery deployment rather than the battery capacity itself.
Yes, you often hear this from nuclear advocates, saying that grid-scale battery storage is a pipe dream.
The real defining fact about solar and battery storage is that it is very amenable to mass production and scaling. Small modular components that can be produced in a factory and require minimal maintenance.
And you see people who confidently spout conclusions that have long been rendered obsolete. It's not just that they don't understand growth, it's that their conclusions ossified a decade or two ago. They're basing them on far past data, not recent past data.
As I understand it, this company's existing consumer-level product can already be used to capture the sound of an amplifier, but it's a snapshot of the device's (amplifier, effects pedal) settings in terms of gain, EQ, etc.
Meanwhile, they have an extremely labor-intensive set of techniques for modeling a device's analog circuitry, resulting in a model that allows the user to adjust gain, eq, etc. This isn't a consumer-level process; it happens in a laboratory somewhere, and the output is shipped as a software plugin or model on a digital effects unit.
This technology bridges the gap. Ultimately it's an unguided ML approach akin to the former, but it introduces ML-guided robotic knob-turning (AKA "TINA") which (unlike the former) maps continuous changes within the device's parameter space, allowing them to ship something more like the latter.
First there was the DIY Era, when layout options were limited and CSS implementations were riddled with bugs and browser differences. Most folks coded their own layouts using a small bag of tricks (e.g. float + clear-fix). It was messy, but it all fit in your head.
Then came the Framework Era, when lots more things became possible, but the size of the spec exploded, and with it the number of bugs and incompatibilities. A common choice at this point was to use a framework.
This article fits with the idea that we've entered the Reference Era. Implementations have matured enough that browsers do what you want without arcane hacks and workarounds. You just need a good cheat sheet, because the spec has long since stopped fitting into most people's heads.
> First there was the DIY Era, when layout options were limited and CSS implementations were riddled with bugs and browser differences. Most folks coded their own layouts using a small bag of tricks (e.g. float + clear-fix). It was messy, but it all fit in your head.
I call it the Neopets era because I first learned HTML from all the hacks people did to customize their personal pages.
Phone-free schools seem like an obvious way to fight this, but supposedly modern parents need constant access to their kids and tend to oppose the idea.
Americans might not get phone-free schools but others in places that have their shit together on gun regulations and police corruption/incompetence will.
In short, the police department's involvement (or lack thereof) has made parents feel that they can't rely on the police to communicate or coordinate during a mass shooting.
Hype: "Static Typing reduces bugs."
Shower: A review of all the available literature (up to 2014), showing that the solid research is inconclusive, while the conclusive research had methodological issues.
Static typing lets you do more complicated things by offloading a subset of complexity-management to robots. The remaining human-managed complexity expands until new development slows to a crawl, and no further human-managed complexity can be admitted to the system, similar to adding more lanes on a freeway.
Even if it doesn't reduce bugs (and how do we even measure this? in terms of bugs per loc? bugs per unit time?), it does make APIs easier to use (not even in terms of correctness, but in terms of time required to grok an API).
"Reduce bugs" is kind of a loaded term anyway. Static typing doesn't reduce bugs in an absolute sense, but I think it does reduce bugs per unit of value delivered. That's a lot harder to measure in a formal study.
> Static typing lets you do more complicated things by offloading a subset of complexity-management to robots
I've read some of the research on this! Yes, static typing improves documentation and helps you navigate code.
It also correlates with code quality and reduces smells. Inconclusive whether that's because of static typing or because more mature teams are likelier to choose static typing.
But all the research agrees: static typing does not reduce logic bugs. You can build the wrong thing just as easily with dynamic as with static typing. The only type of bug that static typing reduces is the sort of bug you'll find by running the code.
In my experience, static typing is best thought of as a way to reduce the need for manually written unit tests. Instead of writing tests that break when a function signature changes, you write types that break when you call functions wrong.
You still need tests for logic. Static typing doesn't help there.
This seems like a strong statement to make based on the research. What I've seen falls into several camps:
- research that made some conclusion about logic bugs for complete beginners on small assignments, with languages that have bad type systems
- research that had significant limitations making it impossible to generalize
- research that failed to demonstrate that static typing reduced bugs—which is very different from demonstrating that it didn't!
I haven't done a super thorough review of the literature or anything, but I have looked through a decent number of software engineering papers on the subject. The only strong conclusion I got from the research is that we can't get strong conclusions on the subject through purely empirical means.
Hell, the whole question is meaningless. "Static typing" is not one thing—there's way more difference between Java and Haskell than between Java and Python, even though both Java and Haskell are statically typed and Python isn't. (This is even assuming you completely ignore Python's type annotations and gradual typing!)
> The only type of bug that static typing reduces is the sort of bug you'll find by running the code.
This is a pretty solid argument in favor of static typing, then, unless you somehow have a test suite that exercises every possible code path and type variation in your codebase, and also keeps itself perfectly up to date. Because otherwise you're rarely running all of your code and verifying the result.
If "type bugs are an obvious thing and happen all the time" and "static Typing reduces type related bugs" then it should be easy to demonstrate this empirically. However, "a review of all the available literature (up to 2014), show[s] that the solid research is inconclusive while the conclusive research had methodological issues."
Why would you need an empirical study for this? It's trivially provable. Runtime exceptions in a language like JavaScript can arise from type mismatches. That's impossible in a language like Java, because the compiler catches the mismatch before you ever run the program. This eliminates an entire class of bugs.
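To illustrate the class of bug in question, here's a TypeScript sketch where `any` stands in for untyped JavaScript-style code (the tax-calculation scenario is made up):

```typescript
// With `any`, a string sneaks into arithmetic and JavaScript's `+`
// silently does string concatenation -- a wrong answer at runtime.
function addTaxUntyped(price: any, tax: any): any {
  return price + tax;
}

// The same function with precise types: passing a string literal here
// is a compile error, so this class of bug never reaches runtime.
function addTax(price: number, tax: number): number {
  return price + tax;
}

const fromForm: any = "100"; // e.g. an unvalidated form field
console.log(addTaxUntyped(fromForm, 5)); // "1005" -- silent wrong answer
console.log(addTax(100, 5));             // 105
// addTax("100", 5) is rejected by the compiler before the program runs.
```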
What you’re proposing here sounds like somebody saying “How do we know Rust results in fewer bugs than C++ without an empirical study?”, even though we _know_ Rust eliminates an entire class of memory-related bugs. I say this as a C++ advocate too. Any time I run into a memory bug, that’s a bug that would not have happened in Rust. Likewise, any time you run into a runtime exception due to a type mismatch (for example: expected an int, not an object), that is a bug that would not have happened with a type-safe language.
Edit: I also want to add that the metric is important. Is it number of bugs per line of code? What does that even mean? Assembly programs consist of many more lines of code because each instruction does so little, yet the number of bugs in an assembly program will probably be greater than in a higher-level language. The sheer line count would push the metric down and make it seem like assembly has a low number of bugs per line of code. Because of this, bugs per line of code isn’t a useful metric.
The only way I could think of measuring this would be to have two feature-for-feature equivalent projects in two different languages and compare the number of bugs in each. But even that probably has a bunch of flaws.
I think you're right that static typing reduces bugs, but I am not convinced the reduction is significant or meaningful. If static typing has a significant effect, then why is the existing research so weak and inconclusive?
I don't understand your point about metrics and measurement. Are you saying the effect of static typing is so small that it is completely dominated by other confounding factors and thus cannot be measured?
My point about metrics is why I think the research is inconclusive. It’s very difficult to get a metric that’s meaningful in this context. If you said: this code base on average has 1 bug per 100 lines of code, that doesn’t say anything meaningful. If that code is assembly, that’s not very good, because 100 lines of assembly accomplish so little. Whereas if that code is Python or Ruby, that’s much better because of how concise those languages are.
Because of this, I feel like the only way to truly measure whether or not static typing has a significant effect would be to create two equivalent projects. Say you created Stack Overflow in Python and in C#. Then you could compare the quantity of bugs and see if it differs. But even this has problems: who knows how many bugs haven’t been caught? Is the code truly equivalent? Did the people who wrote the two codebases have slightly different experience, resulting in differing numbers of bugs?
There’s too many variables in an experiment like this to conclusively determine whether or not static typing reduces the bugs. But, I don’t think that means that we can’t infer that eliminating a whole class of bugs is helpful.
Edit: the more I try to think about my reasoning the more I’m thinking it’s flawed. I think the answer to whether or not static typing reduces bugs is unknowable, but I strongly believe that it helps. Maybe we’ll get a study that isolates this metric one day :)
I think the important question is: at what cost? E.g., if it takes me 4x more time to write statically-typed code, and it gives me only 10% fewer bugs (completely made up numbers here), is that worthwhile? Maybe, if I'm programming self-driving cars or autopilot software for aircraft. Probably not if I'm programming a web calendar for dog sitters.
But this is where the studies come in. Lots of people think this is true. And it seems perfectly reasonable. But there's really no research to back this up.
It's even worse than that. If it saves me 10% bugs per unit of code, but I have to write 20% more code, am I actually even ahead in the bugs department?
Dynamic typing doesn't change or improve this, though, so I'm not sure what point is being made. I'm also not sure I agree with its premise anyway.
A literature review I did in 2020 was actually pretty conclusive about it also reducing bugs. I think we might be missing some of the later literature here.
Are you still going to use the word software or compiler or do you plan on switching over to calling everything a robot? Is your coffee maker a robot too?
I'd be ok calling my coffee maker a robot. It's got a cpu and sensors, and is capable of limited manipulation of its environment (via a heating element).
But to the main point, I read "robots" as a metaphor. Metaphors can be situational; just because I might call a compiler a "robot" in one context doesn't mean I have to call it that every time.
And it's not as if there isn't long-standing precedent for using "robot" to refer to a piece of software. Have you ever heard of a "robots.txt" file? People complaining about "bots" on various social media sites?
Yeah, this is the one I pull up when I need a reference or a quick refresher. When it's been a while, I play a quick run-through of the grid garden game: https://cssgridgarden.com/
Same! I even have them framed behind my screen where I can peek from time to time. Even after so many years, I can't remember which ones are "justify" and which ones are "align", lol.
I normally just set display: flex on an element and then use Chrome’s dev tools. You can click on an icon next to the declaration and it brings up a UI with all the flex alignment properties.
> So put your EV charger on the same circuit as the dryer. Then as long as you don't need to charge the EV at the same time you are running the dryer as far as your house wiring is concerned it is like you are running an extra two loads of clothes through the dryer every day.
I asked an electrician about this once and got lectured. Apparently per regulations a dryer needs to be on a dedicated circuit, and tapping into that line is a big no-no. Nor do you want to depend on a breaker as a fail-safe at this amount of current. Instead he ran a separate 50-amp line to my garage terminating in a single outlet that I can do whatever I want with: EV, welder, whatever.