Hacker News

The post contains a common fallacy that you'll recognize if you've tested substantially sized games:

What ECS architecture does is offload the testing burden from code into data. The code itself can do the right thing in all circumstances, but as you componentize your higher-level behaviors, more business logic becomes bindings of data to other data. So even with a 100% green test suite, you still have bugs, and all of them take the form of "mysterious runtime behavior". The engine code doesn't know how to automatically check for a "good data shape" or flag a bad one as a bug; you can add checks, but that's project-specific and asset-specific compiler development: a missing animation is a different kind of bug from a typo in a text prompt, or a hitbox that's slightly misplaced. Thus, for a lot of categories of assets, your cheapest test is still to periodically eyeball things and say "that's in spec" or "that's fishy".
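To make the point concrete, here's a minimal sketch of the kind of project-specific "data shape" check being described. Every field name here (sprite, animations, hitbox, prompt_text) is hypothetical; a real project would have its own asset schema, and each check encodes one hand-written notion of "in spec":

```python
# Hypothetical asset linter: each check is one project-specific,
# asset-specific rule, i.e. the "compiler development" the comment
# mentions. None of these field names come from a real engine.

def lint_entity(entity: dict) -> list[str]:
    """Return human-readable warnings for suspicious asset data."""
    warnings = []
    # A sprite with no animation list is a missing-asset bug...
    if "sprite" in entity and not entity.get("animations"):
        warnings.append("sprite has no animations (missing asset?)")
    # ...which is a different kind of bug from a misplaced hitbox...
    hitbox = entity.get("hitbox")
    if hitbox is not None:
        x, y, w, h = hitbox
        if w <= 0 or h <= 0:
            warnings.append("hitbox has non-positive size")
        if abs(x) > 512 or abs(y) > 512:
            warnings.append("hitbox offset looks misplaced")
    # ...which is different again from a sloppy text prompt.
    prompt = entity.get("prompt_text", "")
    if prompt and prompt != prompt.strip():
        warnings.append("prompt text has stray whitespace")
    return warnings

print(lint_entity({"sprite": "goblin.png", "hitbox": (0, 0, -4, 16)}))
```

Each rule has to be written and maintained per project, which is exactly why eyeballing often stays the cheapest test.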

ML AI might actually be able to change this scenario by acting more like human players and deriving more second-order metrics.



I think you're reading something into the post that isn't being argued, though. The author is not claiming that using ECS and adding tests will leave your game bug-free; the claim is that testing this way makes development easier, because you reduce the chance of inadvertently introducing a bug in one of your systems and then having to rely on QA to find it later. (Components too, I guess, but component "tests" can largely be covered by types, since components are just data and the systems are what interact with that data.) The kinds of bugs you mention are of course different, and some of them aren't worth the trouble of even trying to test. But if your game is shaped as basically a collection of code modifying data that represents the game, you can eliminate a lot of manual QA testing by just automating it.
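A minimal sketch of why ECS systems are easy to unit test this way: a system is just a function over component data, so a test needs no engine, scene, or renderer. All names here are hypothetical, not any particular engine's API:

```python
# Toy ECS: components are plain data, a system is a pure-ish
# function over them. Testing it is plain input/output checking.
from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

def movement_system(entities: dict, dt: float) -> None:
    """Advance every entity that has both a Position and a Velocity."""
    for pos, vel in entities.values():
        pos.x += vel.dx * dt
        pos.y += vel.dy * dt

# The "test" needs nothing but data in, data out:
world = {1: (Position(0.0, 0.0), Velocity(2.0, -1.0))}
movement_system(world, dt=0.5)
assert world[1][0].x == 1.0 and world[1][0].y == -0.5
```

This catches logic bugs in the system itself; as the parent comment notes, it says nothing about whether the data fed to it is sensible.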


And it's not just games, this trend of moving code into data. I've been at several companies whose fundamental logic was a configuration interpreter: very complex, tied to a gigantic set of XML or JSON "configurations" which themselves had interdependent logic encoded as structure.

So while the first iteration 20 years ago may have been all fun and giggles ("look, a monkey could edit a few knobs and it all moves around fine"), today it's a tangled mess that an ever-rotating set of new people build on top of, with no validation tool, no formalized syntax, and no ability to test or explain the holistic logic of it.

So, as you say, we end up running the thing and checking whether the output makes sense, the code working more by luck than by engineering.
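The missing validation tool can start very small: a pass that checks the cross-references inside the config before the interpreter ever runs. The config shape below is entirely hypothetical, just to show the idea of structure-as-logic being checked statically:

```python
# Hypothetical config where ordering logic lives in the data:
# rules reference pipeline steps by name, so a typo in a name is
# a logic bug that only surfaces at runtime -- unless we lint it.
import json

config_text = """
{
  "rules": [
    {"id": "discount", "applies_after": "base_price"},
    {"id": "tax", "applies_after": "discont"}
  ],
  "steps": ["base_price", "discount", "tax"]
}
"""

def validate(config: dict) -> list[str]:
    """Flag rules that reference steps that don't exist."""
    errors = []
    known = set(config["steps"])
    for rule in config["rules"]:
        target = rule["applies_after"]
        if target not in known:
            errors.append(
                f"rule {rule['id']!r} references unknown step {target!r}"
            )
    return errors

print(validate(json.loads(config_text)))
```

This flags the "discont" typo at build time, the kind of bug that would otherwise show up only as mysterious runtime behavior.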


> ML AI might actually be able to change this scenario by acting more like human players and deriving more second-order metrics.

New startup idea. Probably applies to any "hard to test" thing.



