I resonate so much with this; it felt very easy to fall out of touch with people I was closer to some time ago.
I've actually started building a more niche version of cal.com/Calendly that focuses on exactly this, but with a twist for coffee chats :) The main differentiator is setting a "time limit" that reminds you to meet X. Very similar to this A, B, C, or D system.
It feels robotic, but if it helps someone that's enough for me.
Just taking what would normally be a unit test and having the input values be generated. Some examples:
1. I have a test for encryption/decryption functions. The data provided for the plaintext, additional data, and key is generated. The assertions are:
assert_ne!(plaintext, encrypted_data);
assert_eq!(plaintext, decrypted_data);
assert_eq!(aad, decrypted_aad);
etc
2. I have some generated integration tests. For example, in our product, there are certain properties that should always hold for a given database entry. I generate a new entry on every test and have the fields for that entry provided by quickcheck, then I perform the operation, query the database, and assert that properties on those values hold.
So to answer your question, yes. Sometimes you want to check a concrete output (i.e. "this base64-encoded string should always equal this other value") for sanity, but in general property tests give me more confidence.
I find it works particularly well with a 'given, when, then' approach, personally.
edit: I'll also note that for the base64 case I'd suggest:
I don't think Rice's theorem being proven invalid (or, somewhat more plausibly but still imo unlikely, P=NP) is important to this. In fact, a somewhat loose interpretation of Rice's theorem would imply that you cannot "prove your way" out of having bugs of arbitrary classes, which seems entirely compatible with "our testing framework will find bugs".
Rice's theorem is applicable; the paradox is the usual trivial case:

    if symflowerSaysItsBroken() {
        DoTheRightThing()
    } else {
        DoTheBrokeThing()
    }
I have no doubt that symbolic analysis can find a large class of problems, but if you write stuff like "we promise to find errors" you will a) dupe a lot of junior devs who will genuinely believe your tool is doing impossible magic, b) alienate experienced devs who know that's 100% marketing fluff and find nothing more specific on your site.
I'm pretty much the ideal customer for a product like this - staff+ with final say about toolchain decisions on a product with a lot of data-driven behavior. But what I want to see isn't a vague "promise" (really? is it in a contract, that you're liable for bugs your tool misses? of course not) but some actual comparison to quickcheck/fuzzing. I want to see that your approach finds a superset, or at least a mostly-disjoint set, of what we already have invested in, or finds the same set more effectively. For example, survey the bugs fuzzing has found in the Go stdlib and show me your tool finds them all, faster.
I'm also skeptical of some "purely" coverage-driven approach from the start - coverage-driven fuzzing already has a tendency to miss interesting cases that don't come from branchiness - a classic example is subnormal value handling. Fuzzing usually still finds these eventually just by virtue of exhaustiveness; a tool driven only by symbolic methods better come armed with a _lot_ of encoded knowledge about the language semantics.
From looking at their stuff, I think their value comes from the way they promise to write tests that exercise more code paths than human-generated testing can. This is different from the Go fuzz testing approach, which requires you to write a specialized kind of test by hand. I can even imagine a future evolution of Symflower's product that writes fuzz tests for you.
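For comparison, here is a sketch of the hand-written artifact Go's native fuzzing requires. `reverseBytes` is a stand-in function invented for the illustration; the commented-out `FuzzReverse` shows how it would be wired up in a `_test.go` file:

```go
package main

import "fmt"

// reverseBytes returns a reversed copy of b. Reversing twice must
// round-trip, which is the property the fuzz target would assert.
func reverseBytes(b []byte) []byte {
	out := make([]byte, len(b))
	for i, c := range b {
		out[len(b)-1-i] = c
	}
	return out
}

// In a _test.go file, the hand-written fuzz target would be:
//
//	func FuzzReverse(f *testing.F) {
//		f.Add([]byte("hello")) // seed corpus
//		f.Fuzz(func(t *testing.T, b []byte) {
//			if got := reverseBytes(reverseBytes(b)); !bytes.Equal(got, b) {
//				t.Fatalf("round-trip failed for %q", b)
//			}
//		})
//	}
//
// and `go test -fuzz=FuzzReverse` would drive it with generated inputs.

func main() {
	// Exercise the same property over a few fixed seed values.
	for _, seed := range [][]byte{nil, []byte("a"), []byte("hello")} {
		twice := reverseBytes(reverseBytes(seed))
		fmt.Printf("%q round-trips: %v\n", seed, string(twice) == string(seed))
	}
}
```

The point is that the property itself, the seed corpus, and the harness all have to be written by hand; a tool that derived them from the code under test would be doing that work for you.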
    func (db *actualStoreImplementation) GetTask(t *Task) error {
        if t.ID != 0 {
            // query by ID, mutate the parameter, return nil
        }
        if t.Tag != "" {
            // query by tag, mutate the parameter, return nil
        }
        return ErrWhatever
    }
usually I have some other package that defines all of the types that can appear on the wire (which I often call `wire` because `proto` is taken by protobuf). I define an exported interface in that package with an unexported method, so that no other package can define new types satisfying that interface, and then have a method on my db structs that returns the wire types, like this:
    func (t Task) Public() wire.Value {
        return wire.Task{
            // explicitly generate what you want
        }
    }
---
Hey robomartin, I'm trying to get in contact with you.
https://news.ycombinator.com/item?id=26560799
Your comment really inspired me. I've mostly tried following methods from Leerburg, but I'm curious which other trainers you think are worth following and whose methods are worth trying.