Hacker News | fpopa's comments

Sorry for the unrelated comment.

---

Hey robomartin, I'm trying to get in contact with you.

https://news.ycombinator.com/item?id=26560799

Your comment really inspired me. I've mostly tried following methods from Leerburg, but I'm curious which other trainers you think are worth following and learning from.


Congrats on the launch.

Hopefully this will be the last pipelining language I'll learn.


I resonate with this so much; it felt very easy to fall out of touch with people I was closer to some time ago.

I've actually started building a more niche version of cal.com/Calendly that focuses exactly on this, but with a twist for coffee :) The main differentiator is a "time limit" that reminds you to meet X. Very similar to this A, B, C, or D system.

It feels robotic, but if it helps someone that's enough for me.



We've also got Romanian now: https://cuvantul.github.io/cuvantul


That was quick. I'm very impressed with your work.


I feel like you did most of the work :)


What kind of generated tests are you writing?

Is it more similar to 'golden files'? Generate expected output and assert versus current implementation output?


Just taking what would normally be a unit test and having the input values be generated. Some examples:

1. I have a test for encryption/decryption functions. The plaintext, additional data, and key are all generated. The assertions are:

    assert_ne!(plaintext, encrypted_data);
    assert_eq!(plaintext, decrypted_data);
    assert_eq!(aad, decrypted_aad);

etc.

2. I have some generated integration tests. For example, in our product, there are certain properties that should always hold for a given database entry. I generate a new entry on every test and have the fields for that entry provided by quickcheck, then I perform the operation, query the database, and assert that properties on those values hold.

So to answer your question, yes. Sometimes you want to check a concrete output (i.e. "this base64-encoded string should always equal this other value") for sanity, but in general property tests give me more confidence.

I find it works particularly well with a 'given, when, then' approach, personally.

edit: I'll also note that for the base64 case I'd suggest:

a) A hardcoded suite of values.

b) Generated property tests, e.g.:

    assert_eq!(value, base64decode(base64encode(value)));

As well as things like "contains only these characters" and "ends with [=a-zA-Z]", etc.

c) Oracle tests against a "known good" implementation.


Sounds like a sensible mix. There is really no single silver bullet.

We at https://symflower.com/ are working on a product to generate unit tests. Unlike quickcheck/proptest, we promise to find errors, even if they are unlikely (for example [this input](https://github.com/AltSysrq/proptest/blob/master/proptest/RE...) would be trivial for Symflower). Also, unlike fuzzing, our technology is deterministic.

Here's one of our blog posts that explains the approach: https://symflower.com/en/company/blog/2021/symflower-finds-m...


> we promise to find errors

What does this actually mean, because I assume you didn't disprove Rice's theorem?


I don't think Rice's theorem being disproven (or, more plausibly but still imo unlikely, P=NP) matters here. In fact, a somewhat loose interpretation of Rice's theorem would imply that you cannot "prove your way" out of having bugs of arbitrary classes, which seems entirely compatible with "our testing framework will find bugs".


Rice's theorem is applicable; the paradox is the usual trivial case,

    if symflowerSaysItsBroken() {
        DoTheRightThing()
    } else {
        DoTheBrokeThing()
    }
I have no doubt that symbolic analysis can find a large class of problems, but if you write stuff like "we promise to find errors" you will a) dupe a lot of junior devs who will genuinely believe your tool is doing impossible magic, b) alienate experienced devs who know that's 100% marketing fluff and find nothing more specific on your site.

I'm pretty much the ideal customer for a product like this - staff+ with final say about toolchain decisions on a product with a lot of data-driven behavior. But what I want to see isn't a vague "promise" (really? is it in a contract, such that you're liable for bugs your tool misses? of course not) but some actual comparison to quickcheck/fuzzing. I want to see that your approach finds a superset, or at least a mostly-disjoint set, of what we already have invested in, or finds the same set more effectively. Like, survey the bugs fuzzing has found in the Go stdlib and show me your tool finds them all faster.

I'm also skeptical of some "purely" coverage-driven approach from the start - coverage-driven fuzzing already has a tendency to miss interesting cases that don't come from branchiness - a classic example is subnormal value handling. Fuzzing usually still finds these eventually just by virtue of exhaustiveness; a tool driven only by symbolic methods better come armed with a _lot_ of encoded knowledge about the language semantics.


From looking at their stuff, I think their value comes from the way they promise to write tests that exercise more code paths than human-written tests can. This is different from the Go fuzz-testing approach, which requires you to write a specialized kind of test by hand. I can even imagine a future evolution of Symflower's product that writes fuzz tests for you.


The Go stdlib has property testing built in. It's not as powerful as some quickcheck frameworks, but it's right there. I wrote an article on it.

https://earthly.dev/blog/property-based-testing/


Well done, great demo!


This makes sense, did you implement this alongside grpc / protobuf?

I'm curious about the way you handled zero values; field masks could be a solution, but I think it would get bloated.


    func (db *actualStoreImplementation) GetTask(t *Task) error {
        if t.ID != 0 {
            // query by ID, mutate the parameter, return nil
        }
        if t.Tag != "" {
            // query by tag, mutate the parameter, return nil
        }
        return ErrWhatever
    }
Usually I have some other package that defines all of the types that can appear on the wire (which I often call `wire`, because `proto` is taken by protobuf). I define some exported interface in that package with an unexported method, so that no other packages can define new types for that interface, and then have a method on my db structs that returns the wire types, like this:

    func (t Task) Public() wire.Value {
        return wire.Task{
            // explicitly generate what you want
        }
    }


I did a similar thing, but used left / right head tilting for scrolling the page.

Reading and peeling oranges became easier.


I personally hate one-liners. I used to love them and feel good about how neat the code looked.

After some time I noticed that I read code much more easily and understand the flow better when the indentation is similar to Python or Go.

Also, I keep my editor on a vertical half of the screen.

