
I'm writing a game that uses both behavior trees and coroutines (via C# generators), and I have found that they are not quite interchangeable.

The main thing is that the execution model is different. I think of the behavior tree as being evaluated from the root on every tick, whereas with coroutines, you are moving linearly through a sequence of steps, with interruptions. When you resume a coroutine, it picks up exactly where it left off. The program does not attempt to re-evaluate any preceding code in the coroutine in order to determine whether it's still the right thing to be executing. By contrast, a behavior tree will stop in the middle of a task if some logic higher up in the tree decides that you need to be on a different branch.

What we end up doing is composing behavior trees with coroutines as leaf nodes. It works quite nicely, although I wish there was a way to express the structure of the behavior tree in a more elegant way. We do the obvious thing: each node in the tree is some subclass of a Node base class, representing a logical operation like "if" or "do these in parallel".
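A minimal sketch of that shape, in Python rather than C# (generators standing in for C# coroutines; all names are illustrative):

```python
RUNNING, SUCCESS, FAILURE = "running", "success", "failure"

class Node:
    def tick(self):
        raise NotImplementedError

class Sequence(Node):
    """Re-evaluated from the top on every tick; fails/suspends on the first
    child that isn't done, which is how higher branches can preempt lower ones."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Coroutine(Node):
    """Leaf node wrapping a generator; unlike the tree above it, it resumes
    exactly where it left off on each tick."""
    def __init__(self, factory):
        self.factory = factory
        self.gen = None
    def tick(self):
        if self.gen is None:
            self.gen = self.factory()
        try:
            next(self.gen)
            return RUNNING
        except StopIteration:
            self.gen = None
            return SUCCESS

def walk_to_door():
    for _ in range(3):  # pretend each yield is one frame of walking
        yield

tree = Sequence(Coroutine(walk_to_door))
statuses = [tree.tick() for _ in range(4)]  # three "running" ticks, then "success"
```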

I very much feel OP's angst about creating a pseudo-programming language-within-a-language by creating what are basically ad hoc AST nodes, but I haven't come up with a better solution. Maybe something like React, where you use basically imperative code to describe a structure, and there is some ambient state that gets properly reconciled by a runtime that you don't touch directly.


This is an old debate and yet one that is difficult to think through. I share your experience of having trouble convincing someone else not to combine things too hastily. Partly it's a "my gut feel is different to yours" situation, but I often don't have the confidence that I could articulate my reasoning without indulging in a wordy lecture on minutiae.

Lately I have been fixating on the following line of thinking: the unit of deduplication--usually a function, but sometimes even bigger--is the same thing as the unit of abstraction. When you dedupe, you've also given birth to a new abstraction, and those don't come for free. Now there's a new thing in the world that you had to give a name to, and that somebody else might come along and re-use as well, perhaps in a context you never intended. The new thing is now bearing the load of different concerns, and without anyone intending it, it now connects those concerns. The cost of deduplication isn't just the work of the initial refactor; it's the risk that those future connections will break something or make your system harder to understand.
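As a toy illustration of that coupling (hypothetical names, not from any real codebase): two similar call sites get deduped into one helper, and the helper starts sprouting flags as the callers' concerns diverge.

```python
# A helper born from deduplicating two similar snippets. As the callers'
# needs diverge, it accumulates flags, and the "abstraction" now couples
# both concerns instead of serving either one cleanly.
def format_price(amount, for_invoice=False):
    text = f"${amount:,.2f}"
    if for_invoice:
        text += " (incl. VAT)"  # invoice-only concern leaking into shared code
    return text

receipt_line = format_price(19.99)                    # receipts never wanted the suffix
invoice_line = format_price(19.99, for_invoice=True)
```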

This reminds me of another famous Carmack pronouncement about the value of longer functions [1], which I think has some parallels here. In the same way we're taught to DRY up our code, we're taught to break up long functions. I sort of think of these two things as the same problem, because I view their costs as essentially the same: they risk proliferating new and imperfect abstractions where there weren't any before.

[1] http://number-none.com/blow/blog/programming/2014/09/26/carm...


I understand your sentiment, but this line:

    When you dedupe, you've also given birth to a new abstraction, and those don't come for free.
It feels like you can write the inverse with equivalent impact, e.g., code duplication doesn't come for free.


Sure, but the costs of code duplication are well known. We know it increases the maintenance burden, can lead to bugs if you update one copy of something and forget the other, and so on.

So there can be an assumption that deduplicating only removes costs, when it may also create new ones. That removing something can make other things more difficult isn't intuitive for everyone.

Like all things in programming, there is a balance of pros and cons to each approach. Knowing when to use which approach is part of the profession, and everybody gets it wrong sometimes. And the environment can change and invalidate the choice, and then you're stuck deciding whether to change the abstraction or keep it. That's a hard choice as well.

Nothing in coding comes for free, but sometimes it can look like it does.


What's the cost? A slightly bigger binary & codebase? It seems like it's close to free to me. Am I missing a cost? Or are these costs bigger than I'm assigning them?


> What's the cost?

The cost is exactly what is pointed out in the original tweet:

> a requirement to keep two separate things aligned through future changes is an “invisible constraint” that is quite likely to cause problems eventually

Code changes, and if those two identical or similar pieces of code are likely to change together, now whenever you change one you have the cognitive load to also change the other, or risk having them go out of sync.

Of course, when the two pieces of similar code aren't likely to be changed together, they should be kept separate.


For sure. Most of the time when I copy-paste code from one place to another, a change in one place doesn't imply a change in the other. I certainly have seen that happen though.


I dunno, call me skeptical about Carmack's text.

I agree very much with using pure functions wherever you can. In fact, I would argue writing a pure function should be the default approach. (See the Functional Core, Imperative Shell talk on DestroyAllSoftware.com.) Let the compiler handle the inlining and memory optimizations and use const everywhere.

OTOH, Carmack doesn't even consider testing. Breaking up your code into multiple functions facilitates that a lot. On top of that, if your functions are pure, it is even easier to test them.
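To make that last point concrete with a trivial sketch (hypothetical function, not from Carmack's code): a pure function needs nothing but inputs and expected outputs to test.

```python
def apply_discount(total, rate):
    """Pure: the result depends only on the arguments; no I/O, no shared state."""
    return round(total * (1 - rate), 2)

# No mocks, fixtures, or setup required.
assert apply_discount(100.0, 0.25) == 75.0
assert apply_discount(0.0, 0.5) == 0.0
```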

He also doesn't consider the cost of reading & maintaining a piece of code that lives somewhere inside a big (multi-page) function. You have to keep track of all that function-global state. Side effects sprinkled all over the function are common. Ugghh.

> Besides awareness of the actual code being executed, inlining functions also has the benefit of not making it possible to call the function from other places. That sounds ridiculous, but there is a point to it. As a codebase grows over years of use, there will be lots of opportunities to take a shortcut and just call a function that does only the work you think needs to be done.

This, too, sounds a bit ridiculous today. Languages usually have access modifiers (public/private/…) or conventions to declare something "internal" to the class or module (e.g. `__foo` in Python). On top of that, you can always use something like ArchUnit to enforce your architectural rules and prevent usage of function X in module Y.
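For example, Python's double-underscore convention (a toy sketch, hypothetical class) makes the casual external "shortcut" call fail via name mangling:

```python
class Invoice:
    def __apply_discount(self, amount):
        # Name-mangled to _Invoice__apply_discount, so outside callers
        # can't casually reach for it as a shortcut.
        return amount * 0.9

    def total(self, amount):
        return self.__apply_discount(amount)

inv = Invoice()
result = inv.total(100)  # internal use works fine
try:
    inv.__apply_discount(100)  # the casual external call fails
    hidden = False
except AttributeError:
    hidden = True
```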

Yes, correctly cutting your modules & scopes is never easy. But this is simply at the heart of the game of software development.

The arguments regarding latency & performance are certainly valid but it feels like a very, very specific case he discusses. It's difficult to generalize the conclusions he draws from it.


This is a good insight. One quibble I’d make is to point to the “into a loop or function” in Carmack’s post. When you’re consolidating some repeated code into a loop, the weight of the abstraction is lower than a function. Also, the problem of “future maintainer pulls the abstraction out of necessary context” is less likely.


I can see how this could be a helpful reframing for some folks, but IME centering your attention on how to title yourself can have the reverse effect, which is to encourage you to indulge in an identity that doesn’t have much substance underneath.

I had a phase where me and my friends all thought of ourselves as writers and artists. At the extreme, there was a buddy of mine who would answer simple questions by prefacing with, “Well, as a writer, I tend to X.” And X would be any of the many secondary tendencies you might associate with writers: look at the world differently; ask annoying questions at parties; overanalyze pop culture; drink too much caffeine; procrastinate; joke about procrastinating, etc.

The problem is that most of us didn’t do the only X that matters, which is to actually write. And I think we knew this on a subconscious level, and it was why we were so angsty all the time. (Being angsty is another X that writers are supposed to do, so it was a vicious cycle.)

Writers who don’t write seems like a niche phenomenon of a narrow and privileged set, but I feel like I see this elsewhere. I’m an engineer these days, and I occasionally come across junior folks who have a similar thing going on. Especially in bigger orgs, you can see people struggle for years with this: there’s something about the job they like (perhaps what the job seems to say about them as individuals), but they have a hard time with the actual moment-to-moment work. I generally think it’s not my place to judge people, let alone gatekeep or call people out on it, but I sometimes feel that I did those folks a disservice by not telling them: hey maybe this just isn’t for you.


> The problem is that most of us didn’t do the only X that matters, which is to actually write. [...] Writers who don’t write seems like a niche phenomenon of a narrow and privileged set

Nah, I think this is super common across titles that people romanticize, mainly in the arts, since there's no real barrier to entry. (Unlike, say, claiming "I'm a lawyer" or "I'm a doctor.") I've seen tons of people say "I want to be/I am a musician" and then spend a bunch of time watching YouTube tutorials, hanging out in musician/producer Discords, etc. and not actually, you know, making music.

For a lot of people, "I want to do X" actually means "I want to have done X," and then reap all of the benefits that come from that: the sense of accomplishment, the fame, social media follows, whatever.

These days I'm usually very suspicious of people who make big public pronouncements about how they're starting X task, whether that's going to the gym, learning guitar, building something in Rust, or whatever. If you wanted to do the activity, you'd just do it without all the pomp and circumstance. Every time I've seen a friend on social media announce they're going to start a grand new adventure, they fizzle out after a month or two. The ones who get shit done will show up to a party looking amazing and casually mention, "Yeah, I've been hitting the gym."


>titles that people romanticize,

Very strange that right after the arts are two titles that are highly romanticized. Back in the days I owned my own business and had a sizeable medical client base, I cannot tell you how many doctors had to buy a BMW because their other doctor friend has a BMW and you're not one of the "doctor club" unless you own one.

I come from a family heavily involved in the criminal justice system, and the lawyers, police, and judges I know have the same problems with falling into common tropes.

And don't even get me started on engineers. Give them 10 minutes and we'll tell everyone how liberal arts are the end of the world ;)


> I cannot tell you how many doctors had to buy a BMW because their other doctor friend has a BMW and you're not one of the "doctor club" unless you own one.

I've seen it as well.

Also, medical professionals tend to only be around other medical professionals for most of their 20's and early 30's, which really helps create a kind of insular and closed culture. Having to match for residencies and fellowship doesn't help (you'll get shipped somewhere you know nobody and be forced to work long hours and your only support network will be colleagues). It's not that dissimilar to how a cult operates when you really think about it.

It's no surprise they come to identify strongly with the title and will do things to fit in with the "club".


> Very strange that right after the arts are two titles that are highly romanticized. Back in the days I owned my own business and had a sizeable medical client base, I cannot tell you how many doctors had to buy a BMW because their other doctor friend has a BMW and you're not one of the "doctor club" unless you own one.

New York investment bankers and stockbrokers. I don't think any of us want to know all the American Psycho shit that goes down in those professions, but I do know of one anecdote (which I can't cite but may have gotten from Hackernews years ago): Apparently, the thing to do if you're in high finance in NYC is to live in a posh apartment in Manhattan. You could live in a (relatively, this is NYC) cheap apartment in Brooklyn or the Bronx and save a bit of money to put towards retirement or whatever -- but you will be looked down on by your peers and passed over for promotions. The higher-ups want to see you "hungry", as they think it makes you more loyal and driven.


Tribalism?


Yeah, there have been studies showing that as soon as you tell someone about the thing you're intending to do, be it going to the gym or becoming a musician, you lose motivation. But in today's hyper-connected Instagram world, where people pour their hearts out online and chase follower counts for clout, narrating your own life is how you live it. It's one thing to proclaim you're going to go to the gym and get hella ripped like the Rock and then fail to follow through; it's another to be a smartphone-addicted person who posts every time they're at the gym. Point is, some people like the pomp and circumstance, others really hate the spotlight. What'll blow your mind is that those two groups often work together, with one person working behind the scenes and the other being the face of things. Ghostwriting isn't just about writing.


> "I want to do X" actually means "I want to have done X,"

I co-authored a book with someone, which ended up meaning I did 90% of the work and they could be prodded with considerable effort to contribute in a few areas and give feedback. But they were thrilled to have been an author and hand out copies etc.

No real harm from my angle. I have no issue with them being a co-author. Doesn't hurt me. But a perfect example of this principle. A former boss at a small company was a somewhat similar example. They liked being a $X. They came to not like doing the work of being a $X.


Agreed on the first point. I let other people call me an artist or photographer or pianist or whatever I deserve; I don't even need to agree with it. I've replied that holding a camera doesn't make me a photographer any more than standing in a garage makes one a car, or going to $RELIGIOUS_PLACE makes one a $RELIGIOUS_FOLLOWER. But I am growing into rocking the PhD title that I proudly earned.


>For a lot of people, "I want to do X" actually means "I want to have done X,"

I think in many cases it's more like "I want to want to do X." They think it would be great to be flowing with words, musical ideas, technical ideas, ready and motivated to create, but presently they are not.


> For a lot of people, "I want to do X" actually means "I want to have done X," and then reap all of the benefits that come from that: the sense of accomplishment, the fame, social media follows, whatever.

What's interesting to me is that sometimes even 'doers' feel this way. There are days when I absolutely love practicing and training and there are days when I wish I could reap the rewards without putting in the work.


In short: don't talk about it, be about it.


This is a trick I learned a long time ago, and why I still hate standup. If I need/want to do something, the more time I actually spend talking or thinking about it, the less likely it is to get done.

If you actually want to get it done, don't talk about it, just do it.


I’m often called an artist by people who know me IRL, which annoys me for a bunch of reasons. One is that I don’t see myself this way. I just sometimes do stuff that is art-adjacent. Another is that making it a noun instead of a verb reduces me entirely to that one side of me and also suggests that it is something very stable, something that I’m going to be for the rest of my life, because well this is who I am after all. “I shoot photos”/“I make films”/“I write poetry”/“I write software” has a very different shade than “I’m a photographer”/“I’m a film maker“/“I’m a poet”/“I’m a software developer”. The latter feels very reductive.


You can view it as a label, and the nice thing about labels versus boxes is that you can have a bunch of labels at once. Labels and identities are also temporary. Being something doesn't inherently mean you'll be that way forever. I say "I was a pilot" since I don't fly anymore, even though I still have the licenses. Someday it will be "I was a software engineer". Someday it will be "I was alive".


Nice that you think in that way :-) I wonder though if what you look at as labels, many others treat instead like boxes?

F.ex. if they've classified you as a software engineer, then ... I'm thinking it doesn't occur to most people that you might be a writer and musician too hmm


I’m sure that happens. In lots of cases there’s nothing I can do about that. In many cases, an hour or two together is enough for people to drop that with me, personally. I’m pretty “boundary dissolving”.

What I don’t think will ever really work, though, is getting introduced as a software engineer and being annoyed at people for it. Or, even speaking up, “that’s something I do, it doesn’t define me”.

It’s pretty hard to convince people of stuff by telling them. It’s just about impossible to convince someone you are not contained by an identity when it seems to have such a tight hold on you.


Any tips for dissolving boundaries :-) ? In the other direction too -- nicely finding out more about somebody else?

"What do you like doing when you aren't working" I say sometimes to others (what might you say to others?)


The real stuff: https://m.youtube.com/watch?v=3D3F5WPXmdo

I’m an awful interviewer, mostly in the sense that I just don’t do it. I certainly don’t give people the warm fuzzies by asking them questions that seem like I’m taking an interest in them.

What I do have going for me is a willingness and ability to bounce around between many different planes of reality and meet people on whichever one they choose (or that I can lure them to). The boundary dissolving is first internal: being willing to try many different things, be many different people. And never cast those old selves aside. Even if you throw out the tennis racket, don’t throw out the tennis player.

And when you talk with someone, feel around, see where you can meet them. And then meet them there, without preamble or apology. If you think they’re interested in tennis, just go into a conversation about it with the confidence of already being at a match together. This feeling out is also rooted in the present moment. I’ve probably asked someone “where did you go to school?” around three times in my life just to try it out, and was bored with myself before I finished the sentence. Can’t imagine asking “didja play any sports?”. But a game on the TV in the bar that someone is checking for the score is enough to see if we can meet there. Often we don’t, and that’s fine, too.


You are large, you contain multitudes. Different people will see you differently because your light is refracted through their experience with you.


I still wonder if "He (or she) is a poet ... And a software developer" causes a conflict to arise in many people's brains. They want it simple, pick one?

Whilst "He writes software and poetry" doesn't? Or to a lesser extent?

Personally I say I build software, not that I'm a software eng :-). (And that I practice the guitar ... for real)


I’m a poet, I do poet things.

Like write software, take photos and make films.


I just say what I do, which is write overly complicated code to make websites that have no right to be fancy.


> I had a phase where me and my friends all thought of ourselves as writers and artists.

The difference between this and what the article's talking about is that you never wrote, and you never made art. If you had, you could've credibly called yourself those things while you were doing them, even if sporadically and/or badly.

> "We become a runner when we start running a few days a week."

The article makes the point that it's okay to think of yourself as a [title] once you start doing the things a [title] does. From that POV it's not encouraging delusion, just generosity toward oneself.


I prefer it the other way.

If you're not writing, you're not a writer. So if you want to be a writer, you need to write, and doing it once doesn't mean anything; you have to keep doing it. If I once ate a vegan meal, it wouldn't mean that I'm vegan.

You are what you do. While you're writing you're a writer. The more time you spend writing the more of a writer you'll be.

If you want to call yourself a writer, you have to write.

That's why I always say, judge people not for their words, but for their actions.


The article actually seems to agree with you.

The advice seems to be "as long as you're writing, you're allowed to call yourself a writer. don't have an ethereal unachievable standard that needs to be reached first before you're allowed to use the title."


There is also the opposite: people who actually do a lot of a given thing, but dislike the culture of the people in said field, don't identify with them, and would never call themselves that thing. It's probably not as common, but it's there.


> I can see how this could be a helpful reframing for some folks, but IME centering your attention on how to title yourself can have the reverse effect, which is to encourage you to indulge in an identity that doesn’t have much substance underneath.

I agree. Cemented in my mind is the use of the word "activist" by all corners of our political spectrum, and especially among techies in the last decade. At present when I hear the word I have to walk myself through dropping all the gut-reaction feelings attached to that word because of the behavior of people who didn't know what they were doing and acted very poorly. This is an isolated example of one, but I'm sure it translates across subjects.


How often do you hear "As a parent ...", especially used to justify a completely unremarkable statement. "As a mother, it's important that my child is healthy" - duh.


There's some skepticism in the comments around the recommendation for xid. I'm curious if anyone here is using it in production at scale, and can comment on the practical realities.

I saw xid make the rounds about a year ago, and the promise of a pseudo-sortable 12-byte identifier that is "configuration free" struck me as a bit far-fetched.

In particular, I wondered if the xid scheme gives you enough entropy to be confident you wouldn't run into collisions. UUIDv4 doesn't eat a full 16 bytes of entropy for nothing. For example, if you look at the machine ID component of xid, it does some version of random assignment (either pulling the first three bytes from /etc/machine-id, or from a hash of the hostname). 3 bytes is 16777216 values, i.e., with 600 hosts you have a 1% chance of running into a collision. Probably too close for comfort?
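The ~1% figure follows from the usual birthday-problem approximation (a quick back-of-the-envelope check, not anything from the xid docs):

```python
import math

def collision_probability(n_hosts, id_space):
    """Birthday-problem approximation: P(at least one collision)
    when n_hosts draw IDs uniformly from id_space values."""
    return 1 - math.exp(-n_hosts * (n_hosts - 1) / (2 * id_space))

machine_id_space = 2 ** 24  # the 3-byte machine ID component
p = collision_probability(600, machine_id_space)
# p comes out to roughly 0.0107, i.e. about a 1% chance with 600 hosts
```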

There are settings where you can build some defense-in-depth against ID collisions, like a uniqueness constraint in your DB (effectively a centralized ticketing system). But there are many settings where that kind of thing wouldn't be practical. Off the top of my head, I'm thinking of monitoring-type applications like request or trace IDs.


No slight to anyone pursuing this strategy, but I also felt like using the shorter list for ranking candidate words was cheating. On the other hand, my solver ends up making silly suggestions that might be appropriate early in the game, but not for late guesses.

I've tried to think of what might be a good way of approximating a human's own sense of which words are viable Wordle answers. One thing I attempted was using a word frequency table to help bias the algorithm away from less common words. The funny thing is that some words rank low in word frequency but otherwise count as "common knowledge". For example, "tapir" was a Wordle solution a few weeks ago, but it ranked lower than some obvious non-answers like "grovy". It could be that the frequency table I was using was weird, but I can believe that there are well-known words that aren't used that often. Maybe I could come up with a better corpus (e.g. only use the NYT or some other newspaper) to use as the basis for a custom frequency table. Seems like a lot of work!
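The biasing itself can be a one-liner blend; a sketch with made-up frequency numbers (a real table would come from a corpus):

```python
import math

# Hypothetical counts, for illustration only.
word_freq = {"tapir": 120, "grovy": 0, "crane": 5000}

def score(word, info_gain, weight=0.5):
    """Blend the solver's information gain with log-frequency,
    so obscure junk ranks below words a human would actually guess."""
    return info_gain + weight * math.log1p(word_freq.get(word, 0))

# With equal information gain, the known-but-rare word beats the junk word.
ranked = sorted(["grovy", "tapir"], key=lambda w: score(w, info_gain=1.0), reverse=True)
```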


+1, I'd also recommend doing a low-poly lesson before attempting the donut tutorial. I did Imphenzia's low-poly tutorials after a few false starts with the donut tutorial, and found the low-poly approach to be much better at establishing a foundation for working with geometric primitives.


This is my experience as well. I use both Chrome and Firefox across Mac and Windows, and I find the experiences to be pretty much interchangeable. I'm usually someone who gets annoyed at minor differences, but this really is one area where I'm not bothered at all.

I use Firefox most of the time, out of a desire to support Mozilla and promote a non-Google browser. Admittedly, this is a political motivation, but it comes at pretty much zero practical cost. If you are interested in trying out Firefox, I think the only significant inconvenience is disabling Pocket.

Like the parent, I'll occasionally switch to Chrome for separate profiles, or if I need something specific in DevTools. Chrome has much better multi-profile support, in the sense that it has it at all, but again, the differences between the two are so minor that I don't have any qualms switching back and forth.


I agree it’s important for Chrome to have a competitor, but Firefox has proven itself not to be a successful one. I’ve lost faith in Mozilla’s ability to make decisions that win back users from Chrome. I wonder whether, if Firefox shut down, the dispersal of its users to other non-Google browsers might create enough momentum that one of them could start to become competitive with Chrome.


I recently tried switching from an MBP to an X1 Carbon, in a yet another attempt to switch to Linux as my daily driver. I ended up being thwarted, to my surprise, by the touchpad support.

The big problem I encountered was poor palm detection: I learned through this process that I have a tendency to use the touchpad with my right hand, while my left hand is still in typing position. This means that the edge of my left palm rests along the upper-left corner of the touchpad. This would result in the driver sometimes detecting two touchpoints instead of one, and I would end up scrolling when I meant to move the pointer.

The result of this was pretty amazing to me: after a few hours of use I found that my wrists, shoulders, and neck hurt because I was subconsciously trying to adjust my hands to avoid this. After a couple of days trying different drivers and tweaking config files, I ended up conceding that it just wasn't there yet.

It's too bad: I have lots of imposter syndrome about understanding my OS and using Linux seems like a great cure. But after literally decades of trying, what I've noticed is that hardware (or at least, my acclimating to new hardware) always advances enough past Linux's capabilities that the switch is difficult. In the past, it was about getting sound cards or WiFi working, and nowadays it's about things like touchpads and high-resolution displays.


I find that, with my Thinkpads, I don't use the touchpad at all. The trackpoint is a much more convenient pointing device, as it keeps my hands at home row.


Touchpad scrolling is nice. Yes keyboard shortcuts exist for scroll, but I for whatever reason vastly prefer touch-based.


On Thinkpads, there's also the thumb buttons below the space bar that act as mouse buttons. The middle one allows you to scroll with the trackpoint. It's very convenient.


Trackpoints are amazing! I just wish they were more popular. I have had to resort to building custom keyboards with trackpoints to overcome the limited availability.


And they never run out of space!


But when you push it in one direction too long, then let it go neutral, it'll have a bias in the other direction and move there by itself for a few seconds.


And then reset itself and stop, which I always found oddly satisfying


This is so surprising to me. I use Fedora on a Thinkpad T580 (Fedora keeps the kernel very fresh, with the best and latest hardware fixes in it), but I can't get the palm detection to fail even if I try. It just works with the out-of-the-box settings for me.


This is actually usually pretty configurable. You have to use the actual config file though and not the GUI to get the really tunable numeric settings of palm-size and whatnot.

Also, understand that there's libinput and the older synaptics driver, and you can use the other one if one isn't working for you. I have had better luck with libinput myself, but it probably depends on the hardware.


That is such a horrible user experience though, it's no different than having to hack apart your X11 config file to get the proper modeline support for your monitor, or to map your keyboard so your Super key works.

Thankfully we've moved past those problems for now, until you want to use multiple screens or use a touchpad.

Desktop Linux still has a long way to go if there is ever going to be any appreciable market share.


You can help improve the situation: The libinput project mentioned in the article is attempting to cut back on the amount of configuration and provide sane defaults for things like this, but there are a huge amount of touchpads out there to test. The idea is that once something is added to their database of "quirks", then it can be contributed upstream, and the device will be configured correctly for everyone. More info here: https://who-t.blogspot.com/2018/06/libinput-and-its-device-q...

If you figure out the right settings while using the legacy synaptics driver, please consider translating them into a libinput quirks file and sending them back to the right place. I promise you are not the only one who doesn't want to fiddle around with legacy drivers anymore.
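For reference, a quirks override is just a small ini-style file; the match strings and value below are placeholders for illustration, though `MatchUdevType` and `AttrPalmSizeThreshold` are real libinput quirk keys:

```ini
# /etc/libinput/local-overrides.quirks (values here are illustrative)
[My Laptop Touchpad Palm Override]
MatchUdevType=touchpad
MatchName=*Example Touchpad*
AttrPalmSizeThreshold=800
```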


Every time I try to use Linux, I come to the conclusion that the user experience makes it not worth the effort it would take to be a daily driver. I always feel like I just don't have the knowledge to get things done in a reasonably efficient manner... And the OS needs more polish.

There's always some annoying little problems that take a mountain of effort to properly diagnose and solve. Things like dual monitor support not working properly when the monitors have different resolutions, or YouTube dropping out of fullscreen when you focus on a different window.

To be fair, these kinds of issues are probably no problem for someone with lots of experience... But I'm not one of those.

I also find that the gap between windows and Linux has narrowed dramatically over the last 10 years. Whenever I try a fresh install (every couple years), I'm surprised at how much better it feels.


Linux is great on the server. It will never be good on the human interface, because hardware and UX trends evolve and fragment faster than the tiny user community can afford to support.


Hmm, I don't recall ever having a problem with palms on any touchpad, ever. On desktop I do usually use Windows or MacOS (mostly terminal on Linux), but I do occasionally use Linux (Ubuntu) on an aging HP Zbook, and never had an issue. I used to use Ubuntu desktop a lot quite a few years back, and don't recall any issues even then.


I switched to a 2015 MBP as my work laptop about a year ago, and it's the only device I have palm issues with. I've never, ever had any issues with other laptops (my previous work laptop was a T460). I guess the bigger-than-usual touchpad on the MBP is the main issue.


Linux will always lag, never lead, at problems where there are no paid developers, like the non-ChromeOS, non-Android user experience.


Question about the buffered writes and invalidation: is there anything that prevents race conditions between a Del() and a prior Set() on the same key? Just glancing at the source, it looks like Set() ends up in a channel send, whereas Del() seems to synchronously update the internal map.
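To make the suspected interleaving concrete, here's a contrived sketch in Python (standing in for the Go code; this is not Ristretto's actual implementation): Sets go through a buffer that a worker replays later, while Del hits the map directly, so a delayed replay can resurrect a deleted key.

```python
import queue

class BufferedCache:
    def __init__(self):
        self.data = {}
        self.buf = queue.Queue()

    def set(self, k, v):
        self.buf.put((k, v))    # buffered: applied later, like a channel send

    def delete(self, k):
        self.data.pop(k, None)  # synchronous: applied immediately

    def drain(self):
        # Stands in for the background goroutine replaying buffered writes.
        while not self.buf.empty():
            k, v = self.buf.get()
            self.data[k] = v

cache = BufferedCache()
cache.set("k", 1)      # queued, not yet visible
cache.delete("k")      # applied immediately (a no-op at this point)
cache.drain()          # replay runs after the delete and resurrects "k"
resurrected = "k" in cache.data
```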


I am not familiar with the Go code, but in the Java version (Caffeine) this is handled by a per-entry state machine. The entry is written to the Map first and the policy work buffered for replaying. If the replay occurs out-of-order (e.g. context switch before appending causing removal=>add), then the entry's state guards against policy corruption. This may not be necessary in their case due to using a sampling policy and other differences.


Feel free to file an issue. That could cause a correctness issue in Ristretto as of today.


This issue should be fixed by this PR: https://github.com/dgraph-io/ristretto/pull/62

