Locality. People want to work close to where they live, and not all places are bustling with all kinds of activity. I suspect you're hybrid or on-site only, right?
Not the GP, but we're hybrid yet remote-first (about 80% remote), and we have the same experience. Getting juniors is easy; getting senior+ is very difficult.
The model I'm describing matches this. Speaking from my own personal experience as well: when you're junior and young, you can move anywhere, especially if you're ambitious. As you gain experience, you also settle down a bit in your life: you have a wife, kids, a house, their jobs and their schools. Moving then is a _big deal_.
Of course, there are other factors that make juniors more abundant in the current job market, namely that most companies don't want them.
That absolutely makes sense, but I'm not sure it's the reason. I mentioned we're remote-first: we hire _everywhere_. I've been with this company for 7 years, haven't traveled to HQ even once, and have worked every day from home or a spot of my choosing (honestly, that spot is almost always home!). That's how remote-first we are: nobody has to uproot their life to work with us.
But it's still extremely hard to find senior+ people. I'm sure our tech stack plays a role, and naturally senior developers are much less common than juniors. But whenever I hear about the job market being super hard, I feel like I'm living in a parallel universe.
AI is not replacing anyone from my perspective, but it might become our only hope at some point, because we're growing aggressively. I have to keep mediocre people because I can't easily replace even at that level; the only ones I'm pruning are the net-negative contributors.
Ah, sorry, I misunderstood your original post then; I interpreted "hybrid, remote first" as: you can be remote most days but you _need_ to be in the office a couple of days. This just goes to show that the hybrid model has _a lot_ of variants.
Back to the point: I think I'm pretty senior, mostly embedded SW. Thankfully I still have work, but the job market seems to have cratered. I have friends who are pretty good and have been looking for jobs for about half a year now.
I'm incredibly curious now what your tech stack is, and how you guys view people looking to switch tech stacks.
I'm sorry, but I, just like your admins, don't believe this. It's theoretically possible to have "undetectable" errors, but it's very unlikely, and if that were happening you'd also see a much higher incidence of detected unrecoverable errors, and of repaired errors, than this. I just don't buy the "invisible errors" argument.
EDIT: I took a look at the paper you linked and it basically says the same thing I did. The probability of these cases becomes vanishingly small, and while ECC would indeed not reduce it to _zero_, it would greatly reduce it.
Ok, I am sure there is _some_ amount of unrepairable errors.
But the initial discussion was that ECC RAM makes it go away, and your point was that it doesn't. The vast, vast majority of errors, according to my understanding and to the paper you pointed to, are repairable: only about 1 in 400 errors is non-repairable. That's a huge improvement! If you had ECC RAM, the failure rate Firefox sees here would drop from 10% to 0.025%. That is highly significant!
Even better: you would now be informed of 2-bit errors! You would _know_ what is wrong.
You could have 3(!)-bit errors, and those you might not see, but they'd be several orders of magnitude rarer still.
So yes, it would not go away 100%, but 99.9% of it would. That's... making it go away in my book.
And last but not least, this paper talks about uncorrectable errors. It says nothing about undetectable ECC errors! You said _undetectable_ errors. I'm sure they happen, but I'd be surprised if you see any meaningful incidence of them, even at terabytes of data. It's probably on the order of 0.000625 of the errors you get (but if you want I can do more solid math).
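To spell out the arithmetic behind the 10% -> 0.025% claim, here's a tiny back-of-the-envelope sketch; the 10% figure and the ~1-in-400 uncorrectable rate are the ones from this discussion, nothing else is measured:

    # Back-of-the-envelope only; both rates come from the thread above, not from new data.
    observed_failure_rate = 0.10        # share of the Firefox failures attributed to memory errors
    uncorrectable_fraction = 1 / 400    # roughly 1 in 400 ECC errors is multi-bit / non-repairable

    # With ECC, only the uncorrectable fraction of those errors would still slip through.
    residual_failure_rate = observed_failure_rate * uncorrectable_fraction
    print(f"{residual_failure_rate:.3%}")   # 0.025%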
You're thinking in terms of independent errors. I would think that this assumption is often not the case, so 3 errors right next to each other are comparatively likely to happen (far more likely than 3 independent errors). This would explain such 'strange' occurrences with ECC memory.
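To illustrate why the correlation matters (with entirely made-up rates, just to show the shape of the argument): under independence, three flips landing in one 64-bit word scale as p^3, while a single physical event that hits adjacent cells scales linearly with its own rate, so correlated multi-bit errors can dominate by many orders of magnitude.

    from math import comb

    # Illustrative numbers only; these rates are assumptions, not measurements.
    p_bit = 1e-9            # assumed chance a given bit flips independently in some interval
    p_burst = 1e-12         # assumed chance one event flips 3 adjacent bits in the same word
    bits_per_word = 64

    # Independent model: leading term for exactly 3 flips landing in one ECC word.
    p_three_independent = comb(bits_per_word, 3) * p_bit ** 3

    print(f"{p_three_independent:.1e} vs {p_burst:.1e}")  # ~4.2e-23 vs 1.0e-12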
I think we diverge on ‘making it go away in my book’.
When you’re the one having to debug all these bizarre things (there were real money numbers involved, so these things mattered), over millions of jobs every day, rare events with low probability don’t disappear - they just happen and take time to diagnose and fix.
So in my book ECC improves the situation, but I still had to deal with bad DIMMs, and ECC wasn’t enough. We didn’t use to see these issues because we already had too many software bugs, but as we got increasingly reliable, hardware issues slowly became a problem, just like compiler bugs or other elements of the chain usually considered reliable.
I fully agree that there are lots of other cases where this doesn’t matter and ECC is good enough.
Oh, I get this point. If you have a sufficiently large amount of data and you monitor the errors, and your software gets better and better, even low-probability cases will happen and will stand out.
But this is sort of the march of nines.
My knee-jerk reaction to blaming ECC is "naaah", mostly because it's such a convenient scapegoat. It happens, I'm sure, but it would not be the first explanation I reach for. I once heard someone blame "cosmic rays" for a bug that happened multiple times. You can imagine how irked I was at the dang cosmic rays hitting the same data with such consistency!
Anyways, I'm sorry if my tone sounded abrasive; I, too, have appreciated the discussion.
No, you were not abrasive at all - I’ve learned to assume good faith in forum conversations.
In retrospect I should actually have started by giving the context (the march of 9s is a good description), which would have made everything a lot clearer for everyone.
> Probably meaningless because if things progress as they have, anyone can just copy what I've built
Most other people lack the prerequisite skills even with Claude at their disposal. And it was always the case that other people could copy your things; it's just that the effort used to be much higher, and now it's more accessible.
Regardless, I would suggest you don't let this deter you from bringing something new into the world. It might have enough value to make it all worth it. Or not, but if you don't release it, you'll never find out for sure.
I like working on it and will definitely release it, I just don't want to set myself up for disappointment later. I'll try to enjoy the process of creation instead. But it could also be that it's something that has value for a few years, and maybe that's enough.
Absolutely not. Learning is experimenting with the thing until you form an effective mental model of it. Writing things down does ab-so-lute-ly nothing except make you feel good in the moment, just like listening to a lecture without engaging with the subject matter more deeply.
Writing things down is important for organisational persistence of information but that is something else.
Choosing what to write down is making a mental model, extracting the core and thinking about the subject.
Seems to me you're just a bad note-taker who blindly writes things down, and for some reason decided to use that lack of knowledge in a tirade against me.