
Part of me wonders if the recruiting itself is over-engineered anyway. I mean, imagine if you just asked:

Implement bubble sort, in two languages of your choice, with multithreading or other language-provided parallelism, using the number of parallel threads that is most algorithmically efficient.

Would that really not weed out a lot of people? I think it would. I know the above question is hardly production-ready, but the requirements are easy to understand. (It's also a bit of a trick question: there is no algorithmically optimal number of threads for a bubble sort, only the number of CPU cores in the system.)
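For reference, the baseline being asked about is tiny. A plain single-threaded bubble sort (Python is an arbitrary choice here) is only a few lines:

```python
def bubble_sort(items):
    """Repeatedly sweep the list, swapping adjacent out-of-order
    elements, until a full pass makes no swaps."""
    n = len(items)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items
```

The parallelism requirement is the only part that takes real thought; the sort itself is the lowest bar.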



Our coding problem is easier than that (by a fair margin, it's all completely synchronous single-threaded procedural code unless you're doing something extremely weird) and it weeds out the vast majority of applicants.

The same was true of all three of the standard coding problems Triplebyte used. They're not quite literal fizzbuzz, but they require, at most, some basic critical thinking and basic language features, and that is a filter that eliminates 90+% of applicants to dev jobs. Now, granted, this is under time pressure. I imagine, given several hours, most could finish it (although maybe even that is overestimating things). But still.

There's an old Randall Munroe article quoting a physicist:

> The physicist who mentioned this problem to me told me his rule of thumb for estimating supernova-related numbers: However big you think supernovae are, they're bigger than that.

and I feel like this applies to recruiting: however bad you think the average applicant is, they're worse than that.


I ask candidates to implement a simple dynamic data structure, and I even describe it first, so there is nothing to memorize. It turns out many people don't know how to set up classes or data structures, even when you describe it to them first. Forget about computational complexity.
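The comment doesn't name the structure, but as a hypothetical illustration of the kind of exercise described (class and method names are my own invention), a minimal singly linked list is about the level of difficulty implied:

```python
class Node:
    """One element of a singly linked list."""
    def __init__(self, value):
        self.value = value
        self.next = None


class LinkedList:
    """A minimal dynamic data structure: grows one node at a time."""
    def __init__(self):
        self.head = None

    def append(self, value):
        """Walk to the tail and attach a new node."""
        node = Node(value)
        if self.head is None:
            self.head = node
            return
        cur = self.head
        while cur.next is not None:
            cur = cur.next
        cur.next = node

    def contains(self, value):
        """Linear scan for a value."""
        cur = self.head
        while cur is not None:
            if cur.value == value:
                return True
            cur = cur.next
        return False
```

Setting up the two classes and wiring the `next` pointers is exactly the part the comment says many candidates cannot do, even with the structure described to them.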


Triplebyte used to send candidates a link to a page which basically listed everything that was in the interview. Most candidates didn’t read it. They could probably have done away with the interview entirely and just put a link at the bottom of the prep material saying “click here to pass the interview”.

If anything, I think it would have lowered the Triplebyte passing rate.


Why not try this as an experimental filter then and see what sort of population pass?


Well, Triplebyte is dead. That’s one reason not to do the experiment.

Also, arguably the “customer” for Triplebyte was the company that eventually hires the candidate. They’re paying in part for the screening process Triplebyte did. We wouldn’t have been doing our job if we skipped the interview part of the process.


This sounds like what a lot of companies do, except the problem with this approach at scale (and with the certification approach of the grandparent) is that most companies want to avoid candidates who've memorized a specific solution, since then they get no data about whether the candidate can code anything beyond what was memorized.

The other problem is that implementing bubble sort will tell you about their skills in a particular dimension, but being a software engineer these days may look very different depending on the job.


I do a tree-search-ish thing when interviewing people. I’ll start with a super basic question about something beginner-ish on their resume. If they can’t answer that, the interview is politely wrapped up. I’ve eliminated a surprising number of people who had jQuery on their resume by asking them to write code that will make a div with the id “mydiv” disappear if the user clicks a button with the id “mybutton”.

After that I ask a super difficult or niche trivia question like “in CSS, what two overflow properties cannot be used together?” If I’m hiring a mid-level frontend developer and they nail that one, I go “fantastic, great answer, do you have any questions for us?” And the interview can end early.

But if they miss that, no sweat, I’ll start asking more mid-level technical questions to figure out where they’re at.


It's also a mostly useless problem for determining engineer quality, in many cases.

It tests for pure coding ability, when most organizations should be optimizing for engineers who are going to be productive-if-not-spectacular and who can design and build maintainable systems.

Could I have written the above problem back in my engineering days? Probably not, since I went years not working with threads. But I also wasn't working on problems that would ever have benefited from knowing that. Most software engineering roles are essentially building CRUD or ETL systems, maybe with a user interface. Any coding problems given should be optimized for weeding out bozos (which are still plentiful), not for weeding out the most people.


I find picking good questions is hard, and many fall into similar patterns, making them something candidates can practice for.

Even your question isn't something I'd necessarily ask on the spot. Many engineers (web devs, for example) don't use parallelism in their day-to-day work. The part about making it efficient is interesting, but it feels borderline like a trick question that a good engineer could fumble.


> Even your question isn't something I'd necessarily ask on the spot. Many engineers (web devs, for example) don't use parallelism in their day-to-day work. The part about making it efficient is interesting, but it feels borderline like a trick question that a good engineer could fumble.

True, it's more of a backend-role question. The reason I threw it in there is my assumption that a leetcode grinder would be very likely to immediately say, "well, the most efficient number of threads is log(n)" or "the most efficient number of threads is the square root of n" or some other plausible-sounding BS answer. The reason I chose bubble sort is that it's so simple to understand that you can fairly easily (I would hope) figure out there's no benefit to more threads than CPU cores at all, as long as you stop and actually think about what it is doing.


What are you trying to achieve here? Are you looking to remove a bunch of competent developers by asking weird trick questions?


Trick questions are a waste of everyone’s time, IMHO, and they were something I made sure not to use when I took over the hiring process at my job.

It’s a form of hazing, and it rarely has any connection to the day-to-day work you do in the job. All of our tests and questions relate directly to the work we will ask you to do in the job; anything else is just trying to be clever, and I dislike cleverness in both job interviews and code in general. Cleverness almost always means inscrutable and unmaintainable.


1. It's the simplest possible sorting algorithm (bubble sort). Compare two adjacent elements, swap them if they're out of order, go down the list, repeat. If a developer doesn't know that algorithm, what algorithm could they possibly know? It's the lowest bar.

2. Two languages of your choice is enough to show you aren't a frameworker and can think in more than one box. It doesn't matter if you do it in JavaScript and C#, Go and Rust, or Python and Haskell. It's a chance to show off, while being quite attainable for a competent developer.

3. The parallelism trick question merely shows that you actually understand what you are talking about. A leetcoder might fall into the trap of assuming it's log(n), or the square root of n, or some other overly-thought-through math that is bogus. If you think it through, though, bubble sort is simple enough that it's very easy to realize (in my opinion) why more threads than CPU cores doesn't really help.


If I want to separate the real algorithm masters from the rest I’d avoid questions about sorts with total ordering but instead cover algorithms with partial ordering such as topological sort, BSP trees and such which are super-beautiful and I think quite useful but obscure.

Bubble sort is quite interesting from an algorithm-analysis perspective (prove it completes, prove it gets the right answer, prove how it scales with N), but I’d almost rather a programmer I work with not know about it from a coding perspective, because it increases the chance they code up a bubble sort rather than use the sort function that comes with the standard library.


> or some other overly-thought-through math that is bogus.

So the mathematically-correct answer would be a minus while "in my opinion" would be a plus?

I think your interview question leads you to your intended answer: the reason every company makes its own process is that what's good for you would probably get you an instant fail at NASA, and would get you labeled as "experience not relevant" in some research settings.


> So the mathematically-correct answer would be a minus while "in my opinion" would be a plus?

My opinion was only that it would be relatively easy to figure out. As far as I know, it is a certainty that splitting a bubble sort across more (or fewer) threads than CPU cores is less efficient. 16 threads on an 8-core CPU just means juggling 2 threads on every core, each doing its own sorting and needing eventual merging back together. There's no way that can be more efficient.


If you have hyperthreading, then a low-IPC workload like this is optimally run on twice the number of physical cores.


What does parallelism have to do with big-O? And I do wonder how a parallel bubble sort would be written to your standards in, for example, Python.


I don't think we are over-engineering it. You want to "weed out" everyone but the best candidate for the role, or the best candidates for your open roles. It's a very hard problem to identify the "best" person in a group. It would be different if all programmers who are good enough were the same, but we all know the skill ceiling in programming is very high. Selecting the best devs is a critical function for any software project.


What's your approach for parallelizing bubble sort in a way that profits from cores beyond the first?


It's probably alternating the comparisons (odd-even transposition sort).

Compare [0, 1] [2, 3] [4, 5] ... in parallel and swap if necessary, compare/swap [1, 2] [3, 4] [5, 6] ... in parallel, then go back and forth until no more swaps are made - second element in pair is always greater/less than the first.

That does suggest that the theoretical ideal number of threads is n / 2 ignoring cores, though you'll also want to consider things like cache line size, cache coherency, and actual problem size so it's probably less.
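A minimal single-threaded sketch of that alternating scheme in Python: the compare/swap pairs within each phase touch disjoint indices, which is exactly what would let a phase's comparisons run in parallel.

```python
def odd_even_sort(a):
    """Odd-even transposition sort.

    Alternate between comparing pairs (0,1), (2,3), ... (even phase)
    and (1,2), (3,4), ... (odd phase) until a full even+odd pass
    makes no swaps. Within one phase, every pair is independent,
    so all of a phase's compare/swaps could execute concurrently.
    """
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for start in (0, 1):  # even phase, then odd phase
            for i in range(start, n - 1, 2):  # disjoint pairs
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
    return a
```

With up to n/2 independent comparisons per phase, the thread-count discussion above is about how many of those comparisons you can usefully run at once on real hardware.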

At the end of the day, the important thing would be to actually benchmark your attempts, see how they scale with processor count and problem size, and check the isoefficiency.

I think it was a bad question.



