Sure, for instance, if all of them go through a 1-hour AI interview, then you might find a better candidate, at the cost of 1,000 man-hours of work. You hire that person, another company opens a position, gets 999 applicants, sends them all its own AI interview, and so forth.
How much better would your hire be, considering that you managed to check all 1,000 of them rather than just 50?
Assume that candidate fitness is a number normally distributed around 0 (half of them obviously being negative), that both you and the AI can perfectly pick out the best candidate, and that you picked the 50 to interview completely at random. The best of 1,000 actually seems to come out around 40% better on average. Surprisingly decent. Is that improvement worth 1,000 man-hours?
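That ~40% figure is easy to sanity-check with a quick Monte Carlo run. A minimal sketch, assuming standard-normal fitness and a uniformly random sample of 50 (the function name and trial count are my own choices, not from the original claim):

```python
import random

def expected_best(pool_size, trials=2000, seed=1):
    """Average of the maximum fitness over many simulated candidate pools."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(0.0, 1.0) for _ in range(pool_size))
    return total / trials

best_of_50 = expected_best(50)      # roughly 2.25 standard deviations
best_of_1000 = expected_best(1000)  # roughly 3.24 standard deviations
print(f"best of 50:   {best_of_50:.2f}")
print(f"best of 1000: {best_of_1000:.2f}")
print(f"improvement:  {best_of_1000 / best_of_50 - 1:.0%}")
```

The ratio lands in the 40-45% range. Note the percentage only reads naturally because the distribution is centered at 0; the absolute gap is about one standard deviation of fitness.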
So, attempt two: maybe instead of each company sending candidates through its own interview, there should be a common gatekeeper. All working-age people take the same 1-hour AI interview, and the glorious overseer assigns them to the position they are best suited for.
(An actual answer here is that you assess how important it is to get "the best candidate", and you interview enough people to get a reasonable approximation. The hour cost on your side is what keeps you honest. If wasting candidate time is free for you, you're going to waste 500 man-hours of other people's work for a 5% better result for yourself.)
Agreed. Your post mirrors mine from last year [0] surprisingly well - did you, by chance, happen to come across it? I said "you’d likely get approximately the same quality of candidates from a randomly selected 50 out of 5,000". And the main idea I was trying to convey is parallel to yours too: companies, consciously or not, try to find "the best candidate" while using a completely inadequate "hiring funnel" approach. If that is their genuine need, they would be better off with a "headhunting" approach. And the whole industry, businesses and candidates alike, would be better off if businesses recognized that the difference between "best" and "second best", or even "third best", is not meaningful (outside the C-suite) and not worth exponentially higher spend.
- [0] https://news.ycombinator.com/item?id=45271736 (believe it or not, the em-dashes are all mine, even though I now regret putting my original draft through an LLM - even if it was only for grammar-checking)
I absolutely agree in principle, but I understand that companies are also seeing a lot more applicants trying to skate past screening and interviews with AI assistance.
Connecting verified humans for a mutually respectful chat is a trust problem that companies like LinkedIn should be creating solutions for, instead of offering both sides automated shovels to shovel slop faster.
> They are the ones who started using AI in the hiring process
Aren't you ignoring the reports of companies receiving thousands of ChatGPT-written resumes, bots sending applications, and interviews with applicants being live coached by AI?
Probably more like the long tail of software: software created for a particular purpose in a particular domain by a single person in the company who also happened to know programming - maybe just as Excel macros.
I strongly suspect the long tail is shifting and expanding now, and will eventually consist mostly of software for one-off purposes, authored by people who don't know how to code and probably have a poor understanding of how it actually works.
Hm, yes, that makes sense. If AI "makes" software more and more composable, then yes, most software will be a thin wrapper over some ancient machinery that no one understands :)
I guess in some sense this is already the case. Most developers are not "full stack" (and the job postings that describe a software MacGyver are ridiculed like clockwork), but with AI this is actually becoming more and more possible (and thus normal, or at least normalized). And of course software is eating the world, including itself, so the common problems are all SaaS-ified (and/or FOSS-ified), allowing AI-aided development to offload the instrumental dependencies.
- 90 days is a very long time to keep keys, I'd expect rotation maybe between 10 minutes and a day? I don't see any justification for this in the article.
- There's no need to keep any private keys except the current signing key and maybe an upcoming key. Old keys should be deleted on rotation, not just left to eventually expire.
- https://github.com/aaroncpina/Aaron.Pina.Blog.Article.08/blob/776e3b365d177ed3b779242181f0045cd6387b3f/Aaron.Pina.Blog.Article.08.Server/Program.cs#L70-L77 - You're not allowed to get a new token if you already have one? That's unworkable - what if you want to log in on a new device? Or what if the client fails to receive the token after the server sends it, the classic snag with use-only-once tokens?
- A fun thing about setting an expiry on the keys is that it makes them eligible for eviction with Redis' standard volatile-lru policy. You can configure this, but it would make me nervous.
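The delete-on-rotation point can be sketched with a plain dict standing in for Redis (all key names, the `state` shape, and the helper functions here are hypothetical illustrations; in real code the writes would be Redis SET/DEL calls):

```python
import itertools
import secrets

store = {}                    # stands in for Redis
_counter = itertools.count(1)

def new_key():
    """Mint a signing key under a fresh key id (kid) and store it."""
    kid = f"key-{next(_counter)}"
    store[f"signing-key:{kid}"] = secrets.token_hex(32)
    return kid

def rotate(state):
    """Promote the upcoming key to current, mint a new upcoming key,
    and DELETE the outgoing key rather than leaving it to expire."""
    old = state["current"]
    state["current"] = state["next"]
    state["next"] = new_key()
    del store[f"signing-key:{old}"]   # in Redis: DEL signing-key:<old>

state = {"current": new_key(), "next": new_key()}
rotate(state)
# Only the current and upcoming keys survive a rotation.
```

Keeping exactly two live keys also sidesteps the eviction concern above: with no TTLs set, the keys are never candidates for volatile-* eviction.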
I've definitely seen Opus go to town when asked to test a fairly simple builder.
Possibly it inferred something about testing the "contract", and went on to test such properties as
- none of the "final" fields have changed after calling each method
- these two immutable objects we just confirmed differ on a property are not the same object
That was in addition to multiple tests with essentially identical code, multiple test classes with largely duplicated tests, etc.
> Elijah Newren, who wrote git's merge-ort (the default merge strategy), reviewed weave and said language-aware content merging is the right approach, that he's been asked about it enough times to be certain there's demand, and that our fallback-to-line-level strategy for unsupported languages is "a very reasonable way to tackle the problem." Taylor Blau from the Git team said he's "really impressed" and connected us with Elijah. The creator of libgit2 starred the repo. Martin von Zweigbergk (creator of jj) has also been excited about the direction.
Are any of these statements public, or is this all private communication?
> We are also working with GitButler team to integrate it as a research feature.
I'll wager that 95% of inflammatory and unhelpful comments aren't written by "bad faith actors" as you define them, but by ordinary people carried away by emotion or mob sentiment.
Just a reminder that "this probably isn't worth replying to" should help a lot. But alas, it would directly reduce precious engagement.
> In GitHub, you have to switch tabs (which is slow and distracting) to go between the PR summary and the code.
As a case study of Github UI friction, take merging a Dependabot PR from the PRs tab, with code approval required before merges. By my count this takes 6 clicks, and none of them approach a 'snappy' response time.
This is for a mostly trivial single-line diff. The entire thing could be one click: a hover preview on the PR list and an "approve and merge" button.
(To list them out: Click PR, "Files changed", "Submit review", "Approve", "Submit Review", "Merge")
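For what it's worth, the six clicks collapse to two commands with the gh CLI, assuming you have review permission on the repo (the PR number is illustrative; the `echo` makes this a dry run - drop it to actually approve and merge):

```shell
#!/bin/sh
# Dry-run sketch of the two gh commands replacing the six-click flow.
PR=1234   # illustrative PR number
echo gh pr review "$PR" --approve
echo gh pr merge "$PR" --squash --delete-branch
```

That still isn't the one-click hover UI, but it is at least snappy.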