Well, it's likely a little more complicated than that.
Either way, my model is 20, 200, 2000 hours - three rough milestones where you progress through different phases.
Going on about 10,000 hours for skills isn't meaningful for most people, because almost no one will use that info to decide "this only takes 10,000 hours, I think I'll master it." It only applies to the people who don't need the advice, because they're naturally on the path of doing it anyway.
But 2000 hours you can sit down and have a real think about. After that it's diminishing returns - that last 5%~10% will take 5x as long.
Yeah, roughly 2 ~ 4 years. Call it 3. That's how I tend to think about skill acquisition.
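To make the 2000-hour figure concrete, here's a quick sketch; the hours-per-day values are my own illustrative assumptions, not numbers from the comment:

```python
# Convert the 2000-hour milestone into calendar years at a few
# assumed daily practice rates (illustrative, not from the comment).
MILESTONE_HOURS = 2000

for hours_per_day in (1.5, 2.0, 3.0):
    years = MILESTONE_HOURS / (hours_per_day * 365)
    print(f"{hours_per_day:.1f} h/day -> {years:.1f} years")
```

At 1.5 to 3 focused hours a day this lands squarely in the 2~4 year window, which is where the "call it 3" comes from.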
Just personal experience of getting good at things over the years. It's a rough model and I'm happy for people to disagree if they want, but the numbers aren't hard and fast - they're more of a ballpark heuristic.
It's sort of a 4-tier order-of-magnitude pattern I suppose.
Ericsson's original number was more like 10 years of daily deliberate practice, increasing from 0.5 to a max of 4 hours per day, IIRC, for experienced people. It was roughly observational, based on violinists and pianists. The original paper was interesting.
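As a rough sanity check of those numbers - assuming a linear ramp from 0.5 to 4 hours/day over 10 years, which is my own simplification, not how Ericsson's data actually looked:

```python
# Total practice hours if daily practice ramps linearly from 0.5
# to 4 hours over 10 years (the linear ramp is an assumption).
days = 10 * 365
start, end = 0.5, 4.0
total = sum(start + (end - start) * d / (days - 1) for d in range(days))
print(f"~{total:,.0f} hours")  # same ballpark as the famous 10,000
```

So the "10 years of ramping daily practice" and the "10,000 hours" figures are roughly the same claim stated two ways.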
The New York Times claimed that Facebook was sharing private data with Huawei, "a telecommunications equipment company that has been flagged by American intelligence officials as a national security threat", when in actual fact the data was shared with the friends those people had chosen to share the data with, via the smartphones in those friends' pockets.
Though I actually meant to link to the previous article in the series, which sets out the New York Times' astounding spin more clearly - this is the one where they claim that it was somehow an attack on user privacy to not make the setting which stopped companies like Cambridge Analytica getting all your data also force all your friends to install the official Facebook app to interact with you: https://www.nytimes.com/interactive/2018/06/03/technology/fa...
Like, in one part of the article they literally had network logs of their reporter's Blackberry device, after he logged into Facebook on it, pulling down information from Facebook which he was authorised to access, directly to that device - and they presented this as though it was proof that Facebook was being incredibly dishonest in not treating it as though they were giving Blackberry, a third party, access to all that incredibly sensitive personal data. They took advantage of the fact that most people are too technically clueless to understand that Blackberry in actual fact didn't have that data in any way, shape or form - that it never left the pocket of the person who was granted access to it - and they knew it.
> that it never left the pocket of the person who was granted access to it and they knew it.
How do they (you) know that? How hard would it be for Huawei or Blackberry to exfiltrate that data?
What that and other incidents show is that Facebook had a widespread pattern of sharing data and trying to control the reach of that data through contracts and legal power rather than actually controlling it. This strategy makes leaks inevitable.
While it was likely easier to exfiltrate data this way, you can rest assured that if they want to exfiltrate it, as the producers of the hardware and the firmware, they can easily do it.
They control the kernel. They can read it from the screen, from the app memory, from the TCP stream with a minor patch to the TLS code. Seriously, the "how do you know" rabbit hole has to start from first principles.
You don't know. And the special API made it easier, if it did happen. But it was not an enabler or anything - if they wanted to, they could have done it without Facebook's help, before it, or alongside it.
That person meant that if all the main browsers move to Chromium, then it will dominate and cause a monoculture; the same way that Internet Explorer caused a monoculture in the 90s and 00s.
Pointing out that there are competitors doesn't change anything when their presence isn't enough to overthrow the monoculture. After all, Gecko was around for a long time, and Firefox did become very popular very quickly after version 2; but realistically, it took until Apple put WebKit on iOS for any real dents to be made.
Also, I think Konqueror still uses KHTML by default. I don't know if Konqueror tracks WebKit, but I think they're rather different these days.
Why are you citing Safari's 14% market share when the person to whom you responded was stating that, if it's true that Edge will become Chromium-based, then browsers running on Chromium will represent something like 66% of the world's web engine share? Heck, it's already 59% - over half. With Opera, Vivaldi, and Edge (or whatever Anaheim will become), we're talking two thirds of all browsers being based on one engine.
That's the monoculture.
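The arithmetic behind the two-thirds figure, spelled out - the ~59% total is from the thread, but the slice Edge would add is my own rough assumption:

```python
# Rough engine-share tally behind the "two thirds" claim. Only the
# ~59% total comes from the thread; Edge's slice is an assumption.
chromium_today = 59       # Chrome, Opera, Vivaldi, etc. combined
edge_if_it_switches = 7   # assumed share Edge would bring over
print(chromium_today + edge_if_it_switches)  # -> 66, about two thirds
```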
Edit: didn't realise you were responding to someone refuting what you had said, haha. So basically we agree - I just thought you were someone else.
I'm saying that WebKit can't be a monoculture because Safari only has 14% market share. It doesn't matter if other devices and browsers that no one actually uses are WebKit.
Gotcha. Yeah, it seems a lot of people are stuck on the outdated idea that WebKit dominates. It used to dominate in mobile, and certainly a lot of WebKit-only extensions eventually became standards - but those days are behind us for everything other than iOS.
And while iOS has the greatest market share for mobile, you're right, WebKit doesn't have the greatest market share for general browsers. That's definitely Chromium, now on Windows, macOS, Linux, iOS, Android, your mum's toaster, just about every 'native' web app…
Side note: I guess we should really be saying Blink - that's the name of the engine. But even if, say, you use Google Chrome on iOS (backed by WebKit), you're usually also a Google Chrome user on the desktop, where it's backed by Blink.
So how useless exactly were they? As long as you are looking for a correct answer, not a "right" one, they are a very good metric for testing problem-solving skills.
I got off to a really bad start in a job interview using a question like that once - with a guy looking for a 'right' answer, and it didn't go down well when I challenged his assumptions.
He asked how many plumbers worked in the city, to which I replied that you could check the industry registry for qualified plumbers and probably filter them by city. There was silence; then the question was clarified to how many 'plumbing businesses' were there, not individual plumbers.
To which I replied that you could try the company registrar's office, but it was impossible to calculate exactly, as so many plumbers work full time while also holding businesses of their own as free agents. A very unimpressed look came across the guy's face, and I was told there was a very simple way to find out and asked to try again.
I sat in silence for 30 seconds or so, trying to think of something that would be more thorough than the registry offices, then offered a few alternatives like tax department records and the government statistics office - all the things I could think of that would keep fine-grained data. But I could see the guy growing impatient with me, so I stared at him and asked what a better metric was than what I had offered.
After a few moments I was told the correct answer was to check the phone book: any practicing plumbing business would be listed.
Startled by what seemed like a completely faulty answer, I pointed out what seemed obvious to me: not every business needs a public listing, some deal directly as subcontractors, some could be umbrella companies for subbies, some are free agents, some might use unlisted cellphones, not everyone is a legal company, and not all plumbers were qualified. It was a terrible way to get a dataset you could rely on.
Anger swept across the guy's face and I was told sternly that I was wrong, the data was perfectly suitable, on to the next question... which was all downhill from there, as he didn't want to hear my answers or challenge me back - he just ripped through the rest.
To this day I laugh whenever I think back to that interview. It was probably the most uncomfortable interview I've ever been in.
So, is Google admitting the questions are hopeless, or are they saying that their interviewers' reactions to the answers to those questions are hopeless?
Because fixing interviews is harder than just working out what questions to ask.
I dunno about Google, but my experience was more copycat behaviour by someone that didn't get the purpose of it, I think... maybe I was to blame as well, as I pushed back expecting to be challenged more... not just told I was wrong.
It ended up worse than useless for both of us involved.
Dodged a bullet - anyone can see your answers were at least as good as the phone book one. Nothing worse than a manager who relies on authority to back up their flawed decisions just to spare their own ego. Sounds like a toxic org culture.
If a candidate solves a puzzle, it tells you a bit about the candidate, but if the candidate fails to solve the puzzle, it tells you more or less nothing, and a large part of the interview is wasted.
As noted elsewhere, these usually make the hard part harder.
The other part of the issue is building the internal representation so that halfway decent code can be generated, not even thinking about optimization.
And, like the syntax and grammar of the language, the semantics of C++ are quite complex.
If you have a functional bent, Simon Peyton Jones' book (https://research.microsoft.com/en-us/um/people/simonpj/Paper...) is worth reading, too. His book, however, is not a complete treatment. It assumes you know e.g. how to write a parser, and concentrates on the challenges unique to lazy functional languages.