He says "You paid $100 million and then it made $200 million of revenue. There's some cost to inference with the model, but let's just assume in this cartoonish cartoon example that even if you add those two up, you're kind of in a good state. So, if every model was a company, the model is actually, in this example is actually profitable. What's going on is that at the same time"
Importantly, you'll notice that he's talking revenue, and assumes that inference is cheap enough/profitable enough that $100M + Inference_Cost_Over_Lifetime < $200M
Well that's a problem the software industry has been building for itself for decades.
Software has, since at least the adoption of "agile", created an industry culture of not just refusing to build to specs but of insisting that specs are impossible to get from a customer.
Agile hasn't been insisting that specs are impossible to get from a customer. It has been insisting that getting specs from a customer is best performed as a dynamic process. In my opinion, that's one of agile's most significant contributions. It lines up with a learning process that doesn't assume the programmer or the customer knows the best course ahead of time.
I have found that it works well as an open-endedly dynamic process when you are doing the kind of work that the people who came up with Scrum did as their bread and butter: limited-term contract jobs that were small enough to be handled by a single pizza-sized team and whose design challenges mostly don't stray too far outside the Cynefin clear domain.
The less any of those applies, the more costly it is to figure it out as you go along, because accounting for design changes can become something of a game of crack the whip. Iterative design is still important under such circumstances, but it may need to be a more thoughtful form of iteration that’s actively mindful about which kinds of design decisions should be front-loaded and which ones can be delayed.
You definitely need limits around it. Especially as a consulting team. It's not for open ended projects, and if you use it for open ended projects as a consultant you're in for a world of hurt. On the consultant side, hard scope limits are a must.
And I completely agree that requirement proximity estimation is a critical skill. I do think estimating proximity to the requirements is a much easier task than estimating time.
And good luck when you get misaligned specs (communication issues on the customer side, docs that are not aligned with the product, ...). Drafting specs and investigating failures will require both a diplomat hat and a detective hat. Maybe with the developer hat added, we will see DDD become meaningful again.
I don’t want to put words in your mouth but I think I agree. It’s called requirements engineering. It’s hard, but it’s possible, and waterfall works fine for many domains. The agile teams I see are burning resources doing the same thing 2-3x, or sprinting their way into major, costly architectural mistakes that would have been easily avoided by upfront planning and specs.
Agile is a pretty badly defined beast at the best of times, but even the most twisted interpretation doesn't mean that. It's mainly just a rejection of BDUF (big design up front).
I suspect that you are not only ignoring the existing safeguards that have already come of those discussions, but I suspect you’re also ignoring or pretending like those public discussions never happened in the first place.
Furthermore, I suspect you’re also trivializing what is and is not in contention with moral issues as these companies are trying to compete against each other.
I also think you’re probably assuming the slower options are the safer options because you haven’t really considered the risks of ceding power/investment to a less scrupulous competitor.
I’m not claiming any of these men are moral upstanding people or that they’ve done enough.
I think people should be very critical, but they should at least make the effort to ENGAGE in the moral issues and consequences.
Your cheap four word response only adds cheap rhetoric to the conversation.
If you really care about the moral issues, start typing.
I mean, maybe things have changed (I finished college about 20 years ago), but I don't remember producing large volumes of stuff as being a particularly important part of a CS degree.
Between a challenging job market, ever-expanding frontiers to learn (AI, MLOps, parallel hardware), and an average mind like mine, a tool that increases throughput is likely to be adopted by the masses, whether you like it or not. Quality is not a concern for most; passing and getting an A is (most of my professors actively encourage using LLMs for reports/code generation/presentations).
"higher speed" isn't an advantage for an encyclopedia.
The fact that Musk's derangement is clear from reading Grokipedia articles shows that LLMs are not impervious to ego. Combine easily ego-driven writing with "higher speed" and all you get is even worse debates.
What I meant is this may be a good real-world litmus test. I don't claim to know if there are differences between her words and actions - I have not followed her closely. But I always like 'tests' like this for heads of media orgs, as free speech (Free Speech) imo needs to be the backbone of those orgs.
Although she is apparently not a fan of Jimmy Kimmel as a comedian, her Free Press objected to his suspension: "... the FCC’s coercion undermines our most fundamental values"
"Centrist" is an utterly meaningless term, as the only thing it implies is not being one of the two major-party extremes. You can call me a centrist, with my views being anchored in a libertarian perspective. Back a few decades ago, when the major parties' Venn diagrams overlapped a bit more, you could call people at the intersection of the parties' authoritarian policies centrists. And as for Bari Weiss, you can call her a centrist because she will do the bidding of her employer regardless of which party's administration they are currently bribing.
"Don't anthropomorphize the lawnmower" includes not anthropomorphizing its individual parts, like the blades. Even when those blades are swapped out for new ones, re-sharpened, and put onto a different lawnmower.
Trump, while an objectively horrible person who belongs in prison for many distinct types of crime, is primarily a minstrel for people to hate on. While he is (unfortunately) a good first-pass litmus test for an individual's politics/intelligence, criticizing him is not really the same as critiquing all of the entrenched interests that installed and continue to enable him.
(defun f (x)
  (let ((y x))
    (setf y (* y x))
    (block foo
      (if (minusp y)
          (return-from foo y))
      (loop :for i :from 1 :to 10 :do
        ...
This is absolutely typical bog-standard left-to-right top-to-bottom structured programming type code. It also must be executed like so:
- Define the function
- Bind the variable
- Mutate the variable
- Set up a named block
- Do a conditional return
- Run a loop
- ...
The order of execution literally matches the order it's written in. But, like almost all other languages on the planet, expressions are evaluated inside-out.
Haskell's whole raison d'être is to allow arbitrary nesting and substitution of terms, where any or none of these terms may be evaluated depending on need. De-nesting happens through a copious number of syntactic forms for binding names to values: sometimes before the expression (via let), sometimes after the expression (via where), and sometimes in the middle of an expression (via do).
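A minimal sketch of the three binding forms just mentioned (function names here are invented for illustration, not from the thread):

```haskell
-- let: the binding is introduced before the expression that uses it
fromLet :: Int -> Int
fromLet x = let y = x * x in y + 1

-- where: the binding is introduced after the expression that uses it
fromWhere :: Int -> Int
fromWhere x = y + 1
  where y = x * x

-- do: a let binding dropped into the middle of a monadic
-- sequence (Maybe monad here, just to have a concrete monad)
fromDo :: Int -> Maybe Int
fromDo x = do
  let y = x * x
  pure (y + 1)
```

All three compute the same square-plus-one, and thanks to laziness, `y` is only evaluated if the result is actually demanded.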