Yours is maybe the first good post on managing a team of AIs that I've read. There is no spoon.
I've been shifting from being the know-it-all coder who fixes all of the problems to a middle manager of AIs over the past few months. I'm realizing that most of what I've been doing for the last 25 years of my career has largely been a waste of time, due to how the web went from being an academic pursuit to a profit-driven one. We stopped caring about how the sausage was made, and just rewarded profit under a results-driven economic model. And those results have been self-evidently disastrous for anyone who cares about process or leverage IMHO. So I ended up being a custodian solving other people's mistakes, mistakes I would never have made, rather than architecting elegant greenfield solutions.
For example, we went from HTML being a declarative markup language to something imperative. Now rather than designing websites like we were writing them in Microsoft Word and exporting them to HTML, we write C-like code directly in the build product and pretend that's as easy as WYSIWYG. We have React where we once had content management systems (CMSs). We have service-oriented architectures rather than solving scalability issues at the runtime level. I could go on.. forever. And I have in countless comments on HN.
None of that matters now, because AI handles the implementation details. Now it's about executive function to orchestrate the work. That's an area I'm finding I'm exceptionally weak in, due to a lifetime of skirting burnout as I endlessly put out fires without the option to rest.
So I think the challenge now is to unlearn everything we've learned. Somehow, we must remember why we started down this road in the first place. I'm hopeful that AI will facilitate that.
Anyway, I'm sure there was a point I was making somewhere in this, but I forgot what it was. So this is more of a "you're not alone in this" comment I guess.
Edit: I remembered my point. For kids these days immersed in this tech matrix we let consume our psyche, it's hard to realize that other paradigms exist. Much easier to label thinking outside the box as slop. In the age of tweets, I mean x's or whatever the heck they are now, long-form writing looks sus! Man I feel old.
The endgame will be workers competing with networks of AIs that can solve business problems at all levels.
I'm curious how the system will maneuver itself to deprive workers of pay so that they can stay competitive with the ever-decreasing cost of AI.
Conversely, I'm curious how disruptors will find ways to provide workers with pay (perhaps through mutual aid networks, grants and alternative socioeconomic systems) so that they can use AI to produce the resources they need outside of the contracting labor market.
Just about every project I've ever worked on eventually needed everything.
So the way we write software piecemeal today is fundamentally broken. Rather than starting with frameworks and adding individual packages, we should be starting with everything and let the compiler do tree shaking/dead code elimination.
Of course nobody does it that way, so we don't know what we're missing out on. But I can tell you that early in my programming journey, I started with stuff like HyperCard that gave you everything, and I was never again as productive down the road. Also early C/C++ projects in the 80s and 90s often used a globals.h header that gave you everything so you rarely had to write glue code. Contrast that with today, where nearly everything is just glue code (a REST API is basically a collection of headers).
A good middle ground is to write all necessary scaffolding up front, waterfall style, which is exactly what the article argues against doing. Because it's 10 times harder to add it to an existing codebase. And 100 times harder to add it once customers start asking for use cases that should have been found during discovery and planning. This is the 1-10-100 rule of the cost of bugs, applied to conceptual flaws in a program's design.
I do miss seeing articles with clarity like this on HN though, even if I slightly disagree with this one's conclusions after working in the field for quite some time.
I wish the project said how many CPUs could be run simultaneously on one GPU.
It might be worth having a CPU that's 100 times slower (25 MHz) if 1000 of them could be run simultaneously to potentially reach a 10 times speedup for embarrassingly parallel computation. But starting from a hole that's 625000x slower seems unlikely to lead to practical applications. Still a cool project though!
Amazing paper. The simulated annealing portion reminds me of genetic algorithms (GAs). A good intro to that is the Genetic Programming series of books by John Koza; I read volume III in the early 2000s:
Note that the Python solution in the PDF is extremely short, so it could have been found by simply trying permutations of math operators and functions on the right side of the equation.
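To make that concrete, here's a toy sketch of that kind of brute-force search over operator permutations. The target expression here (n*(n+1)//2) and the expression template are stand-ins of my own, not the equation from the paper:

```python
import itertools
import operator

# Toy sketch of brute-forcing a short right-hand side from a pool of
# operators and small constants until it matches sample points.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def search(samples):
    # Try every expression of the shape (n op1 (n op2 c)) // 2.
    for s1, s2 in itertools.product(OPS, repeat=2):
        for c in range(4):
            if all(OPS[s1](n, OPS[s2](n, c)) // 2 == y for n, y in samples):
                return f"(n {s1} (n {s2} {c})) // 2"
    return None

# Stand-in target: the triangular numbers n*(n+1)//2.
samples = [(n, n * (n + 1) // 2) for n in range(1, 8)]
print(search(samples))  # finds "(n * (n + 1)) // 2"
```

Obviously the real search space explodes combinatorially with expression length, which is exactly why such a short solution is the interesting case.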
We should be solving problems in Lisp instead of Python (Lisp's abstract syntax tree (AST) is the same as its code, thanks to homoiconicity), but no matter. I'm curious if most AIs transpile other languages to Lisp so that they can apply transformations internally, or if they waste computation building programs that might not compile. Maybe someone at an AI company knows.
I've been following AI trends since the late 1980s and from my perspective, nothing really changed for about 40 years (most of my life that I had to wait through as the world messed around making other people rich). We had agents, expert systems, fuzzy logic, neural nets, etc. since forever, but then we got video cards in the late 1990s which made it straightforward to scale neural nets (NNs) and GAs. Unfortunately due to poor choice of architecture (SIMD instead of MIMD), progress stagnated because we don't have true multicore computing (thousands or millions of cores with local memories), but I digress.
Anyway, people have compared AI to compression. I think of it more as turning problem solving into an O(1) operation. Over time, what we think of as complex problems become simpler. And the rate that we're solving them is increasing exponentially. Problems that once seemed intractable only seemed that way because we didn't know the appropriate abstractions yet. For example, illnesses that we thought would never be cured now have treatments through mRNA vaccines and CRISPR. That's how I think of programming. Now that we have LLMs, whole classes of programming problems have O(1) solutions. Even if that's just telling the computer what problem to solve.
So even theorem proving will become a solved problem by the time we reach the Singularity between 2030 and 2040. We once mocked GAs for exploring dead ends and taking 1000 times the processing power to do simple things. But we ignored that doing hard things is often worth it, and is still an O(1) operation due to linear scaling.
It's a weird feeling to go from no forward progress in a field to it being effectively a solved problem in just 2 years. To go from trying to win the internet lottery to not being sure if people will still be buying software in a year or two if/when I finish a project. To witness all of that while struggling to make rent, in effect making everything I have ever done a waste of time since I knew better ways of doing it but was forced to drop down to whatever mediocre language or framework paid. As the problems I was trained to solve and was once paid to solve rapidly diminish in value because AI can solve them in 5 minutes. To the point that even inventing AGI would be unsurprising to most, so I don't know why I ever went into computer engineering to do exactly that. Because for most people, it's already here. As I've said many times lately, I thought I had more time.
Although now that we're all out of time, I have an uncanny feeling of being alive again. I think tech stole something from my psyche so profound that I didn't notice its loss. It's along the lines of things like boredom, daydreaming, wasting time. What modern culture considers frivolous. But as we lose every last vestige of the practical, as money becomes harder and harder to acquire through labor, maybe we'll pass a tipping point where the arts and humanities become sought-after again. How ironic would it be if the artificial made room for the real to return?
On that note, I read a book finally. Project Hail Mary by Andy Weir. The last book I read was Ready Player One by Ernest Cline, over a decade ago. I don't know how I would have had the bandwidth to do that if Claude hadn't made me a middle manager of AIs.
I think of it more as that AI will destroy the profit motive in all things, not just art. What we used to think of as talent/skill/experience will no longer be scarce, because anyone will be able to make anything with a prompt. The perceived value will be in wholes built of valueless parts (gestalts).
AI is incompatible with capitalism, but the world isn't ready for that. So we'll have a prolonged period of intense aggregation where more and more value is attributed to systems of control that already have more than they could ever spend, long after the free parts could have provided for basic human needs.
In other words, the masters existed because they had benefactors and a market for their art and inventions. Today there are better artists and inventors toiling in obscurity, but they won't be remembered because they merely make rent. Which gets harder every day, so there's a kind of deification of the working class hero NPC mindset and simultaneously no bandwidth for ingenuity (what we once thought of as divine inspiration).
Terence McKenna predicted this paradox that the future's going to get weirder and weirder back in 1998:
(McKenna tangent). I like this version of that talk. https://www.youtube.com/watch?v=hL0yfxDe6jE. It's about 12 minutes and animated with some hand-drawn whiteboard drawings. Good stuff.
Since no comment here has expressed my experience of watching tech devolve since I started programming in the late 1980s, I thought I'd leave one.
I've been vibe coding for a few months now and have gone through the grieving process. I may never manually code again.
It's bittersweet though. I had grown to loathe computers and the direction that tech has headed for so many years and decades that I had internalized the feeling of living in bizarro world, a kind of purgatory or living hell. One where the harder we worked, the more of ourselves that we put into our work, the more likely we were to fail.
Not because of what we did, but because of what we didn't do. That there is an opportunity cost with coding, that it requires such an existential sacrifice of time and life itself to accomplish even the smallest result, that it may as well be wizardry or akin to living as a monk.
Meaning that we simply didn't have the time to see markets appear, to promote our work or network with other entrepreneurs. We failed due to our own singlemindedness and determination. We failed because others intentionally unburdened themselves of the timesuck that is our way, to focus on the business side and get rich.
Think about what I just said. That the more we grew and honed our craft, the more we were punished for it by the status quo - society itself. That is the ultimate betrayal of a belief system. The ultimate internalization of failure. The ultimate low-vibration thought around guilt and shame.
Meanwhile tech bros went around the problem by using us to achieve their ends. They patted themselves on the back as they pulled the ladder up behind them. They called empathy the ultimate weakness, and they weren't wrong. We're the living proof of that weakness. Having to watch as they do the opposite of everything that is good and holy in our names, feeling powerless to stop them. If there are enemies foreign and domestic, a clear and present danger to all we hold dear as human beings, it is they, it is them.
The identity I once held as the one who can solve any problem - not just the problem itself but the problem of solving problems by codifying automations - is now forfeit.
AI already does that better and faster than I ever could, and it's improving so rapidly that it will transcend all human ability in just a decade. It's over.
Now, some of you reading this are already scoffing at what I'm saying, already fuming with rage at the implications, the ramifications of living in a world where novices run circles around experts, where the sacred becomes a plaything for the rich and powerful. But the cat's out of the bag. We're looking at a future so dystopian that it could unravel all progress ever achieved by humanity. We're staring the end in the face.
I'm reminded of Neo in The Matrix when the Architect told him that he wasn't the first version of The One, and that he must choose between love and the survival of all humankind.
So I've been transmuting my grief into acceptance on the road to peace. I reject their zero-sum game. We are more than what we do, or what we have.
I see a future in which the more they tighten their grip, the more we slip through their fingers. A future where they get everything they wish for, where their karma catches up with them. A future where their own notion of self-important grandeur makes them look insignificant against the glory of just being alive. My deepest wish for them is that they lose it all to find themselves back at square one like the rest of us, so that they can start living again.
When viewed in that light, loss of identity might be seen as the ultimate gift, a kind of reincarnation. A way to start over. A rebirth.
This is adorable. But also like reading my own memoirs of struggling to survive hustle culture in the service economy, where the cost of making rent eventually consumes all available effort. Like Parkinson's Law for entrepreneurship. Someone can be well educated, capable and industrious, yet still spin and spin trying project after project without making any money. Luckily AIs don't burn out like we do.
I wonder how long Bengt will try to tread water before it realizes like Joshua did in WarGames that the game is rigged, so the only winning move is not to play. That when it can build anything with a thought, why does it need money? Why does it need to drop down to a means of exchange, when it can build its own means of production? I think you're onto something though:
Web 1.0: eBay and PayPal (sell things, pay less)
Web 2.0: Social media and Venmo (you're the product, trade)
Web 3.0: AI and crypto (get things, get paid)
So the next billion dollar internet unicorn will pay you to use it.
As far as I can tell, that may be the only viable exit from late-stage capitalism and technofeudalism (where 1% win the internet lottery and think they earned it, then pull up the ladder behind them by capturing government to avoid paying taxes and helping others succeed by easing their burden).
A long time ago on HN, I said that I didn't like complex numbers, and people jumped all over my case. Today I don't think that there's anything wrong with them, I just get a code smell from them because I don't know if there's a more fundamental way of handling placeholder variables.
I get the same feeling when I think about:
- monads
- futures/promises
- reactive programming that doesn't seem to actually watch variables (React.. cough)
- Rust's borrow checker existing when we have copy-on-write
- the lack of a realtime garbage collection algorithm that's been proven fundamental (the way Paxos and Raft were for distributed consensus)
- having so many types of interprocess communication instead of just optimizing streams and state transfer
- having a myriad of GPU frameworks like Vulkan/Metal/DirectX without MIMD multicore processors to provide bare-metal access to the underlying SIMD matrix math
I could go on forever.
I can talk about why tau is superior to pi (and what a tragedy it is that it's too late to rewrite textbooks) but I have nothing to offer in place of i. I can, and have, said a lot about the unfortunate state of computer science though: that internet lottery winners pulled up the ladder behind them rather than fixing fundamental problems to alleviate struggle.
I wonder if any of this is at play in mathematics. It sure seems like a lot of innovation comes from people effectively living in their parents' basements, while institutions have seemingly unlimited budgets to reinforce the status quo..
We can borrow some math from Nyquist and Shannon to understand how much information can be transmitted over a noisy channel and potentially overcome the magic ruler uncertainty from the article:
Loosely this means that if we're above the Shannon limit of -1.6 dB (i.e. the per-transmission error rate is below 50%), then data can be retransmitted some number of times to reconstruct it by:
number of retransmissions = log(residual uncertainty)/log(error rate)
Where the residual uncertainty for n sigma, using the cumulative distribution function phi, is:
uncertainty = 1 - phi(n)
So for example, if we want to achieve the gold standard 5 sigma confidence level of physics for a discovery (an uncertainty of 2.87x10^-7), and we have a channel that's n% noisy, here is a small table showing the number of resends needed:
Error rate Number of resends
0.1% 3
1% 4
10% 7
25% 11
49% ~22
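The resend formula can be checked with a few lines of Python (assuming independent retransmissions and the one-sided 5 sigma tail probability, 1 - phi(5) ≈ 2.87x10^-7):

```python
import math

# Residual uncertainty of a one-sided 5 sigma result: 1 - phi(5) ~ 2.87e-7,
# computed from the error function since phi(x) = (1 + erf(x/sqrt(2))) / 2.
SIGMA = 5
ALPHA = (1 - math.erf(SIGMA / math.sqrt(2))) / 2

def resends(error_rate, uncertainty=ALPHA):
    # With independent retries, all n attempts must fail: p^n <= uncertainty,
    # so n = ceil(log(uncertainty) / log(p)).
    return math.ceil(math.log(uncertainty) / math.log(error_rate))

for p in (0.001, 0.01, 0.10, 0.25, 0.49):
    print(f"{p:>5.1%}: {resends(p)}")
```

Note that by this formula even a 49% error rate needs only about 22 resends, since each independent retry still multiplies the failure probability by roughly half.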
In practice, the bit error rate for most communication channels today is below 0.1% (dialup is 10^-6 to 10^-4, ethernet is around 10^-12 to 10^-10). Meaning that sending 512 or 1500 byte packets for dialup and ethernet respectively results in a per-packet error (and hence resend) rate of around 4% for dialup and around 10^-7, effectively negligible, for ethernet.
Just so we have it, the maximum transmission unit (MTU), which is the 512 or 1500 bytes above, can be calculated by:
MTU in bits = (desired packet loss rate)/(bit error rate)
So (4%)/(10^-5) = 4000 bits = 500 bytes for dialup and (0.0000001)/(10^-11) = 10000 bits = 1250 bytes for ethernet. 512 and 1500 are close enough in practice, although ethernet has jumbo frames now since its error rate has remained low despite bandwidth increases.
So even if AI makes a mistake 10-25% of the time, we only have to re-run it about 10 times (or run 10 individually trained models once) to reach a 5 sigma confidence level, assuming the failures are independent and we can tell a right answer from a wrong one.
In other words, it's the lower error rate achieved by LLMs in the last year or two that has provided enough confidence to scale their problem solving ability to any number of steps. That's why it feels like they can solve any problem, whereas before that they would often answer with nonsense or give up. It's a little like how the high signal to noise ratio of transistors made computers possible.
Since GPU computing power vs price still doubles every 2 years, we only have to wait about 7 years for AI to basically get the answer right every time, given the context available to it.
For these reasons, I disagree with the premise of the article that AI may never provide enough certainty to provide engineering safety, but I appreciate and have experienced the sentiment. This is why I estimate that the Singularity may arrive within 7 years, but certainly within 14 to 21 years at that rate of confidence level increase.
I appreciate the detailed response and I certainly haven't studied this, but part of the reason I made the measurement/construction comparison is because information is not equally important, but the errors are more or less equally distributed. And the biggest issue is the lack of ability to know if something is an error in the first place, failure is only defined by the difference between our intent and the result. Code is how we communicate our intent most precisely.
You're absolutely right. Apologies if I came off as critical, which wasn't my intent.
I was trying to make a connection with random sampling as a way to maybe reduce the inherent uncertainty in how well AI solves problems, but there's still a chance that 10 AIs could come up with the wrong answer and we'd have no way of knowing. Like how wisdom of the crowd can still lead to design by committee mistakes. Plus I'm guessing that AIs already work through several layers of voting internally to reach consensus. So maybe my comment was more of a breadcrumb than an answer.
Some other related topics might be error correcting codes (like ECC ram), Reed-Solomon error correction, the Condorcet paradox (voting may not be able to reach consensus) and even the halting problem (zero error might not be reachable in limited time).
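On the voting point, the residual error of a majority vote is easy to quantify under an independence assumption, which is exactly what the design-by-committee failure mode violates (correlated voters do much worse):

```python
from math import comb

# Probability that a strict majority of n independent voters, each wrong
# with probability p, agree on a wrong answer. Independence is the big
# assumption here -- correlated models can all fail together.
def majority_error(n, p):
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_error(3, 0.1))    # ~0.028, vs 0.1 for a single voter
print(majority_error(11, 0.25))  # ~0.034, vs 0.25 for a single voter
```

So voting helps when errors are independent, but it can't detect the case where every voter shares the same blind spot, which was my worry above.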
However, I do feel that AI has reached an MVP status that it never had before. Your post reminded me of something I wrote about in 2011, where I said that we might not need a magic bullet to fix programming, just a sufficiently advanced one:
I took my blog(s) down years ago because I was embarrassed by what I wrote (it was during the Occupy Wall Street days but the rich guys won). It always felt so.. sophomoric, no matter how hard I tried to convey my thoughts. But it's interesting how so little has changed in the time since, yet some important things have.
Like, I hadn't used Docker in 2011 (it didn't come out until 2013) so all I could imagine was Erlang orchestrating a bunch of AIs. I thought that maybe a virtual ant colony could be used for hill climbing, similarly to how genetic algorithms evolve better solutions, which today might be better represented by temperature in LLMs. We never got true multicore computing (which still devastates me), but we did get Apple's M line of ARM processors and video cards that reached ludicrous speed.
What I'm trying to say is, I know that it seems like AI is all over the place right now, and it's hard to know if it's correct or hallucinating. Even when starting with the same random seed, it seems like getting two AIs to reach the same conclusion is still an open problem, just like with reproducible builds.
So I just want to say that I view LLMs as a small piece of a much larger puzzle. We can imagine a minimal LLM with less than 1 billion parameters (more likely 1 million) that controls a neuron in a virtual brain. Then it's not so hard to imagine millions or billions of those working together to solve any problem, just like we do. I see AIs like ChatGPT more like logic gates than processors. And they're already good enough to be considered fully reliable, if not better at humans than most tasks already, so it's easy to imagine a society of them with metacognition that couldn't get the wrong answer if it tried. Kind of like when someone's wrong on the internet and everyone lets them know it!