There's a great discussion with Stephen Wolfram on the Sean Carroll podcast. Listening to it made me think very highly of Wolfram. He's a free-thinking, eccentric mathematician and scientist who got started doing serious work at a very young age. He still has a youthful, creative approach to thought and science. I hope LLMs pair well with his tools.
I'm a fan of his work and person too. Not a fanatic or evangelical level, but I do think he's one of the more historically relevant computer scientists and philosophers working today. I can overlook his occasional arrogance, and recognize that there's a genuine and original thinker who's been pursuing truth and knowledge for decades.
Same here. I've found the "me me me" a bit off-putting over the years, but can't deny that he is a genuinely smart, interesting, and forward thinking person. I especially enjoyed his writings on measuring every aspect of his life [1].
Also, Wolfram (the person and the company) doesn't seem stodgy or stuck in old ways. At least as an outside observer (I'm not a mathematician, nor do I use Wolfram's main tools), they seem to handle new trends with their own unique contributions that augment those trends:
Wolfram Alpha was a genuinely useful and good tool, perfect for the times.
These tools will actually further supercharge LLMs in certain use cases. They've provided multiple ways to adopt them.
Looking forward to see what people will do with this stuff.
He's been in AI-land forever, the whole idea of Wolfram Alpha circa 2009 was to transform natural language into algorithms. I met him briefly in New York when he was on a panel on AI ethics in 2016, and ya, dude is sharp.
He seems to think his time is better spent on software than science. I take it he didn't really crack anything of worth on the physics side, then?
Recently I went back to The Ecstasy of Communication by Jean Baudrillard which I couldn't get through back in the day when I first picked it up. I used Haiku to walk me through the first chapter, and Haiku would not state anything verbatim due to copyright, but if I referenced a sentence it knew it exactly.
If you tell your doctor that a parent had polyps removed (say, recently), that will give you your best chance of getting one. Most likely, if you're in an even remotely progressive area, your doc wants you to have one, but their hands are tied by the insurance company. AFAIK you don't have to provide any proof of your claim re parental polyps.
> but their hands are tied by the insurance company.
Doctors' ability to prescribe or refer is never restricted by an insurance company. If they think a patient should get whatever healthcare, they are free to say it.
Is the intended meaning that health insurance should pay for anything and everything? Even systems where the government pays directly like the UK have parameters under which the government will pay for a procedure or medicine.
Not at all. Patients are free to pay out of pocket for procedures not covered by insurance. An extra colonoscopy (one not classified as medically necessary), while expensive, is within the financial means of most middle-class adults.
In CA, my doctor can refer me to get a Cologuard. But it's private pay, and they want payment up front, since insurance companies don't restrict doctors' ability, only reimbursement.
So they may not be willing (even though they are able) to perform a procedure/test if they aren't confident they'll get paid.
Unfortunately, one of the struggles in old high tech (that's the only area I know; are you also experiencing this?) is that the C-level people don't look at AI and say: LLMs can make an individual 10x more productive, therefore (and this is the part they miss) we can make our tool 10x better. They think: therefore we can lay off 9 people.
(In the semiconductor industry.) We experienced brutal layoffs, arguably due to over-investment in AI products that produce no revenue. So we've had brutal job loss due to AI, just not in the way people expected.
Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate, since they're writing themselves now, so the limitations of human ability are no longer a bottleneck.
Maybe I'm being naive here, but for AI (heck, for any good algorithm) to work well, you need at least loosely defined objectives. I assume it's much more straightforward in semi, but in many industries, once you get into the details, all kinds of incentives start to misalign, and I doubt AI could understand all the nuances.
E.g., I was once tasked with building a new matching algorithm for a trading platform, and upon fully understanding the specs I realized it could be interpreted as a mixed-integer programming problem; the idea got shot down right away because the PM didn't understand it. There are all kinds of limiting factors once you get into the details.
Well, like I said, there are hidden incentives behind the scenes; in my case, the hidden incentive was that the requester/client was one of the company's subpar brokers, and the PM probably decided to offer just an average level of commitment rather than going above and beyond. Hence the plan was to do exactly what the broker wanted, even though that was messy and inferior. You can't write down that kind of motivation on paper anywhere.
---
I said it because I did the analysis and realized that if I implemented the original version, which is basically a crazy way to iteratively solve the MIP problem, it would be much harder to reason about internally and much harder to code correctly. But obviously it keeps the broker happy ("the developer is doing exactly what I said").
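For illustration only (all names and numbers here are hypothetical, not from any real platform): the kind of matching problem described above can be posed as a tiny integer program — choose integer fill quantities against resting orders to maximize matched size. On a toy instance it can even be solved by brute force, which is the brute-force analogue of handing it to a MIP solver:

```python
from itertools import product

# Hypothetical toy instance: match an incoming buy order of size 10
# against three resting sell orders, choosing integer fill quantities.
# As a MIP:  maximize sum(x_i)
#            s.t.     sum(x_i) <= incoming_size
#                     0 <= x_i <= resting_sizes[i],  x_i integer
resting_sizes = [4, 7, 3]
incoming_size = 10

best_total, best_fills = -1, None
# Enumerate every integer fill vector; feasible only for toy sizes.
# A real system would hand this formulation to a MIP solver instead.
for fills in product(*(range(s + 1) for s in resting_sizes)):
    total = sum(fills)
    if total <= incoming_size and total > best_total:
        best_total, best_fills = total, fills

print(best_total, best_fills)  # best_total == 10 (incoming order fully matched)
```

The point is not the brute force itself but that once the objective and constraints are written down this cleanly, an off-the-shelf solver can take over; the "crazy iterative" version hides the same structure in procedural code.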
I think I'm finally realizing that my job probably won't exist in 3-5 years. Things are moving so fast now that the LLMs are basically writing themselves. I think the earlier iterations moved slower because they were limited by human ability and productivity.
IIRC, in The Matrix, Morpheus says something like "... no one knows when exactly the singularity occurred; we think some time in the 2020s".
I always loved that little line. I think that when the singularity occurs, all of the problems in physics will solve themselves, as if in a vacuum, and physics will advance centuries if not millennia in a few picoseconds, and of course time will stop.
Also:
> As t → t_s⁻, the denominator goes to zero. x(t) → ∞. Not a bug. The feature.
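The quoted line describes hyperbolic (finite-time-singularity) growth, presumably something of the form x(t) = C / (t_s − t); the constants C and t_s below are assumptions for illustration. Unlike exponential growth, which stays finite at every finite time, this blows up as t approaches t_s from below:

```python
# Hyperbolic growth sketch (assumed form): x(t) = C / (t_s - t).
# The value grows without bound as t approaches t_s from below.
def x(t, t_s=10.0, C=1.0):
    return C / (t_s - t)

for t in [0.0, 5.0, 9.0, 9.9, 9.99]:
    print(t, x(t))  # each step closer to t_s gives a larger value
```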
> I think that when the singularity occurs all of the problems in physics will solve, like in a vacuum, and physics will advance centuries if not millennia in a few pico-seconds
It doesn't matter how smart you are, you still need to run experiments to do physics. Experiments take nontrivial amounts of time to both run and set up (you can't tunnel a new CERN in picoseconds, again no matter how smart you are). Similarly, the speed of light (= the speed limit of information) and thermodynamics place fundamental limits on computation; I don't think there's any reason at all to believe that intelligence is unbounded.
The "singularity" can be decomposed into 2 mutually-supportive feedback loops - the digital and the physical.
With frontier LLM agents, the digital loop is happening now to an extent (on inference code, harnesses, etc), and that extent probably grows larger (research automation) soon.
Pertinent to your point, however, is the physical feedback loop of robots making better robots/factories/compute/energy. This is an aspect of singularity scenarios like ai-2027.
In these scenarios, these robots will be the control mechanism that the digital uses to bootstrap itself faster, through experimentation and exploration. The usual constraints of physical law still apply, but it feels "unbounded" relative to normal human constraints and timescales.
A separate point: there's also deductive exploration (pure math) as distinct from empirical exploration (physics), which is not bounded by any physical constraints except for those that bound computation itself.
> With frontier LLM agents, the digital loop is happening now to an extent
I see no evidence of this, just a lot of people claiming it (very loudly, for the most part).
> that extent probably grows larger (research automation) soon
The word probably is doing a lot of work here.
> The usual constraints of physical law still apply
There are knowledge constraints, too. I can't build a quark matter processor without understanding quark matter to a vastly higher level than we do now. I can't do that without experiments on quark matter, I can't do experiments without access to a lot of energy, material, land, &c, that need to be assembled. There are a huge number of very difficult and time-consuming instrumental goals on the path to fundamentally better compute.
> A separate point: there's also deductive exploration (pure math) as distinct from empirical exploration (physics), which is not bounded by any physical constraints except for those that bound computation itself.
Sure, but physics requires math that is definitionally applied, not pure, and engineering requires physics.
Kind of. I mean, you have to verify things experimentally, but thought can go a very long way, no? And we're not talking about humans thinking about things; we're talking about an agent with internet access existing in a digital space, so what experiments it would do within that space are hard for us to imagine. Of course, my post isn't meant to be taken seriously; it's more of a fun sci-fi idea. Also, I'm not necessarily implying reaching the limits of the things you mentioned, but rather just taking a massive step in a very short time window. Like the window from the discovery of fire to the discoveries of quantum mechanics, but in a flash.
> what experiments it would do within that space are hard for us to imagine
The only thing you could do in a "digital space" (a.k.a. on a computer) is a simulation. Simulations are extremely useful and help significantly with designing and choosing experiments, but they cannot _replace_ real experiments.
> Like, the time window from the discovery of fire to the discoveries of Quantum Mechanics but in a flash.
And my point is that there's no good reason to think this is possible and many to think it isn't.
> it's more of a fun sci-fi idea
It's being presented as an extremely serious possibility by people who stand to gain a _lot_ of money if other people think it's serious... that's the point of the linked post. Unfortunately, these AI boosters make it very difficult to discuss these ideas, even in a fun sci-fi way, without aggravating the social harms those people are causing.
Eh, he actually says “…sometime in the early Twenty-First Century, all of mankind was united in celebration. Through the blinding inebriation of hubris, we marveled at our magnificence as we gave birth to A.I.”
Doesn't specify the 2020s.
Either way, I do feel we are fast approaching something of significance as a species.