Hacker News | nphardon's comments

honestly, you couldn't even build your own house.

There's a great discussion with Stephen Wolfram on the Sean Carroll podcast. Listening to it made me think very highly of Wolfram. He's a free-thinking, eccentric mathematician and scientist who got started doing serious work at a very young age. He still has a youthful, creative approach to thought and science. I hope LLMs do pair well with his tools.

To save others a search, here's the podcast with Wolfram.

Stephen Wolfram on Computation, Hypergraphs, and Fundamental Physics - https://podbay.fm/p/sean-carrolls-mindscape-science-society-... (2hr 40min)

I'm a fan of his work and person too. Not a fanatic or evangelical level, but I do think he's one of the more historically relevant computer scientists and philosophers working today. I can overlook his occasional arrogance, and recognize that there's a genuine and original thinker who's been pursuing truth and knowledge for decades.


Sean also publishes transcripts of all episodes; https://www.preposterousuniverse.com/podcast/2021/07/12/155-...

Same here. I've found the "me me me" a bit off-putting over the years, but I can't deny that he is a genuinely smart, interesting, and forward-thinking person. I especially enjoyed his writings on measuring every aspect of his life [1].

Also, Wolfram (both the person and the company) doesn't seem stodgy or stuck in old ways. At least to an outside observer (I'm not a mathematician, nor do I use Wolfram's main tools), they seem to handle new trends with their own unique contributions that augment those trends:

Wolfram Alpha was a genuinely useful and good tool, perfect for the times.

These tools will actually further supercharge LLMs in certain use cases. They've provided multiple ways to adopt them.

Looking forward to seeing what people will do with this stuff.

1: https://writings.stephenwolfram.com/2012/03/the-personal-ana...


He live-streams the (internal) Wolfram Alpha product meetings on YouTube. It's really interesting to watch; I've been a fly on the wall for years.

I knew about this but never attended, so cool!

I tried, but couldn't find them on YouTube. Can you please share a link to one of the videos?

It's under the Live tab of Wolfram's channel:

https://www.youtube.com/@WolframResearch/streams

Sessions are called Live CEOing, e.g.:

https://www.youtube.com/watch?v=id0KH0sfHI8


Thanks, I should have linked it. Also, he cross-posts to his site:

https://livestreams.stephenwolfram.com/category/live-ceoing/

The next one is today, 4:30 PM ET!


Thank you!!

He's been in AI-land forever; the whole idea of Wolfram Alpha circa 2009 was to transform natural language into algorithms. I met him briefly in New York when he was on a panel on AI ethics in 2016, and ya, dude is sharp.

I'm fairly certain Stephen Wolfram will be one of the few intellectuals of today who will still be remembered in 50 years.

I already remember him from 25 years ago

He seems to be a good software engineer at least, but what about his science? Does it all revolve around re-modelling the universe in his software?

He got famous solving quantum field theory problems

He seems to think his time is better spent on software than science. I take it he didn't really crack anything of worth on the physics side, then?

To be fair, he's been trying; he's a big fan of cellular automata.

Recently I went back to The Ecstasy of Communication by Jean Baudrillard, which I couldn't get through back in the day when I first picked it up. I used Haiku to walk me through the first chapter. Haiku would not state anything verbatim due to copyright, but if I referenced a sentence, it knew it exactly.

If you tell your doctor that a parent had polyps removed (say, recently), that will give you your best chance of getting a colonoscopy. Most likely, if you're in an even remotely progressive area, your doc wants you to have one, but their hands are tied by the insurance company. Afaik you don't have to provide any proof of your claim re parental polyps.

> but their hands are tied by the insurance company.

Doctors' ability to prescribe or refer is never restricted by an insurance company. If they think a patient should get a given treatment, they are free to say so.


The average American says US healthcare spending, which is 3x to 20x that of other OECD countries on a per capita basis, is way too high.

The average American also thinks they should be provided testing and procedures that their insurance deems medically unnecessary.

Try to reconcile these two beliefs. (Hint: It's impossible)


Maybe there's a bunch of inflated profit margins and people getting filthy rich off a poorly regulated market.

You are just ignoring their intended meaning. Boring.

Is the intended meaning that health insurance should pay for anything and everything? Even systems where the government pays directly like the UK have parameters under which the government will pay for a procedure or medicine.

Not at all. Patients are free to pay out of pocket for procedures not covered by insurance. An extra colonoscopy (one not classified as medically necessary), while expensive, is within the financial means of most middle-class adults.

In CA, my doctor can refer me for a Cologuard test. But it's private pay, and they want payment up front, since insurance companies don't restrict a doctor's ability, only reimbursement.

So they may not be willing (even though they are able) to perform a procedure or test if they aren't confident they'll get paid.


Unfortunately, one of the struggles in old high tech (that's the only sector I know; are you also experiencing this?) is that the C-level people don't look at AI and say "LLMs can make an individual 10x more productive, therefore (and this is the part they miss) we can make our tool 10x better." They think: "therefore we can lay off 9 people."


There aren't 10x revenue gains in most businesses if their workers become 10x more productive. Some markets grow very slowly and/or have capped growth.

Therefore, the best way to increase profit is to lower cost.


(In the semiconductor industry) We experienced brutal layoffs, arguably due to over-investment in AI products that produce no revenue. So we've had brutal job loss due to AI, just not in the way people expected.

Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate, since they're writing themselves now, so the limitations of human ability are no longer a bottleneck.


Maybe I'm being naive here, but for AI (heck, for any good algorithm) to work well, you need at least loosely defined objectives. I assume it's much more straightforward in semi, but in many industries, once you get into the details, all kinds of incentives start to misalign, and I doubt AI could understand all the nuances.

E.g., I was once tasked with building a new matching algorithm for a trading platform, and upon fully understanding the specs I realized it could be interpreted as a mixed-integer programming problem; the idea got shot down right away because the PM didn't understand it. There are all kinds of limiting factors once you get into the details.
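To illustrate what I mean by "interpreted as a MIP" (a toy sketch with made-up numbers and names, not the platform's actual spec): order matching can be posed as choosing 0/1 variables x[i][j] (match buy i to sell j) to maximize matched volume, and at toy scale you can even brute-force the integer program with the standard library.

```python
from itertools import product

def best_matching(buys, sells):
    """Brute-force the toy 0/1 integer program: pick x[i][j] in {0, 1}
    to maximize matched volume, where each order is used at most once
    and a pair is feasible only if the buy price covers the sell price."""
    pairs = [(i, j) for i in range(len(buys)) for j in range(len(sells))
             if buys[i][0] >= sells[j][0]]  # price-feasible (buy, sell) pairs
    best_vol, best_choice = 0, []
    for bits in product([0, 1], repeat=len(pairs)):
        chosen = [p for p, b in zip(pairs, bits) if b]
        # constraint: each buy and each sell matched at most once
        if len({i for i, _ in chosen}) < len(chosen):
            continue
        if len({j for _, j in chosen}) < len(chosen):
            continue
        # objective: total matched quantity
        vol = sum(min(buys[i][1], sells[j][1]) for i, j in chosen)
        if vol > best_vol:
            best_vol, best_choice = vol, chosen
    return best_vol, best_choice

# Hypothetical (price, quantity) orders.
buys = [(101, 10), (99, 5)]
sells = [(100, 8), (98, 5)]
print(best_matching(buys, sells))  # → (13, [(0, 0), (1, 1)])
```

A real platform would hand the same objective and constraints to a MIP solver instead of enumerating; the point is just that the spec maps cleanly onto that formulation.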


AI can probably tell you how to best explain that idea to the boss. Or even write it up as a memo for you, if you use a more complex model.


I think those conversations occur due to changes in the timeline of deliverables or the certainty of results; would that not be an implementation detail?


Well, like I said, there are hidden incentives behind the scenes; in my case, the hidden incentive is that the requester/client is one of the company's subpar brokers, and the PM probably decided to just offer an average level of commitment, not going above and beyond. Hence the plan was to do exactly what the broker wanted, even though that was messy and inferior. You can't write down that kind of motivation on paper anywhere.

--- I said it because I did the analysis and realized that if I implemented the original version, which is basically a crazy way to iteratively solve the MIP problem, it would be much harder to reason about internally and much harder to code correctly. But obviously it keeps the broker happy ("the developer is doing exactly what I said").


I think I'm finally realizing that my job probably won't exist in 3-5 years. Things are moving so fast now that the LLMs are basically writing themselves. I think the earlier iterations moved more slowly because they were limited by human ability and productivity.


That quip(?) on Attia is darrrrrrrk. It's saying you must exchange your soul.


Iirc, in The Matrix, Morpheus says something like "... no one knows when exactly the singularity occurred, we think some time in the 2020s". I always loved that little line. I think that when the singularity occurs all of the problems in physics will solve, like in a vacuum, and physics will advance centuries if not millennia in a few pico-seconds, and of course time will stop.

Also: > As t → t_s⁻, the denominator goes to zero. x(t) → ∞. Not a bug. The feature.

Classic LLM lingo in the end there.
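For context, the quoted blow-up reads like the standard hyperbolic-growth solution (my reconstruction; the linked post may use a different model):

```latex
\frac{dx}{dt} = k\,x^2
\quad\Longrightarrow\quad
x(t) = \frac{x_0}{1 - k x_0 t} = \frac{C}{t_s - t},
\qquad C = \frac{1}{k},\quad t_s = \frac{1}{k x_0},
```

so the denominator hits zero and $x(t) \to \infty$ as $t \to t_s^-$: finite-time blow-up, unlike exponential growth, which never diverges in finite time.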


> I think that when the singularity occurs all of the problems in physics will solve, like in a vacuum, and physics will advance centuries if not millennia in a few pico-seconds

It doesn't matter how smart you are, you still need to run experiments to do physics. Experiments take nontrivial amounts of time to both run and set up (you can't tunnel a new CERN in picoseconds, again no matter how smart you are). Similarly, the speed of light (= the speed limit of information) and thermodynamics place fundamental limits on computation; I don't think there's any reason at all to believe that intelligence is unbounded.


The "singularity" can be decomposed into two mutually supportive feedback loops: the digital and the physical.

With frontier LLM agents, the digital loop is already happening to an extent (on inference code, harnesses, etc.), and that extent will probably grow (research automation) soon.

Pertinent to your point, however, is the physical feedback loop of robots making better robots/factories/compute/energy. This is an aspect of singularity scenarios like ai-2027.

In these scenarios, these robots will be the control mechanism that the digital uses to bootstrap itself faster, through experimentation and exploration. The usual constraints of physical law still apply, but it feels "unbounded" relative to normal human constraints and timescales.

A separate point: there's also deductive exploration (pure math) as distinct from empirical exploration (physics), which is not bounded by any physical constraints except for those that bound computation itself.


> With frontier LLM agents, the digital loop is happening now to an extent

I see no evidence of this, just a lot of people claiming it (very loudly, for the most part).

> that extent probably grows larger (research automation) soon

The word probably is doing a lot of work here.

> The usual constraints of physical law still apply

There are knowledge constraints, too. I can't build a quark matter processor without understanding quark matter to a vastly higher level than we do now. I can't do that without experiments on quark matter, I can't do experiments without access to a lot of energy, material, land, &c, that need to be assembled. There are a huge number of very difficult and time-consuming instrumental goals on the path to fundamentally better compute.

> A separate point: there's also deductive exploration (pure math) as distinct from empirical exploration (physics), which is not bounded by any physical constraints except for those that bound computation itself.

Sure, but physics requires math that is definitionally applied, not pure, and engineering requires physics.


Kind of. I mean, you have to verify things experimentally, but thought can go a very long way, no? And we're not talking about humans thinking about things; we're talking about an agent with internet access existing in a digital space, so the experiments it would do within that space are hard for us to imagine. Of course, my post isn't meant to be taken seriously; it's more of a fun sci-fi idea. Also, I'm not necessarily implying we'd reach the limits of the things you mentioned, but rather that we'd take a massive step in a very short time window. Like the window from the discovery of fire to the discovery of quantum mechanics, but in a flash.


> what experiments it would do within that space are hard for us to imagine

The only thing you could do in a "digital space" (a.k.a. on a computer) is a simulation. Simulations are extremely useful and help significantly with designing and choosing experiments, but they cannot _replace_ real experiments.

> Like, the time window from the discovery of fire to the discoveries of Quantum Mechanics but in a flash.

And my point is that there's no good reason to think this is possible and many to think it isn't.

> it's more of a fun sci-fi idea

It's being presented as an extremely serious possibility by people who stand to gain a _lot_ of money if other people think it's serious... that's the point of the linked post. Unfortunately, these AI boosters make it very difficult to discuss these ideas, even in a fun sci-fi way, without aggravating the social harms those people are causing.


You say that, but someone at CERN has spent at least ten minutes thinking about how they could expose the Large Hadron Collider as an MCP server.


Eh, he actually says “…sometime in the early Twenty-First Century, all of mankind was united in celebration. Through the blinding inebriation of hubris, we marveled at our magnificence as we gave birth to A.I.”

Doesn't specify the 2020s.

Either way, I do feel we are fast approaching something of significance as a species.


Got it. Amazing prescience by the Wachowskis. I'm blown away on rewatches by how spot-on they were for 1999.


> Not a bug. The feature.

I actually think this was nice writing, and it didn't strike me as LLM thought at all, specifically because of the terse delivery.


My Claude uses that exact phrase rather frequently. I actually like it too! Not saying it's bad.


I don't think people realize how crazy this all is (and might become)


What are people doing with OpenClaw? Seems like some bleeding edge stuff will come out of this sort of experimentation.

