Hacker News | tananan's comments

How are we machines?

What is the non machine part? What do you believe exists other than chemical and electrical systems?

Edit: If you mean machine in a more colloquial sense that's fine. Let us first get clear if we mean machine in that sense or in sense of any physical mechanism.


If the question is what is there about us that's not covered by the body, we can mention things like: feelings, intentions, perceptions, acts of consciousness.

Or however else you want to divide up things that have to do with the mind.

Eliminativists/illusionists may completely deny such things. The rest can fall into many camps, some of them religious.

It's not like there are any surprising new parts. It's about how one chooses to interpret/conceive of the parts we are familiar with.


And what part remains in that space after we have mapped all the brain signals and configurations corresponding to these feelings, intentions and perceptions? I don't feel the need to bring up absurd, unproven concepts without waiting for more data. It'd be like me saying there is something aphysical behind Mercury's orbital perturbations if I were born before SR and GR were discovered (as an example). There's no point in jumping to such an argument without first exhausting more believable causes. History is very strongly against any kind of bet on the aphysical.

My question to you would be, what do you think remains that's not a simple natural system if/after something like Neuralink is successfully established?


Forgive me if I ramble for too long. I've been seeing a lot of comments in this vein and the thoughts have accumulated.

Tacit in your question is the notion that the inquiries that are important are those that can result in predictive models of phenomena encountered in the world — hence feelings, intentions & perceptions turn into a shorthand for reported accounts of the same — and that given enough reports (data), we could build a dictionary that maps a bundle of reports to a(n equivalence class) of physical system(s).

But when we speak of having feelings, or acting on intentions, most often we are not using these as stand-ins for our failure to pin down the current state of our physical system to another. If I am exposed to fire, I want to get away — I am unconcerned with how well I could translate my report of the pain to a pattern of neural activations. The reality of pain for me is unaffected by the fidelity of my "experience report dictionary". And it is there whether it's a brush fire or a neuralink streaming fire bits to my cortex.

If you decide that primacy ought to always be given to things as they can be modeled, you can choose to elevate the "experience report dictionary" and make the reality of experience a second-class citizen. Then you end up with an eliminativist ontology where indeed, we can rightly be called a mechanism.

But that is a "world-making" decision, a value judgement: "this is how things should be seen". It might be sponsored by our recent history, where we got high on the fruits of applied scientific modelling, nursed by the education which taught us that being a good engineer can have us continue in line with that, and pushed on us by impoverished modern eschatologies promising eternal youth, experience machines and what-not at this point. And it might seem preferable or more dependable than whatever equally impoverished, inhumane eschatologies we may have been presented with before.

It doesn't mean there isn't a whole world of places where we can go instead. But in general, we don't change our value judgements until the current one seems inadequate for some reason.

> If we created a molecule by molecule synthesis of a human being, you'd agree it is conscious and the same thing as a human created via typical reproduction, right?

Sure.


Yes, so that was my point: if we can agree that a molecule-by-molecule synthesis of a human being, being a purely naturalistic physical process, is as good as any other human, then if we assume some aphysical element to consciousness, we have a purely physical process for achieving a system with aphysicality in it. Which means either it's not in fact aphysical, or else we are left with the question of at what point during this assembly process this new special aspect arises.

It's my feeling that we are still getting too far ahead of ourselves in positing some supernatural element, and that it's much like the atomism question in ancient Greece. An honest thinker back then could have no really firm reason to support one side over the other, and they tended toward these kinds of endless circular metaphysical discussions. That is, until we had further data and observation tools which settled the question experimentally. Just like certain aspects of consciousness, atomism felt like an unsolvable question in some ways back then. I feel the problems we have with consciousness will eventually meet a similar fate. This bet has succeeded for millennia up till now.


Eternal youth and experience machines don't seem like problems with any conceptual difficulties. We already know electrical and chemical signals change what the brain perceives, and eternal youth is no more difficult a concept than making any other form of long-lasting machine. Obviously there is a long sequence of research problems to solve along the way, but none of it is conceptually impossible or blocked.

Another different question to help me understand what you think of this. I think you agree with me, but just to clarify. A human being is independent of the process of creation right? If we created a molecule by molecule synthesis of a human being, you'd agree it is conscious and the same thing as a human created via typical reproduction, right?

When we speak of the “despair vectors”, we speak of patterns in the algorithm we can tweak that correspond to output that we recognize as despairing language.

You could implement the forward pass of an LLM with pen & paper given enough people and enough time, and collate the results into the same generated text that a GPU cluster would produce. You could then ask the humans to modulate the despair vector during their calculations, and collate the results into more or less despairing variants of the text.
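For concreteness, the kind of vector modulation described above can be sketched as activation steering: adding a fixed direction to a hidden state mid-forward-pass. Everything here (the single toy layer, the name `despair_vec`, the dimensions, the scaling coefficient) is an illustrative assumption, not any real model's internals:

```python
import numpy as np

# Toy sketch of steering a forward pass along a chosen direction.
# All names and sizes are made up for illustration.
rng = np.random.default_rng(0)
d_model = 8                                 # toy hidden size

W = rng.normal(size=(d_model, d_model))     # one toy "layer"
despair_vec = rng.normal(size=d_model)
despair_vec /= np.linalg.norm(despair_vec)  # unit steering direction

def forward(x, steer=0.0):
    """One toy layer; the hidden state is nudged along despair_vec."""
    hidden = np.tanh(W @ x)
    return hidden + steer * despair_vec     # the "tweak"

x = rng.normal(size=d_model)
base = forward(x, steer=0.0)
steered = forward(x, steer=3.0)

# The steered pass moves the hidden state exactly `steer` units
# along the chosen direction:
print(np.dot(steered - base, despair_vec))  # → 3.0 (up to float error)
```

The point of the thought experiment survives the sketch: nothing in these array operations depends on whether a GPU, or a crowd with pen and paper, carries them out.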

I trust none of us would presume that the decentralized labor of pen & paper calculations somehow instantiated a “psychology” in the sense of a mind experiencing various levels of despair — such as might be needed to consider something a sentient being who might experience pleasure and pain.

However, to your point, I do think that there is an ethics to working with agents, in the same sense that there is an ethics of how you should hold yourself in general. You don’t want to — in a burst of anger — throw your hammer because you cannot figure out how to put together a piece of furniture. It reinforces unpleasant, negative patterns in yourself, doesn’t lead to your goal (a nice piece of furniture), doesn’t look good to others (or you, once you’ve cooled off), and might actually cause physical damage in the process.

With agents, it’s much easier to break into demeaning, cruel speech, perhaps exactly because you might feel justified they’re not landing on anyone’s ears. But you still reinforce patterns that you wouldn’t want to see in yourself and others, and quite possibly might leak into your words aimed at ears who might actually suffer for it. In that sense, it’s not that different from fantasizing about being cruel to imaginary interlocutors.


> I trust none of us would presume that the decentralized labor of pen & paper calculations somehow instantiated a “psychology” in the sense of a mind experiencing various levels of despair

Your argument is based on an appeal to intuition. But the scenario that you ask people to imagine is profoundly misleading in scale. Let's assume a modern frontier model, around 1 trillion parameters. Let's assume that the math is being done by an immortal monk, who can perform one weight's calculations per second.

The monk will generate the first "token", about 4 characters, in 31,688 years. In a bit over 900,000 years, the immortal monk will have generated a single Tweet.
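The arithmetic behind these figures is easy to check back-of-the-envelope. Note the final "Tweet" number depends on the token count assumed; the ~900,000-year figure in the comment corresponds to a somewhat shorter text than 140 characters at ~4 characters per token:

```python
# Back-of-the-envelope check of the "immortal monk" numbers, assuming
# a 1-trillion-parameter model and one weight-calculation per second.
params = 1e12                          # weights touched per token
secs_per_year = 365.25 * 24 * 3600    # ~31.6 million seconds

years_per_token = params / secs_per_year
print(round(years_per_token))          # → 31688 (matches the ~31,688 above)

tweet_chars = 140                      # classic tweet length (assumption)
chars_per_token = 4                    # rough average (assumption)
tweet_tokens = tweet_chars / chars_per_token
print(round(tweet_tokens * years_per_token))  # ≈ 1.1 million years
```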

At that point, I no longer have any intuition. The sort of math I could do by hand in a human lifetime could never "experience" anything.

But I can't rule out the possibility that 900,000 years of math might possibly become a glacial mind, expressing a brief thought across a time far greater than the human species has existed.

As the saying goes, sometimes quantity has a quality all its own.

(This is essentially the "systems response" to Searle's "Chinese room" argument. It's an old discussion.)


I don't personally believe LLMs are sentient, but I've always enjoyed this thought experiment: https://xkcd.com/505. I have a signed copy framed on my wall.

In discussions like this, we're always going to bottom out at certain assumptions we bring with us, so I agree.

One reason I like bringing up examples like this (the xkcd in sister reply is also good) is that it makes really visible what our assumptions are. The scales are big both in space and time in order to emphasize what weight is given to functional equivalence.

I feel pretty confident most people wouldn't presume that doing a bunch of math by hand on paper can create glacial epiphenomenal experiences (though I like the term).

Another thing that's interesting to me is that the converse assumption, i.e. one with a strong allegiance to functionalism, ends up feeling far more idealistic than you might expect. A box of gas, left on its own for long enough, will engage in a pattern of collisions that in a certain interpretative framework correspond to an LLM forward pass. In another, it can be a game of minesweeper.

The individual particles of course, couldn't care less whether you see them as part of one or the other. Yet your ability to see them in light of the first one is perhaps enough for the lights to truly turn on, if transiently, in some mind somewhere.


> A box of gas, left on its own for long enough, will engage in a pattern of collisions that in a certain interpretative framework correspond to an LLM forward pass.

That's a fun thought experiment. Greg Egan based a delightful science fiction novel on this premise. Permutation City, I believe.

To be clear, I don't necessarily think that current LLMs have subjective experiences. If I had to guess, I'd say "probably not." But:

- If I came from another universe, and if you asked me whether chemistry could have subjective experiences, I'd answer "probably not." And I would be wrong.

- Even if no current frontier models are "aware", it's possible that future models might be. Opus 4.6, for example, behaves far more like a coherent mind than last year's 3 billion parameter toy models. So future 100 trillion parameter models with different internal architectures might be even more like minds. (To be clear, I do not think we should build such models.)

- Awareness and intelligence might be different. Peter Watts' Blindsight is a fun exploration of this idea. Which leads me to conclude that it wouldn't necessarily matter whether an AI like SkyNet has subjective awareness or not. What matters is what kind of long-term plans it could pull off and how much it could reshape the world.


> Which leads me to conclude that it wouldn't necessarily matter whether an AI like SkyNet has subjective awareness or not. What matters is what kind of long-term plans it could pull off and how much it could reshape the world.

Absolutely. Thanks for the references :)


> I trust none of us would presume that the decentralized labor of pen & paper calculations somehow instantiated a “psychology”

Wrong. What you've just done is reformulate the Chinese room experiment, arriving at the same wrong conclusions as the original proposer. Yes, the entire damn hand-calculated system has a psychology; otherwise you need to assume the brain has some unknown metaphysical property or process going on that cannot be simulated or approximated by calculating machines.


People reach for the Chinese room for some reason when the Cartesian theater is the better fit here. What you're doing is placing yourself in the seat of the Homunculus waiting for the show to start. But anatomical investigation reveals that there's no theater at all, and in fact no central system where everything comes together. Instead, the whole design of the brain goes to great pains to tease input signals apart.

Basically, manipulating the symbols won't necessarily have any long term influence on your own state. But the variables you've touched on the paper have changed. Demonstrably; because you've written something down.

If you then act on the result of those calculations, as of course many engineers before you have done, and many after you will do; then you have just executed a functional state change in physical reality, no matter what the ivory tower folks say.

(And that's what the paper is about: Functional states)


Well, then we both assume very different views on the matter, and that’s fine.

And you are just a bunch of atoms. You can't assemble atoms to obtain a psychology, right?

I don’t hold to that view. If I did, I might have that problem.

Fortunately, as you mention in your last sentence, stress is introspectable.

How exactly stress corresponds to biomarkers doesn’t matter if your desire is to lower it.

The issue is that many of us don’t pay attention to how we keep our body & mind throughout the day, or do so on a very superficial level. So strain on the body can accumulate for a long time.

“Stress management” is a lifetime skill. It doesn’t come in bulletpoints, it’s as broad as “living happily”.

Edit: That said, this can make the advice “be less stressed” a bit vacuous.

But people do get scared when random health issues flare up and become more conscious of how they deal with stress in life.

So it’s not bad to keep reminding people either :)


It’s bad in the way that “don’t think about elephants” makes you think about elephants.

“Try not to stress” or “reduce stress” – but how to do that? Stress itself is nebulous, and the countermeasures are inconclusive.

Think of the last time you were angry or frustrated. Did your spouse telling you to “calm down” fix the problem?


It would be really cool if it could highlight the parts of the speech that gave away your accent. It guesses mine correctly most of the time (though not the first time I tried), but also lets me know my accent is pretty light.


What strikes me as interesting about the idea that there is a class of computations that, however implemented, would result in consciousness, is that it is in some way really idealistic.

There's no unique way to implement a computation, and there's no single way to interpret what computation is even happening in a given system. The notion of what some physical system is computing always requires an interpretation on the part of the observer of said system.

You could implement a simulation of the human body on common x86-64 hardware, water pistons, or a fleet of spaceships exchanging sticky notes between colonies in different parts of the galaxy.

None of these scenarios physically resemble each other, yet a human can draw a functional equivalence by interpreting them in a particular way. If consciousness is a result of functional equivalence to some known conscious standard (i.e. alive human being), then there is nothing materially grounding it, other than the possibility of being interpreted in a particular way. Random events in nature, without any human intercession, could be construed as a veritable moment of understanding French or feeling heartbreak, on the basis of being able to draw an equivalence to a computation surmised from a conscious standard.

When I think along these lines, it is easy to sympathize with the criticism of functionalism a la the Chinese Room.


I like this post.

A related piece of advice I find practical is: “If you want to get good at something, you have to make yourself glad that you’re doing it.”

This involves reminding yourself why it is that you want to get better at it, perceiving the process of learning as an interesting challenge, and in general generating interest.

There is a lot of creativity in how you actually do this. It is a skill in itself, and a very useful one, especially for skills where you find yourself lacking patience and motivation.


> That ‘glorious hope’ was quickly dashed, however. In Anaxagoras’s account, it seemed to Socrates, Mind had no agency other than initially setting things in motion, and no morality. ... For this reason, Socrates tells us plainly, he completely lost interest in the heavens, in science, and in physical reality (ta onta, ‘the things that are’).

> And so (as I’ve argued in more detail elsewhere) the first global franchise [Christian faith] was set up on an anti-science basis.

Supposedly, Socrates' disenchantment came not because he thought the account was nonsense, but because it didn't address existential/moral issues that he found pertinent.

I'm not sure this drive is best characterized as anti-science. There's a difference between denying scientific research as understood today and denying an inherently materialistic worldview as one's overarching context of life. The latter is often married to science, but it doesn't have to be.

No shortage of science was and is done by deeply religious individuals. And indeed religions co-opted science in various ways. And we had materialist* views pretty far back (clearly in both Greece and India).

What's changed recently, IMO, is that in those ancient times a materialistic worldview was a sort of "Yeah, and?" deal, since it offered little in terms of giving a direction to the life of an individual. Nowadays, there is at least a technological eschatology, with people expecting or looking forward to luxuries, longevity, and other such things as have usually been the promises of religions. Funnily enough, insofar as this eschatology contains a place for human agency, it's mostly been taken up by organizations and corporations few would see as anything but morally corrupt. It's a weird eschatology where the idea is that if you pump enough juice into the greed machine, at some point a phase transition occurs and all of it can be converted into stable welfare for all.


You don't write something like this without being enchanted thoroughly by the language.


There actually is a “The Nag’s Head” in the Balkans - it’s in Montenegro


You are reading into something that isn't there. The study doesn't have to do with music making you more capable of socializing.

The hypothesis being tested is that in the absence of social interaction, people will turn to surrogates in order to make up for the perceived lack. Specifically, they test if music can be such a surrogate. They do some surveys and a kind of silly experiment to provide evidence that yes- it can.

The reason it is rightly called pointless is that it brings nothing actionable to the table.

You cannot extract advice from showing evidence for a common-sense observation: If you feel a certain lack, activities you find pleasurable can diminish that lack.

And look at the experimental setup: They make people play an online game with others where certain people are excluded from playing. It turns out that people who are hyped from listening to their favorite song found this less jarring, hence showing that music can be a "social buffer", i.e. make up for a perceived social exclusion.

Let everyone individually conclude how insightful this experiment is.

EDIT: Misunderstood the nature of the "Cyberball" experiment, fixed

