So? Lack of evidence is not positive proof of the contrary position. And it's doubtful there is no evidence at all: the fact that only humans seem to be able to think the way humans think could itself be considered evidence.
It's pretty strong evidence! We understand the basics of how humans (or living things in general) are constructed, and in that framework, built out of physics and chemistry, there's no space for special magic stuff; anything biology can make is made of atoms and could in principle be replicated. Even if there's some exotic whatsit we have somehow not been able to detect thus far, something that lives outside of our existing scientific theories, that would simply require updating those theories, and then figuring out how to follow the same steps biological systems do. Thus, the idea that there is some other "non-physical" thing intrinsically inaccessible to us is an extraordinary claim.
You similarly have no direct evidence that there isn't a bottle of A&W root beer on Europa, but our understanding of the history of humanity (and root beer and space travel) makes it very unlikely. It is reasonable to conclude that there is no such bottle, and wildly unreasonable to posit that there is.
Edit: added the word "direct" + minor clarifications
> Even if there's some exotic whatsit we have somehow not been able to detect thus far, something that lives outside of our existing scientific theories, that would simply require updating those theories, and then figuring out how to follow the same steps biological systems do.
Assuming the whatsit could fit into the materialistic/mechanistic framework. But that's not necessarily the case.
> You similarly have no direct evidence that there isn't a bottle of A&W root beer on Europa, but our understanding of the history of humanity (and root beer and space travel) makes it very unlikely.
I didn't make a claim, I asked you how you can be so sure of your claim.
Is there a portion of the article that repudiates materialism, or do you mean something more general like "To not believe everything Sam Altman says on Twitter about our glorious A.I. future is to not be a materialist."
I think, broadly, it's that a theory of mind should be informed by empirical evidence, by scientific research, and that liberal doses of those will dissolve away many of the classic problems in philosophy of mind.
I totally get the sentiment, but I really want to read 3 NYT articles a month.
For the company to make what I pay now for a subscription, that would come out to around $6.00 an article. I'm not sure I'd press that $6.00-to-read button very often.
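Sketching that arithmetic (the actual subscription price isn't stated here, so the ~$18/month figure is my assumption, back-derived from the $6 number):

    $18 / month ÷ 3 articles / month ≈ $6.00 / article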
Your scheme would increase the total number of readers, but I'm unsure that the company would actually do well under it.
The Chinese room argument itself isn't very compelling. Surely the constituent parts of the brain are fundamentally governed solely by physics, surely thought arises solely from the physical brain, and surely the constituent parts (and thus thought) could be described by a sufficiently complex discrete computation.
The argument you make here is a reasonable one (IMHO) for the plausibility in principle of what Searle calls “strong AI”, but he claims that his “Chinese Room” argument proves that it must be mistaken. One can simply ignore him, but to refute him takes a little more effort.
It turns out that when one looks at the argument in detail, and in particular at Searle’s responses to various objections (such as the Systems and Virtual Mind replies), it is clear that he is essentially begging the question, and his ultimate argument, “a model is not the thing modeled”, is a non-sequitur.
The argument is essentially that there are no qualia of Chinese comprehension in an automaton or in any system that uses an equivalent algorithm, whether or not run by a human.
It's a sound argument to the extent that qualia clearly exist, but no one has any idea what they are, and even less of an idea how to (dis)prove that they exist in external entities.
It's the materialists who are begging the question, because their approach to qualia is "Well obviously qualia are something that just happens and so what?"
Unfortunately arguments based on "Well obviously..." have a habit of being embarrassingly unscientific.
And besides - written language skills are a poor indicator of human sentience. Human sentience relies at least as much on empathy; emotional reading of body language, expression, and linguistic subtexts; shared introspection; awareness of social relationships and behavioural codes; contextual cues from the physical and social environment which define and illuminate relationships; and all kinds of other skills which humans perform effortlessly and machines... don't.
Turing Tests and game AI are fundamentally a nerd's view of human intelligence and interaction. They're so impoverished they're not remotely plausible.
So as long as DALL-E has no obvious qualia, it cannot be described as sentient. It has no introspection and no emotional responses, no subjective internal state (as opposed to mechanical objective state), and no way to communicate that state even if it existed.
And it also has no clue about 3D geometry. It doesn't know what a sphere is, only what sphere-like shading looks like. Generally it knows the texture of everything and the geometry of nothing.
Essentially it's a style transfer engine connected to an image search system which performs keyword searches and smushes them together - a nice enough thing, but still light years from AGI, never mind sentience.
Searle’s argument is not about qualia; it is (as Searle himself has repeatedly stressed) about syntax, semantics and understanding. The argument simply does not consider what the room’s occupant feels.
Even if it were about qualia, calling the argument sound “to the extent that” we don’t know enough to tell whether its premises are correct would be a misuse of ‘sound’ and a rather blatant case of burden-shifting - effectively saying “so prove me wrong!” to skeptics.
Materialists can and do make question-begging claims, but that does not somehow cancel out Searle’s own question-begging (furthermore, somewhat ironically, Searle describes himself as a materialist!).
The soundness of the argument cannot be established by showing that current technology is far from being strong AI, as the argument claims much more than just that - it claims it to be impossible in principle. Anyone making such a claim has assumed a heavy burden that demands stronger arguments than you are making here.
> his ultimate argument, “a model is not the thing modeled”, is a non-sequitur
Have you ever used a map?
If not, I'd like you to get one and point out your home on it. Can you take a magnifying glass and look at the map hard enough to see yourself looking over a tinier map, ad infinitum?
That is what Searle means. A map is a model of the world, but it is not in any way equivalent to or interchangeable with the thing it models. It is merely a distilled representation, a facsimile representative enough to be useful. So too would be any attempt at modeling consciousness.
This is an argument from the general to the specific which does not apply in this particular case, nor in many others like it. As a mind is plausibly an information process occurring within the body, this generalization does not rule out an informational model of the physical processes of that body producing a mind.
If the argument you present here could be so easily generalized, it would work just as well for “proving” that a computer model of an Enigma cypher machine cannot encipher text.
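To make that concrete, here's a minimal sketch in Python of the point: a software model of a cipher machine genuinely enciphers text. This is a deliberately simplified single rotor, not a faithful Enigma (no reflector, plugboard, or turnover notches), though the wiring string is the historical rotor I permutation.

    import string

    ALPHA = string.ascii_uppercase

    # Historical wiring of Enigma rotor I: a fixed permutation of the alphabet.
    ROTOR_I = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"

    def encipher(text, wiring=ROTOR_I, start=0):
        """Encipher text through one stepping rotor (simplified, no reflector)."""
        out = []
        pos = start
        for ch in text.upper():
            if ch not in ALPHA:
                out.append(ch)  # pass spaces and punctuation through unchanged
                continue
            # Offset by the rotor position, map through the wiring, offset back:
            # the core of how a stepping rotor scrambles each successive letter.
            shifted = ALPHA[(ALPHA.index(ch) + pos) % 26]
            mapped = wiring[ALPHA.index(shifted)]
            out.append(ALPHA[(ALPHA.index(mapped) - pos) % 26])
            pos += 1  # the rotor advances after every letter
        return "".join(out)

    print(encipher("ATTACK AT DAWN"))  # the model really does produce ciphertext

Run the inverse permutation from the same starting position and you recover the plaintext: the model does exactly what the machine does, which is why "a model is not the thing modeled" cannot carry the weight Searle puts on it.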
I think a considerable subset of the people who do make use of the Chinese room argument also subscribe to some form of mind-body dualism, where consciousness does not or does not completely arise from physical processes.
The Chinese Room and the brain of a Chinese-speaking person are completely different physical processes. Looked at on an atomic level, they have almost nothing in common. Mind-body dualists may or may not agree that the room is not "conscious" in the way a human is, but if consciousness is purely a material process, I can't see how the materialist can possibly conclude all the relevant properties of the completely dissimilar room and person are the same.
Those that would argue the Chinese Room is "conscious" in the same way as the Chinese person are essentially arguing that the dissimilarity of the physical processes is irrelevant: the "consciousness" of the Chinese person doesn't arise from molecules bouncing around their brain in very specific ways, but exists at some higher level of abstraction shared with the constituent molecules of pieces of paper with instructions written in English and outputs written in Chinese.
The idea our consciousness exists in some abstract sense which transcends the physics of the brain is not a new one of course. Historically we called such abstractions souls...
The obvious counterpoint is that if I followed your argument to absurdity, then I would also have to conclude that if I am conscious, then you can't be, because the atoms of our brains aren't arranged in precisely the same way. It clearly makes more sense from a monist point of view to consider consciousness an emergent property of complex systems, rather than one particular process.
Of course! But if the universe is the result of all the quantum field interactions, what if there's a quantum field that, on its own, gives rise to consciousness, and manifests in ways that are computationally prohibitive for a process built from atomic-scale logic gates to replicate believably?
What if there's just no way to build consciousness from the building blocks that are within our reach?
Penrose thinks this (that the brain requires quantum computing), but it doesn't seem like anyone agrees with him or that it makes much sense. If I'm a quantum computer, why can't I do Shor's algorithm in my head?
Of course parts of regular computers involve "quantum stuff", like details of how transistors and hard drives work, but that doesn't mean they're magic.
This is an unpopular opinion, and should be wielded very carefully, but I think something often lost in these conversations is that the existence of a human behavior in an animal is not sufficient evidence that it's backed by the full weight of human-like mental states. It may well be, but additional evidence must be presented.
As humans, we're strongly prone to anthropomorphize–I'm capable of ascribing human feelings even to inanimate objects–and so are prone to doing the above without rigor.
An extreme example: if you drop acid into the water in which a paramecium lives, it will fire up its cilia and frantically try to retreat. It's a single cell; there is no suffering and there are no mental states, but it sure looks like it.
An ant could have a sad looking death, but it surely cannot reach the depths of human sorrow, and the related suffering, that a similar event could elicit. It can't mourn the time it won't spend with its children, or the ways its life could have gone.
I'm not proposing that everything between us and the paramecium cannot suffer, but that arguments in these areas must go beyond "X has behavior Y, so X must have the full mental state associated with Y."
How come for you the starting point is that animals do not have feelings and emotions like us, and that we have to have evidence for it?
Why isn’t the starting point that they do have feelings like us and we have to find evidence against it?
Really, other animals are so similar to us on every dimension except language that I wonder why people reason this way. Mammals in particular. I’ve seen Denver the guilty dog. She’s behaving like she feels guilty. It’s harder to buy an argument that we are just projecting our human notion of guilt onto her, rather than she simply feels guilt.
To put it another way, your position implies that all of these things we experience (laughter, grief, guilt, shame, deception) might have begun with humans. For me, that’s a position you need a lot of evidence for.
Even more concerning, who's to say a simpler mind / smaller brain would experience less sorrow? Maybe our large brains actually put a cap on the sorrow we are capable of, due to interference, and a simpler brain is capable of experiencing pure sorrow so much more deeply?
> I’ve seen Denver the guilty dog. She’s behaving like she feels guilty [therefore she likely experiences guilt].
I'm not sure I believe this, but there are other believable explanations besides yours: consider that humans and the dog could share a non-mental dispositional state (something more basic and hardwired into us) that leads to guilty actions. You would acknowledge that some very simple animals function in this way, and we as humans retain other core systems from simpler times.
Human consciousness could be on top of this and not a guaranteed consequence of it. We additionally rationalize and experience this state and the actions we tend to take from the guilty dispositional state–and as humans call that guilt–but the dispositional state could exist on its own.
>How come for you the starting point is that animals do not have feelings and emotions like us, and that we have to have evidence for it?
That's not exactly what he's saying. He's saying that the overall qualia of an animal is not that of a human. In other words, despite (arguably) having certain experiences that are similar (or even identical) to humans, the totality is different in an important way.
More concretely, the argument is as follows: just because a dog feels guilt doesn't mean (a) it's felt in an equivalent way to humans, nor (b) that the overall experience of a dog is equivalent to that of a human.
It’s not clear how (b) is relevant to the conversation.
Regarding (a) I would have the exact same reply. Unless the word “equivalent” is playing a critical role for you, because it will be impossible to prove or disprove equivalence. My experience of guilt may not even be equivalent to yours strictly speaking, and we belong to the same species, I assume ;-).
(b) is relevant if you think that the qualia of consciousness has a bearing on the ethics of eating meat.
>Unless the word “equivalent” is playing a critical role for you, because it will be impossible to prove or disprove equivalence.
To be clear: I haven't actually stated my position in this debate.
More to your point: this entire debate is an ethical one, so it's in the philosophical realm, so one should disabuse oneself of the notion that anything is going to be "disproven". The best we can aim for is a consistent ethical system, and conversely, the pointing out of inconsistencies.
This having been said, there's a contradiction in terms if you simultaneously hold the belief -- as you seem to do -- that eating animals is wrong because they are in some sense "like us" while at the same time rejecting any notion of (non)equivalence. You appeal to exactly that equivalence when you say, "Really, other animals are so similar to us on every dimension except language". So you are being inconsistent.
Therefore, if this is indeed your position, you're going to have to grapple with the issue of threshold. How much similarity is too much? Holistically, is the experience of being $ANIMAL equivalent to (or within some bounds of) the experience of being human? If there are differences, which ones matter and which ones don't ... and in what amount? Those are the questions the GP was asking, and they are directly relevant to the argument.
> This is an unpopular opinion, and should be wielded very carefully, but I think something often lost in these conversations is that the existence of a human behavior in an animal is not sufficient evidence that it's backed by the full weight of the human-like mental states.
Here's an even less popular opinion: most human-like mental states are a fiction, so the distinction you're trying to draw probably doesn't really exist. The mental states you attribute to suffering are merely a proxy for the behaviour you see from both humans and paramecia.
I completely agree, and I think it's gotten out of control.
Lately it has become fashionable to attribute human intelligence to slime molds because they do complex things like solve mazes. Then that behavior gets put side by side with, say, mice trying to solve mazes.
Perhaps the most egregious, in my opinion, is the way people unreflectively attribute terms to plants. I hate being a nerd about definitions, but sometimes playing and joking around with definitions serves to embed fundamental misunderstandings about the natural world.
Here on HN and elsewhere, I've seen people insist that plants can "feel", that they "communicate" and are "conscious", very intentionally insisting that it's the same in the deepest sense as what humans do.