I think your premises are fair, but assumption #3 ("Any mathematical rule can be computed by a sufficiently advanced computer") is effectively ruled out by Gödel's incompleteness theorem[1] and/or the Church-Turing thesis[2].
The problem then becomes finding an approach to general AI that avoids hitting incompleteness/undecidability[3] issues. My feeling is that this would be difficult. One way to try to avoid these issues is to avoid notions of self-reference, since self-reference spawns a lot of undecidable stuff (eg, "this statement is false" is neither true nor false). It seems to me, though, that the notions of the self and self-awareness are central to human consciousness, and so unavoidable when developing a complete simulation of human consciousness. The self is probably not computable.
Obviously there could be approaches that avoid these pitfalls, but every year that goes by without much progress towards general AI makes me feel more confident in this intuition. I do think there will be lots of useful progress in specialized AIs, but I see this as analogous to developing algorithms to decide the halting problem for special classes of algorithms. General AI is a whole different beast.
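To make the "halting problem for special classes" analogy concrete, here is a minimal sketch in Python (my choice of language, purely illustrative). The halting problem is undecidable in general, but for a deliberately tiny class of programs — straight-line code with no loops, no function definitions, and no calls — a total decider is trivial, because every such program terminates:

```python
import ast

def halts_restricted(source: str) -> bool:
    """Decide halting for a restricted class of Python programs:
    straight-line code with no loops, no function definitions,
    and no calls. Every program in this class trivially halts,
    so the decider is total on the class."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.While, ast.For, ast.FunctionDef, ast.Call)):
            # Outside the decidable class we make no claim at all.
            raise ValueError("program is outside the restricted class")
    return True  # straight-line code always terminates
```

Specialized AIs, on this analogy, are deciders for ever-larger restricted classes; none of that accumulates into a decider for the general case.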
But if general AI is physically impossible, how does the human brain "compute" general intelligence at all? It could be that your assumption #1 ("Physicalism is true. Nothing exists that is not part of the physical world.") is not correct. Maybe reality has "layers" and our world is some kind of simulation in another layer. Or maybe there is only one consciousness like many spiritual people and Boltzmann[4] suggest. Or maybe the human experience could be a process of trying to solve an undecidable problem and failing...
>> But if general AI is physically impossible, how does the human brain "compute" general intelligence at all?
Who says that the brain "computes" general intelligence? We don't know enough about the brain to say what it is, but it's certainly nothing like a computer. Intelligence is something that can be "computed" only by analogy, and we only have that analogy because we happen to have computers. But isn't the accuracy of that analogy precisely what we'd like to establish in the first place?
This is just another big assumption that is taken for granted: that the brain is a computational device. It seems an easy assumption to make, given all we know about computation. And yet, like you say, several generations of AI researchers have failed to reproduce intelligence with computers. Perhaps the reason for this is that the brain is not a computer, intelligence is not a program, and that's why we do not BSOD when confronted with paradoxical statements like "this statement is false".
In another reply, I modified point three to be the assumption that all of the physical laws of the universe are defined by computable maths. I believe this is the case to the best of our knowledge, but please let me know if I'm wrong.
Unfortunately, restricting to only computable maths means disallowing the natural numbers, basic arithmetic, or any equivalent structure, since Gödel incompleteness would apply. I doubt any system without access to the full set of natural numbers or basic arithmetic could qualify as "general AI".
Pardon my ignorance. Computers appear to be able to perform basic arithmetic. For example, you can open up the console in your browser and find that the sum of two and two is indeed four. So it is not entirely obvious to me how basic arithmetic is non-computable.
If you permit infinitely many integers it becomes problematic. If you are dealing with a finite entity (e.g. the finite part of the universe that can affect us), then there are no problems.
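A sketch of the finite/infinite distinction, in Python (the example statement is my own choice): any claim about integers up to a fixed bound can be checked mechanically, while the same claim quantified over all naturals is a different kind of object entirely. Here the Goldbach property (every even number ≥ 4 is a sum of two primes) is checked over a finite range:

```python
def is_prime(n: int) -> bool:
    """Trial division, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds_up_to(limit: int) -> bool:
    """Check the Goldbach property for every even n in 4..limit.
    Any finite range like this is mechanically checkable; no finite
    check settles the statement over ALL natural numbers."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return False
    return True
```

The bounded check terminates with an answer; the unbounded statement is exactly the kind of thing incompleteness is about.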
How do you know all those things about the physical world? For example, you say that "all of the physical laws of the universe are defined by computable maths". Do you really know what all the physical laws of the universe are?
Apologies if my question sounds too contrarian, but I think you are making some very big assumptions about the computability of the laws of physics that are not really based on anything concrete, like a strong knowledge of the mathematics of modern physics.
"We" is humanity, as far as I know and as far as brief Googling can determine. I am not a physicist: I have a good knowledge of physics up to the high-school level, and a dabbler's knowledge of what lies beyond. I am open to correction, so feel free to offer contradictory evidence if you have any.
Are we really communicating here? I'm saying that there is a lot that physicists don't know about physics, and that it's therefore unjustified to assume, as you do, that every law of physics is computable. Nobody knows all of them, and nobody knows what nobody knows, or how much of it there is.
And you're saying that, given high-school physics and "dabbling", we know all of it and it's all computable.
Infinite computations are not the only computations that are impossible to perform. For example, suppose I asked you to enumerate (not calculate) every time unit X from the start to the end of the universe, where X is the time a single operation takes on your chosen hardware (past, present or future). You would not be able to complete this computation.
For instance, if the fastest hardware available to you performed about one operation each femtosecond, it would not have the time to enumerate all femtoseconds from the birth of the universe to its death. And that number is a finite quantity.
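A back-of-the-envelope version of that arithmetic (all figures are rough approximations, and I use the universe's current age rather than its total lifetime):

```python
# Rough arithmetic for the femtosecond example (all figures approximate).
SECONDS_PER_YEAR = 3.156e7      # ~365.25 days
UNIVERSE_AGE_YEARS = 13.8e9     # current age of the universe, not its lifetime
FS_PER_SECOND = 1e15            # femtoseconds per second

femtoseconds_so_far = UNIVERSE_AGE_YEARS * SECONDS_PER_YEAR * FS_PER_SECOND
# ~4.4e32 femtoseconds: a machine doing one operation per femtosecond,
# running since the Big Bang, would only just now be finishing the count,
# so it can never get ahead of the very clock it is trying to enumerate.
print(f"{femtoseconds_so_far:.1e}")
```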
I'm unsure how this matters. The physical universe does not prove its own consistency and does not need to. Gödel's theorems say only that certain kinds of formal systems cannot prove their own consistency, which seems quite irrelevant to simulating the universe. Please explain if I'm missing something.
1. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
2. https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
3. https://en.wikipedia.org/wiki/Undecidable_problem
4. https://en.wikipedia.org/wiki/Boltzmann_brain