Part of Penrose's point (a) is that our brains can solve problems that aren't computable. That's the crux of his brains-aren't-computers argument. So even if computers can in some sense think, their thinking will be strictly more limited than ours, because we can solve problems that they can't. (Assuming that Penrose is right.)
I wonder if LLMs have shaken the ground he stood on when he said that. Penrose never worked with a computer that could answer off-the-cuff riddles, or anything even remotely close to it.
So the trouble with this argument is that there is no evidence whatsoever that the brain can solve problems that a Turing machine can't.
There's none. No one has managed to formulate, in any rigorous way, a problem that people can solve but for which no computer algorithm can be devised.
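To be fair, there are well-defined problems that provably no algorithm can solve, the halting problem being the canonical one; the claim above is that there's no evidence brains do any better on them. A minimal sketch of the diagonalization, in Python (the function names here are my own, purely for illustration):

```python
# Sketch of the diagonalization behind the halting problem: given any
# claimed halting decider, build a program it must misclassify.

def make_contrary(halts):
    """Build a program that does the opposite of whatever `halts` predicts."""
    def contrary():
        if halts(contrary):      # decider says "contrary halts"...
            while True:          # ...so loop forever, proving it wrong
                pass
        return "halted"          # decider says "never halts" -> halt immediately
    return contrary

# A (deliberately bad) candidate decider that claims nothing ever halts.
def claims_never_halts(program):
    return False

contrary = make_contrary(claims_never_halts)
print(contrary())  # prints "halted" -- the decider was wrong about this program
```

The same construction defeats any candidate decider, not just this trivial one, which is why no algorithm can solve the problem. Penrose's move is to claim humans somehow escape this, and that's the part with no evidence behind it.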
It is basically a bunch of handwaving nonsense, like the tripartite nature of God (Father, Son, and Holy Spirit...).
Searle's Chinese room argument is slightly better, but it is still ultimately a pile of horseshit. From an external point of view, we cannot distinguish between a room full of people who do not speak Chinese but can translate it by following rigorous instructions and tables, and a room full of qualified Chinese translators. For all external purposes the two black boxes are equivalent, except that you can take a Chinese translator out of the room and still use them to translate Chinese without the rigorous instructions and reference material in the room.
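The behavioral-equivalence point can be put in code: an external observer only ever sees input/output behavior, so a rule-following room and a fluent translator pass exactly the same black-box tests. A toy Python sketch (the class names and the two-entry phrasebook are invented for illustration):

```python
# Toy sketch of the black-box equivalence at the heart of the Chinese room:
# any external test sees only behavior, so a rule-following room and a
# fluent translator are indistinguishable from outside.

RULE_BOOK = {"你好": "hello", "谢谢": "thank you"}  # made-up mini phrasebook

class RuleFollowingRoom:
    """People who don't speak Chinese, mechanically applying a lookup table."""
    def translate(self, text):
        return RULE_BOOK.get(text, "?")

class FluentTranslator:
    """A qualified translator who actually understands Chinese."""
    def translate(self, text):
        # However understanding works internally, the observable output
        # is the same mapping the rule book produces.
        return RULE_BOOK.get(text, "?")

def black_box_test(room):
    # An external observer can only probe input/output behavior.
    return [room.translate(w) for w in ["你好", "谢谢"]]

print(black_box_test(RuleFollowingRoom()) == black_box_test(FluentTranslator()))  # True
```

Searle's whole argument is that something extra ("understanding") lives inside one box and not the other, but by construction no external test can detect the difference.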
There is no good philosophical argument against Strong AI. It is a bunch of quasi-religious, "humans are special because we say so" wishy-washy nonsense.