This article introduces a geometric structure known as an "amplituhedron" as a way to understand reality.
"Some researchers are attempting to wean physics off of space-time in order to pave the way toward this deeper theory. Currently, to predict how particles morph and scatter when they collide in space-time, physicists use a complicated diagrammatic scheme invented by Richard Feynman. The so-called Feynman diagrams indicate the probabilities, or “scattering amplitudes,” of different particle-collision outcomes. In 2013, Nima Arkani-Hamed and Jaroslav Trnka discovered a reformulation of scattering amplitudes that makes reference to neither space nor time. They found that the amplitudes of certain particle collisions are encoded in the volume of a jewel-like geometric object, which they dubbed the amplituhedron. Ever since, they and dozens of other researchers have been exploring this new geometric formulation of particle-scattering amplitudes, hoping that it will lead away from our everyday, space-time-bound conception to some grander explanatory structure."
The Wikipedia page for the amplituhedron (https://en.wikipedia.org/wiki/Amplituhedron) provides some kind of simulated visualization but doesn't clarify in layman's terms how reality fits into this particular structure. It seems like an interesting concept; I wish I understood it better.
I am not a physicist, but when I first read about the amplituhedron, whatever article it was seemed to present it as an interesting way to do computation, one that was also simpler and quicker than the usual method. It never indicated any reason for it, or a model of reality; just that computing volumes produced the same results as some other method used by physicists. I don't even recall whether they proved it to be equivalent or just showed that it produced the correct result in every case they tried.
I suspect the folks reformulating QM in Clifford Algebras may have (or find) a way to explain why a geometric interpretation works the way it does.
The article uses the term "Rashomon effect" without defining it. I hadn't heard of it before, and here's what I found:
The Rashomon effect occurs when an event is given contradictory interpretations by the individuals involved. The effect is named after Akira Kurosawa's 1950 film Rashomon, in which a murder is described in four contradictory ways by four witnesses.[1] The term addresses the motives, mechanisms, and circumstances of such reporting: contested interpretations of events, disagreements over the evidence, and subjectivity versus objectivity in human perception, memory, and reporting.
The so-called "Rashomon effect" is mis-named. The point of the film was not that everyone has a different perception of the truth. It was that everybody lies to preserve their sense of self-righteousness, no matter who else it damns, even when they have nothing left to lose.
I think the point of the story is subtler than that: it's not so much that people consciously lie, it's that their recollections are shaped by their own desire to be in the right.
> And the Rashomon effect also suggests that reality isn’t structured in such a reductive, bottom-up way.
Does it? I don't see why. If there are numerous equivalent ways of describing a phenomenon, that seems to suggest that most structures have inherent isomorphisms. I don't think anyone would find this particularly surprising.
Each more complex theory turns into its simpler predecessors in certain cases. There's no magic in that, it's simple math: if v << c, then the equations of special relativity become much simpler and we get Newtonian mechanics. The same applies to the rest. If you want to discuss why our world has to obey the math at all, that's a whole other story.
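To spell that limit out (a standard textbook expansion, included here purely for illustration): expanding the relativistic energy for v << c,

```latex
E \;=\; \frac{m c^2}{\sqrt{1 - v^2/c^2}}
  \;=\; m c^2 \;+\; \tfrac{1}{2} m v^2 \;+\; \mathcal{O}\!\left(m v^4 / c^2\right)
```

so below the constant rest energy, the leading term is exactly the Newtonian kinetic energy, and the corrections vanish as v/c goes to zero.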
“I always found that mysterious, and I do not know the reason why it is that the correct laws of physics are expressible in such a tremendous variety of ways. They seem to be able to get through several wickets at the same time.”
Conjecture: because this conversation is necessarily taking place in a universe where a certain level of intellectual and technological development was able to happen, which means one in which there is an evolutionary path to attaining such, which means one in which you can build a series of models of the universe, each progressively more accurate, but the early ones still give usefully accurate predictions.
I spent a week or so (around 2013) trying to understand the notes, and I believe also a YouTube lecture. Novel named concepts were simply used without any introduction. Now I don't mind naming some new mathematical object with a relevant name (one used to calculate amplitudes), but only after rereading the notes and rewatching the lecture did it eventually click: the diagrams depicting the amplituhedron schematically with a bunch of points... it's a convex hull.

Any sane person would introduce each new concept, and I really don't mind giving it a pet name, as long as it is defined in terms of well-known objects or concepts. If one can't bring up the effort to just write "the amplituhedron is defined as the convex hull of a set of points," how can you expect your audience to put in the effort to understand what you are trying to say? Also define what the points represent, give concrete examples, say what the coordinates of these points are, etc. After repeatedly figuring out what the author was referring to with some new concept or procedure, and realizing each time that he could have said the very same thing in much simpler terms, I just gave up trying to decode the rest.

Now we are about six years later, and there is a Wikipedia page, but unless there is a clearly written text that starts from well-known concepts and introduces new ones in terms of known ones, instead of using them without introduction and assuming the reader can read your mind, I see no point in even trying. If anyone knows whether the situation has changed, please point me to some clear explanations; otherwise I have no choice but to file it under the "suspiciously obfuscated: either malicious or inept" category...
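For what it's worth, the elementary operation being described here, the volume of the convex hull of a point set, is easy to play with directly. This is a toy sketch only: the actual amplituhedron is defined in the positive Grassmannian, not as a Euclidean hull, and the points below are random rather than physical.

```python
# Toy sketch: volume of the convex hull of an arbitrary point set.
# NOT the actual amplituhedron construction; it only illustrates the
# "volume of a region spanned by points" operation the comment names.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.random((10, 3))        # ten hypothetical points in 3-D

hull = ConvexHull(points)
print(f"hull volume: {hull.volume:.4f}")          # the "amplitude-like" number
print(f"hull has {len(hull.simplices)} facets")   # the boundary structure
```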
Not that I have any real answers here (re: theories of everything), but I do have something to add regarding "laws".
The ideal gas law (a generalization of Charles's law) states that Pressure * Volume = n * R * Temperature.
That's true, and you can do a million experiments with a million balloons and gases to validate it, until you find a non-ideal gas, try to apply it to a solid, introduce moving fluids, etc.
The ideal gas law isn't a fundamental law of physics, but a consequence of the material properties and statistical outcomes of a large number of interacting molecules. Before atoms and molecules are characterized, it _does_ look like a fundamental law to an experimenter.
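A quick numeric sanity check of that emergent law (SI units; the particular numbers are just illustrative):

```python
# Minimal check of the ideal gas law P*V = n*R*T (SI units).
R = 8.314    # J/(mol K), universal gas constant
n = 1.0      # mol of gas
T = 293.15   # K (20 degrees Celsius)
V = 0.0244   # m^3, roughly one mole's volume at this T and ~1 atm

P = n * R * T / V
print(f"P = {P:.0f} Pa  (~{P / 101325:.2f} atm)")  # ~1 atm, as expected
```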
It doesn't seem to me that looking for "lower layers" of theoretical physics (holography, quantum foam, strings, what have you) is necessarily an infinite regress, doomed never to be completed. I do think the project may reach a point where the theories fail to be scientific (read: falsifiable), because it could be that reaching deep enough through the layers of physical abstraction cannot be achieved: no physical instrument can provably be built to look deeper (as an analogy, no instrument can be built to probe the full amplitude space of a quantum state).
We tend to prefer simpler theories that still manage to make good predictions. Is it really a surprise that our best theories are simple, in light of our aesthetic preferences? The "true laws of physics" could be god-awful, and we could find ourselves in a universe where we're literally incapable of knowing that due to practical constraints. It's also not surprising at all that theories that are aesthetically pleasing to us have multiple different mathematical formulations: if we assume from the outset that they're not arbitrarily defined, it seems perfectly natural that multiple interpretations would exist. Nature could be more horrifyingly complex than we could ever imagine, and even if that were the case, it's conceivable that all of the laws "good enough" to make testable predictions would have a nice mathematical structure. Even then, we're nowhere near the "end of physics": how are we supposed to know that we've not just been wading around in the shallow part of the pool, with a bottomless pit waiting for us just a few meters away?
I do wonder if we're just a bunch of drunks looking under a lamppost for our keys because that's where the light is.
We've discovered a bunch of phenomena that happen to be describable by the mathematical tools we have, because those tools are the only way we can understand them.
There's no reason to believe that any of them are fundamental to nature. There may be an uncountable infinity of phenomena out there that we are fundamentally unable to formulate in any way that lets us understand or predict them.
Our minds aren't constructed to understand reality; they're constructed to help our bodies survive on a savannah a few hundred thousand years ago.
>Our minds aren't constructed to understand reality; they're constructed to help our bodies survive on a savannah a few hundred thousand years ago.
I agree with your main point; however, I don't find this a compelling argument. Regardless of what our brains are constructed for, with pen and paper and a set of rules we can roughly approximate a Turing machine. Everything we as humanity know about computation suggests that, at minimum, this is equivalent to any computation possible using all known physical properties and interactions, with some amount of overhead (even quantum mechanics can be simulated classically).
Thus, while it's possible there is an unknown kind of interaction that we cannot understand or simulate classically, the fact that our brain is more or less designed around survival and reproduction is irrelevant; change or optimize our DNA, and the new super-human will still be, at best, roughly equivalent to a Turing machine in terms of computation.
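To make the "simulable classically, with overhead" point concrete, here is a minimal brute-force sketch (the gate choice and layout are just for illustration): the state of k qubits is a vector of 2^k complex amplitudes, so the classical cost grows exponentially, but nothing stops the simulation in principle.

```python
# Brute-force classical simulation of k qubits: exponential memory,
# but perfectly computable on an ordinary machine.
import numpy as np

k = 3
state = np.zeros(2**k, dtype=complex)
state[0] = 1.0                                   # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

def apply_1q(gate, target, psi, k):
    """Apply a single-qubit gate to qubit `target` of a k-qubit state."""
    psi = psi.reshape([2] * k)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=1)        # contract gate with that axis
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

for q in range(k):
    state = apply_1q(H, q, state, k)

print(np.round(np.abs(state) ** 2, 3))           # uniform over all 8 outcomes
```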
There are computation devices more powerful than the Turing machine, which is why we have complexity classes. (P, #P, #BQP are some examples described by different kinds of Turing-like machines.)
So calling humans equivalent to a standard Turing machine is ultimately wrong if you even start to assume quantum processes are important in our thinking (which is actually unproven as of yet).
Then you need to start with a more intricate parallel quantum Turing machine; high-end mathematics is required already. It is already non-deterministic.
And no, quantum effects cannot be simulated efficiently classically. That is the difference between the P and #P complexity classes. Add parallelism and you get #BQP. You would be able to tell something about the nature of reality by building physical instances of those problems and timing them.
Timing attack on the structure of reality, structure of time itself, anyone? (Of course the required energies would be ridiculous.)
To falsify any Turing model, you need to answer the question: what cannot be done by any given machine? Or by any machine? It is possible that there's a Gödel trap in this question.
A parallel would be to find an answer to what cannot be computed efficiently even by the best human geniuses ever. A hard introspective question, might I add. The answer probably requires building or finding something more than human.
>There are computation devices more powerful than the Turing machine, which is why we have complexity classes
Equivalent meaning computability, not complexity. If the two machines can simulate each other (even with exponential slowdown), then they're equivalent.
> cannot be simulated efficiently classically
Efficiency isn't really the point. The OP is suggesting there's a literal wall that makes something impossible to understand. That would require computational behavior that cannot be programmed as a Turing machine.
> To falsify any Turing model, you need to answer the question: what cannot be done by any given machine?
Correct. The answer is that a Turing machine can't decide the halting problem (a Turing-complete decision problem), so hypercomputation is achieved by equipping a Turing machine with an oracle for it. This actually leads to a hierarchy in which each level is strictly more powerful than the previous ones, using oracles for problems that are "more undecidable," so to speak (see https://en.wikipedia.org/wiki/Arithmetical_hierarchy).
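The undecidability step is the classic diagonal argument, which can be sketched in a few lines. The `halts` function below is hypothetical by construction; that it cannot exist is exactly the theorem.

```python
# Sketch of the diagonal argument. `halts` is a hypothetical total
# decider; the argument shows no such function can exist.
def halts(program_source: str, arg: str) -> bool:
    ...  # assumed, for contradiction, to always answer correctly

def diagonal(program_source: str) -> None:
    # Do the opposite of whatever the decider predicts.
    if halts(program_source, program_source):
        while True:      # loop forever if `halts` says we halt
            pass
    # otherwise halt immediately, though `halts` said we loop

# Feeding `diagonal` its own source makes either answer from `halts`
# wrong, so no Turing machine can implement `halts` in general.
```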
Either way, we have no evidence suggesting there's a natural process that lives higher up the arithmetical hierarchy. An example of a physical process that would fulfill this criterion: suppose there were a type of particle that only sometimes collides with others, and it turns out that two such particles collide not merely by running into each other, but only when the sum of their velocities encodes a halting Turing machine; otherwise the particles pass right through each other. A classical Turing machine cannot decide the halting problem, so it would be impossible to simulate a system of these particles: you could never know whether they will collide or not.
Now, yes, there are models of computation which cannot be simulated by a Turing machine, but I don't know of any substantial evidence that a physical process is uncomputable.
Any computation which has a finite number of possible inputs is computable, using a lookup table, and therefore using a Turing machine.
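A trivial sketch of that claim (the table values here are arbitrary):

```python
# Any function with finitely many inputs is computable by tabulating
# every case. Here, an arbitrary 2-bit function as a lookup table.
table = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def f(a: int, b: int) -> int:
    return table[(a, b)]  # pure lookup, no "computation" needed

print(f(1, 1))  # -> 1
```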
So, for any physical process to be uncomputable, there must be infinitely many possible states for it to proceed from.
And, the Bekenstein bound, if correct, would seem to suggest that no structure like that can exist within a bounded region of space.
I agree with your point, but I'm not quite so pessimistic.
Yes, we're pretty dumb. I'm continually amazed at how stupid I am, and even other 'clever' people are. The way I see it, we're only just barely sentient.
As you say, we've only just managed to get beyond the savannah. Behavioural modernity and the advanced cultures and technologies we have developed arose in an insignificant span of evolutionary time. Therefore we can't have much more than roughly the minimum intellectual capacity necessary to develop a technological civilization on a planet like ours. If we had less, we wouldn't have achieved what we have. If we had much more, well, we'd have developed it sooner, and clearly we didn't.
It's possible there were some environmental factors that held back our technological development despite our intellectual capacity, but it's hard to see what those could have been. Our planet seems pretty abundant in resources and environments for us to colonise. It's also possible that once a species develops the intellectual capacity required for technology, it takes long enough to develop the basics to build on that we had evolved significantly greater than the minimum intelligence by the time we actually developed a technological society.
This starts touching on theology. The reasons you explained are one of the main reasons I am a fundamental believer in agnosticism. That is, I believe we see reality through a human lens, therefore we're fundamentally incapable of understanding the reality of our universe unclouded by our own limitations (or assumptions). That's why I think the safest wager is not Pascal's, but to say, "Yes, there may be some order to things, but no, I don't have any idea what governs my reality, nor could I."
I don't say this to incite a religious war or discussion (I've purposely omitted the use of the word "God" here), so if we could keep things philosophical that would be appropriate for such a site. I'm curious if this school of thought has another name beyond agnosticism.
Our minds are general-purpose information processors, with significant technical debt, shortcuts, and hardwiring between software and hardware. They're optimized for adapting to a number of very different interactions and environments.
To say they're not constructed to understand reality may or may not be true, but is beside the point.
That said, I often share similar thoughts about progress towards the nature of reality.
But the cognitive revolution gave us the abstract thinking that allowed us to create mathematics. Since then we can express anything in mathematical form. It's almost impossible for us to imagine even a 4-D space, let alone an N-D space, but we can write the equations for it and solve them.
It's (sort of) discussing the work of Nima Arkani-Hamed and his colleagues, and their attempt to reformulate quantum field theory so that it doesn't refer explicitly to space-time, in analogy to how the principle of least action in Lagrangian mechanics doesn't refer explicitly to the causality/determinism of Newton's formulation (modulo some subtleties). Here's a bit more about their technical work.
The motivation for doing this is as follows. When the principle of least action was discovered, it was philosophically very puzzling: why should this equivalent, non-causal formulation of Newtonian mechanics exist? The reason, of course, turned out to be quantum mechanics! (It's a kind of stationary-phase approximation to the path-integral formulation of quantum theory, roughly speaking.) Now, in the early 21st century, there is a whole bunch of hints that space, and possibly time, are not truly fundamental quantities (in analogy to how Newtonian causality was not fundamental). So how do you proceed?
Method 1: make random guesses.
Method 2: attempt to extend existing physics in non-random ways (e.g. string theory)
Method 3: take a lesson from history
Method 1 never works; method 2 is apparently too hard this time; so they are attempting method 3: if you need to get rid of space-time, don't MODIFY existing physics, but REFORMULATE it so that it doesn't depend explicitly on that concept, in analogy to how Lagrangian mechanics reformulates Newtonian mechanics so as not to depend explicitly on causality (quantum theory is then a simple deformation away). And if you can reformulate state-of-the-art quantum field theories so they don't depend on space-time explicitly, perhaps a deeper theory will be just a simple deformation away as well.
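To spell out the stationary-phase remark above (standard material, included only for orientation): the quantum amplitude sums a phase over all paths,

```latex
\mathcal{A} \;=\; \int \mathcal{D}[x]\; e^{\, i S[x] / \hbar}
```

and as ħ goes to zero, the integral is dominated by paths where the phase is stationary, δS[x] = 0, which is precisely the principle of least action.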
Basically, the idea is that there are often multiple theories that try to explain reality, each of which is valid (though some have further-reaching explanatory power than others), and that, perhaps, there is no one theory which can fully explain reality (a Grand Unified Theory). Perhaps reality defies such simple explanations and will always produce multiple answers. The question, then, if there are always multiple answers, is: what is the question?
Each theory is pretty much based on a previous theory, with some aspect of the previous theory removed to make it a better approximation of reality.
- A really cool example of this is the scientist Laplace presenting his work on the origins of the solar system to Napoleon. Previously, it was believed that God was involved somehow, but his theory didn't include God. When Napoleon asked why God wasn't included, he said, "I had no need of that hypothesis."
Einstein's equations are a pretty good way of explaining things, but they break down in certain situations, just as Newton's equations did; and Einstein's equations really just extend Newton's a bit further, reducing to them in everyday conditions.
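For reference, the standard sense in which Newton's theory sits inside Einstein's (the textbook weak-field, slow-motion limit, where Φ is the Newtonian gravitational potential):

```latex
g_{00} \;\approx\; -\left(1 + \frac{2\Phi}{c^2}\right),
\qquad
\frac{d^2 \vec{x}}{dt^2} \;\approx\; -\nabla \Phi
```

so the geodesic equation reduces to Newton's law of gravitation when fields are weak and speeds are small.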
Scientists are now discovering ways of describing the universe that don't require Einstein's space-time to describe motion, but work just as well as his theories of general and special relativity.
I wish physics would explore the theory of a simulated universe a bit more. What if these incomplete laws of physics are flaws in the simulation; maybe dark matter is just a hardcoded variable or a hotfix for an existing problem. Is this something physics can tackle, or does it lie more in the realm of philosophy?
I don't think the simulation theory is falsifiable, so there's no practical reason to dig into it. You can explain almost anything with it; it's very close to religion and creationism in that sense.
And although we still haven't found dark matter directly, we notice its effects in the motion of stars, in gravitational lensing by galaxies, in the large-scale structure of the Universe, and even in matching features of the cosmic microwave background. Sounds like too much for a "hotfix".
It doesn't matter if "dark matter is just a hardcoded variable or a hotfix to an existing problem" -- all physics cares about is building a theory of dark matter that can make effective predictions. If a "theory of a simulated universe" can make concrete, testable predictions, then it's a physical theory. If not, it's more of a philosophical theory.
What makes you think a simulation complex enough to be a universe would have familiar programming concepts? Would you expect that of alien tech a million years more advanced?
Also, if our universe is a simulation, then we have no idea how faithful that simulation is to the world it runs inside of. Could be a lot different. Maybe our simulated physics is fanciful. Maybe the hardware it runs on is nothing like anything we're familiar with. And maybe the software is completely alien.
To me it seems like the simulation argument is self-defeating, because you can't draw any conclusions about the real world from it in order to tell that it's actually a simulation.
What does it give you though? Just because some behavior could be viewed as equivalent to a "hotfix" for something doesn't mean that the universe is a simulation. And just because the universe is a simulation doesn't mean that it needs hotfixes. I don't even see how it could be considered philosophy, much less physics.
"Some researchers are attempting to wean physics off of space-time in order to pave the way toward this deeper theory. Currently, to predict how particles morph and scatter when they collide in space-time, physicists use a complicated diagrammatic scheme invented by Richard Feynman. The so-called Feynman diagrams indicate the probabilities, or “scattering amplitudes,” of different particle-collision outcomes. In 2013, Nima Arkani-Hamed and Jaroslav Trnka discovered a reformulation of scattering amplitudes that makes reference to neither space nor time. They found that the amplitudes of certain particle collisions are encoded in the volume of a jewel-like geometric object, which they dubbed the amplituhedron. Ever since, they and dozens of other researchers have been exploring this new geometric formulation of particle-scattering amplitudes, hoping that it will lead away from our everyday, space-time-bound conception to some grander explanatory structure."
The wikipedia page for amplituhedron provides some kind of simulated visualization but doesn't clarify in layman's terms how reality fits into this particular structure. It seems like an interesting concept, I wish I understood it better.
https://en.wikipedia.org/wiki/Amplituhedron