The Wolfram Physics Project:
A One-Year Update

How’s It Going?

When we launched the Wolfram Physics Project a year ago today, I was fairly certain that—to my great surprise—we’d finally found a path to a truly fundamental theory of physics, and it was beautiful. A year later it’s looking even better. We’ve been steadily understanding more and more about the structure and implications of our models—and they continue to fit beautifully with what we already know about physics, particularly connecting with some of the most elegant existing approaches, strengthening and extending them, and involving the communities that have developed them.

And if fundamental physics wasn’t enough, it’s also become clear that our models and formalism can be applied even beyond physics—suggesting major new approaches to several other fields, as well as allowing ideas and intuition from those fields to be brought to bear on understanding physics.

Needless to say, there is much hard work still to be done. But a year into the process I’m completely certain that we’re “climbing the right mountain”. And the view from where we are so far is already quite spectacular.

We’re still mostly at the stage of exploring the very rich structure of our models and their connections to existing theoretical frameworks. But we’re on a path to being able to make direct experimental predictions, even if it’ll be challenging to find ones accessible to present-day experiments. Quite independently of this, though, what we’ve done so far is already practical and useful—providing new, streamlined methods for computing several important kinds of existing physics results.

The way I see what we’ve achieved so far is that it seems as if we’ve successfully found a structure for the “machine code” of the universe—the lowest-level processes from which all the richness of physics and everything else emerges. It certainly wasn’t obvious that any such “machine code” would exist. But I think we can now be confident that it does, and that in a sense our universe is fundamentally computational all the way down. But even though the foundations are different, the remarkable thing is that what emerges aligns with important mathematical structures we already know, enhancing and generalizing them.

From four decades of exploring the computational universe of possible programs, my most fundamental takeaway has been that even simple programs can produce immensely complex behavior, and that this behavior is usually computationally irreducible, in the sense that it can’t be predicted by anything much less than just running the explicit computation that produced it. And at the level of the machine code our models very much suggest that our universe will be full of such computational irreducibility.

But an important part of the way I now understand our Physics Project is that it’s about what a computationally bounded observer (like us) can see in all this computational irreducibility. And the key point is that within the computational irreducibility there are inevitably slices of computational reducibility. And, remarkably, three such slices correspond exactly to the three great theories of existing physics: general relativity, quantum mechanics and statistical mechanics.

And in a sense, over the past year, I’ve increasingly come to view the whole fundamental story of science as being about the interplay between computational irreducibility and computational reducibility. The computational nature of things inevitably leads to computational irreducibility. But there are slices of computational reducibility that inevitably exist on top of this irreducibility that are what make it possible for us—as computationally bounded entities—to identify meaningful scientific laws and to do science.

There’s a part of this that leads quite directly to specific formal developments—and, for example, specific mathematics. But there’s also a part that leads to a fundamentally new way of thinking about things, one that, for example, provides new perspectives on issues—like the nature of consciousness—that have in the past seemed largely in the domain of philosophy rather than science.

What Is Our Universe Made Of?

Spatial hypergraphs. Causal graphs. Multiway graphs. Branchial graphs. A year ago we had the basic structure of our models and we could see how both general relativity and quantum mechanics could arise from them. And it could have been that as we went further—and filled in more details—we’d start seeing issues and inconsistencies. But nothing of the sort has happened. Instead, it seems as if at every turn more and more seems to fit beautifully together—and more and more of the phenomena we know in physics seem to inevitably emerge as simple and elegant consequences of our models.

It all starts—very abstractly—with collections of elements and relations. And as I’ve gotten more comfortable with our models, I’ve started referring to those elements by what might almost have been an ancient Greek term: atoms of space. The core concept is then that space as we know it is made up of a very large number of these atoms of space, connected by a network of relations that can be represented by a hypergraph. And in our models there’s in a sense nothing in the universe except space: all the matter and everything else that “exists in space” is just encoded in the details of the hypergraph that corresponds to space.

Time in our models is—at least initially—something fundamentally different from space: it corresponds to the computational process of successively applying rules that transform the structure of the hypergraph. And in a sense the application of these rules represents the fundamental operation of the universe. And a key point is that this will inevitably show the phenomenon of computational irreducibility—making the progress of time an inexorable and irreducible computational process.
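
To make this concrete, here’s a minimal sketch in the Wolfram Language, using the WolframModel function from the Wolfram Function Repository. The rule shown is just one simple example of the kind we study, not a candidate rule for our universe:

(* a hypergraph: a collection of relations between "atoms of space" *)
initial = {{0, 0}, {0, 0}};

(* a simple transformation rule of the kind we study *)
rule = {{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}};

(* "time" is the successive application of the rule: run 10 steps and plot *)
ResourceFunction["WolframModel"][rule, initial, 10, "FinalStatePlot"]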

A striking feature of our models is that at the lowest level there’s nothing constant in our universe. At every moment even space is continually being remade by the action of the underlying rules—and indeed it is precisely this action that knits together the whole structure of spacetime. And though it still surprises me that it can be said so directly, it’s possible to identify energy as essentially just the amount of activity in space, with mass in effect being the “inertia” or persistence of this activity.

At the lowest level everything is just atoms of space “doing their thing”. But the crucial result is that—with certain assumptions—there’s large-scale collective behavior that corresponds exactly to general relativity and the observed continuum structure of spacetime. Over the course of the year, the derivation of this result has become progressively more streamlined. And it’s clear it’s all about what a computationally bounded observer will be able to conclude about underlying computationally irreducible processes.

But there’s then an amazing unification here. Because at a formal level the setup is basically the same as for molecular dynamics in something like a gas. Again there’s computational irreducibility in the underlying behavior. And there’s a computationally bounded observer, usually thought of in terms of “coarse graining”. And for that observer—in direct analogy to an observer in spacetime—one then derives the Second Law of Thermodynamics, and the equations of continuum fluid behavior.

But there’s an important feature of both these derivations: they’re somehow generic, in the sense that they don’t depend on underlying details like the precise nature of the molecules in the gas, or the atoms of space. And what this means is that both thermodynamics and relativity are general emergent laws. Regardless of what the precise underlying rules are, they’ll basically always be what one gets in a large-scale limit.

It’s quite remarkable that relativity in a sense formally comes from the same place as thermodynamics. But it’s the genericity of general relativity that’s particularly crucial in thinking about our models. Because it implies that we can make large-scale conclusions about physics without having to know what specific rule is being applied at the level of the underlying hypergraph.

Much as with hypersonic flow in a gas, however, there will be extreme situations in which one will be able to “see beneath” the generic continuum behavior—and tell that there are discrete atoms of space with particular behavior. In other words, one will be able to see corrections to Einstein’s equations—corrections that depend on the fact that space is actually a hypergraph with definite rules, rather than a continuous manifold.

One important feature of our spatial hypergraph is that—unlike our ordinary experience of space—it doesn’t intrinsically have any particular dimension. Dimension is an emergent large-scale feature of the hypergraph—and it can be an integer, or not, and it can, for example, vary with position and time. So one of the unexpected implications of our models is that there can be dimension fluctuations in our universe. And in fact it seems likely that our universe started essentially infinite-dimensional, only gradually “cooling” to become basically three-dimensional. And though we haven’t yet worked it out, we expect there’ll be a “dimension-changing cosmology” that may well have definite predictions for the observed large-scale structure of our universe.
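
One way to build intuition for emergent dimension is through the growth rate of “ball volumes”: in d dimensions, the number of atoms within graph distance r of a point grows roughly like r^d. Here’s a minimal sketch of that estimate, assuming the same illustrative rule as above; our actual analysis uses more careful fits, averaged over many vertices:

(* evolve, then treat the final hypergraph as an ordinary graph *)
state = ResourceFunction["WolframModel"][
   {{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}},
   {{0, 0}, {0, 0}}, 12, "FinalState"];
g = Graph[UndirectedEdge @@@ state];

(* number of atoms within graph distance r of vertex v *)
vol[v_, r_] := VertexCount[NeighborhoodGraph[g, v, r]];

(* log-difference estimate of the effective dimension around v at scale r *)
dimEstimate[v_, r_] := (Log[vol[v, r + 1]] - Log[vol[v, r]])/(Log[r + 1] - Log[r]);

dimEstimate[First[VertexList[g]], 3]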

The underlying discreteness—and variable dimension—of space in our models has many other implications. Traditional general relativity suggests certain exotic phenomena in spacetime, like event horizons and black holes—but ultimately it’s limited by its reliance on describing spacetime in terms of a continuous manifold. In our models, there are all sorts of possible new exotic phenomena—like change in spacetime topology, space tunnels and dynamic disconnection of the hypergraph.

What happens if one sets up a black hole that spins too rapidly? In our models, a piece of spacetime simply disconnects. And it’s been interesting to see how much more direct our models allow one to be in analyzing the structure of spacetime, even in cases where traditional general relativity gives one a hint of what happens.

Calculus has been a starting point for almost all traditional mathematical physics. But our models in a sense require a fundamental generalization of calculus. We have to go beyond the notion of an integer number of “variables” corresponding to particular dimensions, to construct a kind of “hypercalculus” that can for example generalize differential geometry to fractional dimensional space.

It’s a challenging direction in mathematics, but the concreteness of our models helps greatly in defining and exploring what to do—and in seeing what it means to go “below whole variables” and build everything up from fragmentary discrete connections. And one of the things that’s happened over the past year is that we’ve been steadily recapitulating the history of calculus-like mathematics, progressively defining generalizations of notions like tangent spaces, tensors, parallel transport, fiber bundles, homotopy classes, Lie group actions and so on, that apply to limits of our hypergraphs and to the kind of space to which they correspond.

One of the ironies of practical investigations of traditional general relativity is that even though the theory is set up in terms of continuous manifolds and continuous partial differential equations, actual computations normally involve doing “numerical relativity” that uses discrete approximations suitable for digital computers. But our models are “born digital” so nothing like this has to be done. Of course, the actual number of atoms of space in our real universe is immensely larger than anything we can simulate.

But we’ve recently found that even much more modest hypergraphs are already sufficient to reproduce the same kind of results that are normally found with numerical relativity. And so for example we can directly see in our models things like the ring-down of merging black holes. And what’s more, as a matter of practical computation, our models seem potentially more efficient at generating results than numerical relativity. So that means that even if one isn’t interested in models of fundamental physics and in the “underlying machine code” of the universe, our project is already useful—in delivering a new and promising method for doing practical computations in general relativity.

And, by the way, the method isn’t limited to general relativity: it looks as if it can be applied to other kinds of systems based on PDEs—like stress analysis and biological growth. Normally one thinks of taking some region of space, and approximating it by a discrete mesh, that one might adapt and subdivide. But with our method, the hypergraphs—with their variable dimensions—provide a richer way to approximate space, in which subdivision is done “automatically” through the actual dynamics of the hypergraph evolution.

Can We Finally Understand Quantum Mechanics?

I already consider it very impressive and significant that our models can start from simple abstract rules and—in some sense inevitably—end up with the structure of space and time as we know them. But what I consider yet more impressive and significant is that these very same models also inevitably yield quantum mechanics.

It’s often been said (for example by my late friend Richard Feynman) that “nobody really understands quantum mechanics”. But I’m excited to be able to say that—particularly after this past year—I think that we are finally beginning to actually truly understand quantum mechanics. Some aspects of it are at first somewhat mind-bending, but given our new understanding we’re in a position to develop more and more accessible ways of thinking about it. And with our new understanding comes a formalism that can actually be applied in many other places—and from these applications we can expect that in time what now seem like bizarre features of quantum mechanics will eventually seem much more familiar.

In ordinary classical physics, the typical setup is to imagine that definite things happen, and that in a sense every system follows a definite thread of behavior through time. But the key idea of quantum mechanics is to imagine that many threads of possible behavior are followed, with a definite outcome being found only through a measurement made by an observer.

And in our models this picture is not just conceivable, but inevitable. The rules that operate on our underlying spatial hypergraph specify that a particular configuration of elements and relations will be transformed into some other one. But typically there will be many different places in the spatial hypergraph where any such transformation can be applied. And each possible sequence of such updating events defines a particular possible “thread of history” for the system.

A key idea of our models is to consider all those possible threads of history—and to represent these in a single object that we call a multiway graph. In the most straightforward way of setting this up, each node in the multiway graph is a complete state of the universe, joined to whatever states are reached from it by all possible updating events that can occur in it.

A particular possible history for the universe then corresponds to a particular path through the multiway graph. And the crucial point is that there is branching—and merging—in the multiway graph leading in general to a complicated interweaving of possible threads of history.

But now imagine slicing across the multiway graph—in a sense sampling many threads of history at some particular stage in their evolution. If we were to look at these threads of history separately there might not seem to be any relation between them. But the way they’re embedded in the multiway graph inevitably defines relations between them. And for example we can imagine just saying that any two states in a particular slice of the multiway graph are related if they have a common ancestor, and are each just a result of a different event occurring in that ancestor state. And by connecting such states we form what we call a branchial graph—a graph that captures the relations between multiway branches.

But just like we imagine our spatial hypergraphs limit to something like ordinary continuous physical space, so also we can imagine that our branchial graphs limit to something we can call branchial space. And in our models branchial space corresponds to a space of quantum states, with the branchial graph in effect providing a map of the entanglements between those states.
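
Here’s a minimal sketch of these structures, using the MultiwaySystem function from the Wolfram Function Repository, and a toy string-substitution system rather than a full hypergraph model:

(* the multiway graph of all possible threads of history *)
ResourceFunction["MultiwaySystem"][
  {"A" -> "AB", "B" -> "A"}, {"A"}, 5, "StatesGraph"]

(* a branchial graph: states in the final slice, linked by common ancestry *)
ResourceFunction["MultiwaySystem"][
  {"A" -> "AB", "B" -> "A"}, {"A"}, 5, "BranchialGraph"]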

In ordinary physical space we know that we can define coordinates that label different positions. And one of the things we’re understanding with progressively more clarity is also how to set up coordinatizations of branchial space—so that instead of just talking individually about “points in branchial space” we can talk more systematically about what happens “as a function of position” in branchial space.

But what is the interpretation of “position” in branchial space? It turns out that it is essentially the phase of a quantum amplitude. In the traditional formalism of quantum mechanics, every different state has a certain complex number associated with it that is its quantum amplitude. In our models, that complex number should be thought of in two parts. Its magnitude is associated with a combinatorial counting of possible paths in the multiway graph. But its phase is “position in branchial space”.

Once one has a notion of position, one is led to talk about motion. And in classical mechanics and general relativity a key concept is that things in physical space move by following shortest paths (“geodesics”) between different positions. When space is flat these paths are ordinary straight lines, but when there is curvature in space—corresponding in general relativity to the presence of gravity—the paths are deflected. But what the Einstein equations then say is that curvature in space is associated with the presence of energy-momentum. And in our models, this is exactly what happens: energy-momentum is associated with the presence of update events in the spatial hypergraph, and these lead to curvature and a deflection of geodesics.

So what about motion in branchial space? Here we are interested in how “bundles of nearby histories” progress through time in the multiway graph. And it turns out that once again we are dealing with geodesics that are deflected by the presence of update events that we can interpret as energy-momentum.

But now this deflection is not in physical space but in branchial space. The fundamental underlying mathematical structure is the same in both cases. But the interpretation in terms of traditional physics is different. And in what to me is a singularly beautiful result of our models it turns out that what gives the Einstein equations in physical space gives the Feynman path integral in branchial space. Or in other words, quantum mechanics is the same as general relativity, except in branchial space rather than physical space.

But, OK, so how do we assign positions in branchial space? It’s a mathematically complicated thing to do. Nearly a year ago we found a kind of trick way to do it for a standard simple quantum setup: the double-slit experiment. But over the course of the year, we’ve developed a much more systematic approach based on category theory and categorical quantum mechanics.

In its usual applications in mathematics, category theory talks about things like the patterns of mappings (morphisms) between definite named kinds of objects. But in our models what we want is just the “bulk structure” of category theory, and the general idea of patterns of connections between arbitrary unnamed objects. It’s very much like what we do in setting up our spatial hypergraph. There are symbolic expressions—like in the Wolfram Language—that define structures associated with named kinds of things, and on which transformations can be applied. But we can also consider “bulk symbolic expressions” that don’t in effect “name every element of space”, and where we just consider their overall structure.

It’s an abstract and elaborate mathematical story. But the key point is that in the end our multiway formalism can be shown to correspond to the formalism that has been developed for categorical quantum mechanics—which in turn is known to be equivalent to the standard formalism of quantum mechanics.

So what this means is that we can take a description of a quantum system—say a quantum circuit—and in effect “compile” it into an equivalent multiway system. For one thing, we can think of this as a “proof by compilation”: we know our models reproduce standard quantum mechanics, because standard quantum mechanics can in effect just be systematically compiled into our models.

But in practice there’s something more: by really getting at the essence of quantum mechanics, our models can provide more efficient ways to do actual computations in quantum mechanics. And for example we’ve got recent results on using automated theorem proving methods within our models to more efficiently optimize practical quantum circuits. Much as in the case of general relativity, it seems that by “going underneath” the standard formalism of physics, we’re able to come up with more efficient ways to do computations, even for standard physics.

And what’s more, the formalism we have potentially applies to things other than physics. I’ll talk more about this later. But here let me mention a simple example that I’ve tried to use to build intuition about quantum mechanics. If you have something like tic-tac-toe, you can think of all possible games that can be played as paths through a multiway graph in which the nodes are possible configurations of the tic-tac-toe board. Much like in the case of quantum mechanics, one can define a branchial graph—and then one can start thinking about the analogs of all kinds of “quantum” effects, and how there are just a few final “classical” outcomes for the game.
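
Here’s a minimal sketch of that tic-tac-toe multiway graph, ignoring for simplicity the detection of completed wins (which would terminate branches early):

(* a board is a list of 9 cells: 0 = empty, 1 = X, 2 = O; X moves first *)
moves[board_] :=
  With[{player = If[EvenQ[Count[board, 1] + Count[board, 2]], 1, 2]},
    ReplacePart[board, # -> player] & /@ Flatten[Position[board, 0]]];

(* the multiway graph of all possible games, here to depth 3 *)
NestGraph[moves, ConstantArray[0, 9], 3]

Identical board configurations reached by different move orders automatically merge into single nodes, giving just the kind of branching and merging of histories that shows up in the quantum case.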

Most practical computations in quantum mechanics are done at the level of quantum amplitudes—which in our setup corresponds essentially to working out the evolution of densities in branchial space. But in a sense this just tells us that there are lots of different threads of history that a particular system could follow. So how is it then that we come to perceive definite things as happening in the world?

The traditional formalism of quantum mechanics essentially by fiat introduces the so-called Born rule, which in effect says how densities in branchial space can be converted to probabilities of different specific outcomes. But in our models we can “go inside” this “process of measurement”.

The key idea—which has become clearer over the course of this year—is at first a bit mind-bending. Remember that our models are supposed to be models for everything in the universe, including us as observers of the universe. In thinking about space and time we might at first imagine that we could just independently trace the individual time evolution of, for example, different atoms of space. But if we’re inside the system no such “absolute tracing” is possible; instead all we can ever perceive is the graph of causal relationships of different events that occur. In a sense we’re only “plugged into” the universe through the causal effects that the universe has on us.

OK, so what about the quantum case? We want to know what’s going on in the multiway graph of all possible histories. But we’re part of that graph, with many possible histories ourselves. So in a sense what we have to think about is how a “branching brain” perceives a “branching universe”. People have often imagined that somehow having a “conscious observer” is crucial to “making measurements” in quantum mechanics. And I think we can now understand how that works. It seems as if the essence of being a “conscious observer” is precisely having a “single thread of experience”—or in other words conflating the different histories in different branches.

Of course, it is not at all obvious that doing this will be consistent. But in our models there is the notion of causal invariance. In the end this doesn’t have to be an intrinsic feature of specific low-level rules one attributes to the universe; as I’ll talk about a bit later, it seems to be an inevitable emergent feature of the structure of what we call rulial space. But what’s important about causal invariance is that it implies that different possible threads of history must in effect in the end always have the same causal structure—and the same observable causal graph that describes what happens in the universe.

It’s causal invariance that makes different reference frames in physical space (corresponding, for example, to different states of motion) work the same, and that leads to relativistic invariance. And it’s also causal invariance (or at least eventual causal invariance) that makes the conflation of quantum histories be consistent—and makes there be a meaningful notion of objective reality in quantum mechanics, shared by different observers.

There’s more to do in working out the detailed mechanics of how threads of history can be conflated. It can be thought of as closely related to the addition of “completion lemmas” in automated theorem proving. Some aspects of it can be thought of as a “convention”—analogous to a choice of reference frame. But the structure of the model implies certain important “physical constraints”.

We’ve often been asked: “What does all this mean for quantum computing?” The basic idea of quantum computing—captured in a minimal form by something like a multiway Turing machine—is to do different computations in parallel along different possible threads of history. But the key issue (one I’ve actually wondered about since the early 1980s) is then how to corral those threads of history together to figure out a definite answer for the computation. And our models give us ways to look “inside” that process, to see what’s involved, and how much time it should take. We’re still not sure about the answer, but the preliminary indication is that, at least at a formal level, quantum computers aren’t going to come out ahead. (In practice, of course, investigating physical processes other than traditional semiconductor electronics will surely lead to faster, perhaps dramatically faster, computers, even if they’re not “officially quantum”.)
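
As a sketch of the minimal setup, here’s a multiway (nondeterministic) Turing machine written from scratch, with an illustrative rule set and a finite tape for simplicity; a configuration is {state, tape, head position}, and each step applies every rule that matches:

(* rules: {state, symbol} -> all possible {new state, new symbol, head offset} *)
rules = <|{1, 0} -> {{1, 1, 1}, {2, 1, -1}}, {1, 1} -> {{2, 0, 1}},
          {2, 0} -> {{1, 1, 1}}, {2, 1} -> {{1, 0, -1}}|>;

(* one multiway step: apply every matching rule to the configuration *)
step[{s_, tape_, i_}] :=
  {#[[1]], ReplacePart[tape, i -> #[[2]]],
     Clip[i + #[[3]], {1, Length[tape]}]} & /@
    Lookup[rules, Key[{s, tape[[i]]}], {}];

(* the multiway graph of all possible computational histories *)
NestGraph[step, {1, ConstantArray[0, 5], 3}, 4]

The branches here are exactly the “parallel threads” of the computation; the hard part is corralling them back together into a single answer, which corresponds to the measurement process discussed above.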

One of the surprises to me this year has been just how far we can get in exploring quantum mechanics without ever having to talk about actual particles like electrons or photons. Actual quantum experiments usually involve particles that are somehow localized to particular positions in space. But it seems as if the essentials of quantum mechanics can actually be captured without depending on particles, or space.

What are particles in our models? Like everything else in the universe, they can be thought of as features of space. The general picture is that in the spatial hypergraph there are continual updates going on, but most of them are basically just concerned with “maintaining the structure of space”. But within that structure, we imagine that there can be localized pieces that have a certain stability that allows them to “move largely unchanged through space” (even as “space itself” is continually getting remade). And these correspond to particles.

Analogous to things like vortices in fluids, or black holes in spacetime, we can view particles in our models as some kind of “topological obstructions” that prevent features of the hypergraph from “readily unraveling”. We’ve made some progress this year in understanding what these topological obstructions might be like, and how their structure might be related to things like the quantization of particle spin, and in general the existence of discrete quantum numbers.

It’s an interesting thing to have both “external space” and “internal quantum numbers” encoded together in the structure of the spatial hypergraph. But we’ve been making progress at seeing how to tease apart different features of things like homotopy and geometry in the limit of large hypergraphs, and how to understand the relations between things like foliations and fibrations in the multiway graph describing hypergraph evolution.

We haven’t “found the electron” yet, but we’re definitely getting closer. And one of the things we’ve started to identify is how a fiber bundle structure can emerge in the evolution of the hypergraph—and how local gauge invariance can arise. In a discrete hypergraph it’s not immediately obvious even how something like limiting rotational symmetry would work. We have a pretty good idea how hypergraphs can limit on a large scale to continuous “spatial” manifolds. And it’s now becoming clearer how things like the correspondences between collections of geodesics from a single point can limit to things like continuous symmetry groups.

What’s very nice about all of this is how generic it’s turning out to be. It doesn’t depend on the specifics of the underlying rules. Yes, it’s difficult to untangle, and to set up the appropriate mathematics. But once one’s done that, the results are very robust.

But how far will that go? What will be generic, and what not? Spatial isotropy—and the corresponding spherical symmetry—will no doubt be generic. But what about local gauge symmetry? The SU(3)×SU(2)×U(1) that appears in the Standard Model of particle physics seems on the face of it quite arbitrary. But it would be very satisfying if we were to find that our models inevitably imply a gauge group that is, say, a subgroup of E(8).

We haven’t finished the job yet, but we’ve started understanding features of particle physics like CPT invariance (P and T are space and time inversion, and we suspect that the charge conjugation operation C is “branchial inversion”). Another promising possibility relates to the distinction between fermions and bosons. We’re not sure yet, but it seems as if Fermi–Dirac statistics may be associated with multiway graphs where we see only non-merging branches, while Bose–Einstein statistics may be associated with ones where we see all branches merging. And it may even turn out that spinors are simply associated with directed rather than undirected spatial hypergraphs.

It’s not yet clear how much we’re going to have to understand particles in order to see things like the spin-statistics connection, or whether—like in basic quantum mechanics—we’re going to be able to largely “factor out” the “spatial details” of actual particles. And as we begin to think about quantum field theory, it’s again looking as if there’ll be a lot that can be said in the “bulk” case, without having to get specific about particles. And just as we’ve been able to do for spacetime and general relativity, we’re hoping it’ll be possible to do computations in quantum field theory directly from our models, providing, for example, an alternative to things like lattice gauge theory (presumably with a more realistic treatment of time).

When we mix spatial hypergraphs with multiway graphs we inevitably end up with pretty complex structures—and ones that at least in the first instance tend to be full of redundancy. In the most obvious “global” multiway graph, each multiway graph node is in effect a complete state of the universe, and one’s always (at least conceptually) “copying” every part of this state (i.e. every spatial hypergraph node) at every update, even though only a tiny part of the state will actually be affected by the update.

So one thing we’ve been working on this year is defining more local versions of multiway systems. One version of this is based on what I call “multispace”, in which one effectively “starts from space”, then lets parts of it “bow out” where there are differences between different multiway branches. But a more scalable approach is to make a multiway graph not from whole states, but instead from a mixture of update events and individual “tokens” that knit together to form states.

There’s a definite tradeoff, though. One can set up a “token-event graph” that pretty much completely avoids redundancy. But the cost is that it can be very difficult to reassemble complete states. The full problem of reassembly no doubt runs into the computational irreducibility of the underlying evolution. But presumably there’s some limited form of reassembly that captures actual physical measurements, and that can be done by computationally bounded observers.
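
Here’s a toy sketch of the token-event idea (the names tok and ev are purely illustrative). Tokens carry unique ids, and each event node consumes just the tokens it uses and produces new ones, so whole states are never copied. For simplicity, the “events” here just combine pairs of initial number tokens:

(* tokens tok[id, value] with unique ids; event nodes ev[id] *)
init = MapIndexed[tok[First[#2], #1] &, {1, 2, 3}];

(* one event per pair of initial tokens: consume both, produce their sum *)
edges = Flatten@MapIndexed[
    Function[{pair, idx},
      With[{e = ev[First[idx]],
            out = tok[{First[idx], "out"}, Total[pair[[All, 2]]]]},
        Append[DirectedEdge[#, e] & /@ pair, DirectedEdge[e, out]]]],
    Subsets[init, {2}]];

Graph[edges, VertexLabels -> "Name"]

Each initial token can feed several alternative events without anything being copied; it is reassembling complete states from a graph like this that is the difficult part.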

Towards Experimental Implications

In assessing a scientific theory the core question to ask is whether you get out more than you put in. It’s a bad sign if you carefully set up some very detailed model, and it still can’t tell you much. It’s a good sign if you just set up a simple model, and it can tell you lots of things. Well, by this measure, our models are the most spectacular I have ever seen. A year ago, it was already clear that the models had a rich set of implications. But over the course of this year, it feels as if more and more implications have been gushing out.

And the amazing thing is that they all seem to align with what we know from physics. There’s been no tweaking involved. Yes, it’s often challenging to work out what the models imply. But when we do, it always seems to agree with physics. And that’s what makes me now so confident that our models really do represent a correct fundamental theory of physics.

It’s been very interesting to see the methodology of “proof by compilation”. Do our models correctly reproduce general relativity? We can “compile” questions in general relativity into our models—then effectively run at the level of our “machine code”, and generate results. And what we’ve found is that, yes, compiling into our models works, giving the same results as we would get in the traditional theory, though, as it happens, potentially more efficiently.

We’ve found the same thing for quantum mechanics. And maybe we’ll find the same thing also for quantum field theory (where the traditional computations are much harder).

We’ve also been looking at specific effects and phenomena in existing physics—and we’re having excellent success not only in reproducing them in our models (and finding ways to calculate them) but also in (often for the first time) fundamentally understanding them. But what about new effects and phenomena that aren’t seen or expected in existing physics? Especially surprising ones?

It’s already very significant when a theory can efficiently explain things that are already known. But it’s a wonderful “magic trick” if a theory can say “This is what you’ll see”, and then that’s what’s seen in some actual experiment. Needless to say, it can be very difficult to figure out detailed predictions from a theory (and historically it’s often taken decades or even centuries). And when you’re dealing with something that’s never been seen before, it’s often difficult to know if you’ve included everything you need to get the right answer, both in working out theoretical predictions, and in making experimental measurements.

But one of the interesting things about our models is how structurally different they are from existing physics. And even before we manage to make detailed quantitative predictions, the very structure of our models implies the possibility of a variety of unexpected and often bizarre phenomena.

One class of such phenomena relates to the fact that in our models the dimension of space is dynamic, and does not just have a fixed integer value. Our expectation is that in the very early universe the dimension of space was effectively infinite, gradually “cooling” to approximately 3. And in this setup, there should have been “dimension fluctuations”, which could perhaps have left a recognizable imprint on the cosmic microwave background, or other large-scale features of the universe.

It’s also possible that there could be dimension fluctuations still in our universe today, either as relics from the early universe, or as the result of gravitational processes. And if photons propagate through such dimension fluctuations, we can expect strange optical effects, though the details are still to be worked out. (One can also imagine things like pulsar timing anomalies, or effects on gravitational waves—or just straight local deviations from the inverse square law. Conceivably quantum field theoretic phenomena like anomalous magnetic moments of leptons could be sensitive dimension probes—though on small scales it’s difficult to distinguish dimension change from curvature. Or maybe there would be anomalies or magnetic monopoles made possible by noninteger dimensionality.)

A core concept of our models is that space (and time) are fundamentally discrete. So how might we see signs of this discreteness? There’s really only one fundamental unknown free parameter in our models (at least at a generic level), and there are many seemingly very different experiments that could determine it. But without having the value of this parameter, we don’t ultimately know the scale of discreteness in our models.

We have a (somewhat unreliable) estimate, however, that the elementary length might be around 10^-90 meters (and the elementary time around 10^-100 seconds). But these are nearly 70 orders of magnitude smaller than anything directly probed by present-day experiments.

So can we imagine any way to detect discreteness on such scales? Conceivably there could be effects left over from a time when the whole universe was very small. In the current universe there could be a signature of momentum discreteness in “maximum boosts” for sufficiently light particles. Or maybe there could be “shot noise” in the propagation of particles. But the best hope for detecting discreteness of spacetime seems to be in connection with large gravitational fields.

Eventually our models must imply corrections to Einstein’s equations. But at least in the most obvious estimates these would only become significant when the scale of curvature is comparable to the elementary length. Of course, it’s conceivable that there could be situations with, say, a logarithmic signature of discreteness, allowing a more effective “gravitational microscope” to be constructed.

In current studies of general relativity, the potentially most accessible “extreme situation” is a spinning black hole close to critical angular momentum. And in our models, we already have direct simulations of this. And what we see is that as we approach criticality there starts to be a region of space that’s knitted into the rest of space by fewer and fewer updating events. And conceivably when this happens there would be “shot noise”, say visible in gravitational waves.

There are other effects too. In a kind of spacetime analog of vacuum polarization, the discreteness of spacetime should lead to a “black hole wind” of outgoing momentum from an event horizon—though the effect is probably only significant for elementary-length-scale black holes. (Such effects might lead to energy loss from black holes through a different “mode of spacetime deformation” than ordinary gravitational radiation.) Another effect of having a discrete structure to space is that information transmission rates are only “statistically” limited to the speed of light, and so fluctuations are conceivable, though again most likely only on elementary-length-type scales.

In general the discreteness of spacetime leads to all sorts of exotic structures and singularities in spacetime not present in ordinary general relativity. Notable potential features include dynamic topology change, “space tunnels”, “dimension anomalies” and spatial disconnection.

We imagine that in our models particles are some kind of topological obstructions in the spatial hypergraph. And perhaps we will find even quite generic results for the “spectrum” of such obstructions. But it’s also quite possible that there will be “topologically stable” structures that aren’t just like point particles, but are something more exotic. By the way, in computing things like the cosmological constant—or features of dark energy—we need to compare the “total visible particle content” with the total activity in the spatial hypergraph, and there may be generic results to be had about this.

One feature of our models is that they imply that things like electrons are not intrinsically of zero size—but in fact are potentially quite large compared to the elementary length. Their actual size is far out of range of any anticipated experiments, but the fact that they involve so many elements in the underlying spatial hypergraph suggests that there might be particles—that I’ve called oligons—that involve many fewer, and that might have measurable cosmological or astrophysical effects, or even be directly detectable as some kind of very-low-mass dark matter.

In thinking about particles, our models also make one think about some potentially highly exotic possibilities. For example, perhaps not every photon in the universe with given energy-momentum and polarization is actually identical. Maybe they have the same “overall topological structure”, but different detailed configurations of (say) the multiway causal graph. And maybe such differences would have detectable effects on sufficiently large coherent collections of photons. (It may be more plausible, however, that particles act a bit like tiny black holes, with their “internal state” not evident outside.)

When it comes to quantum mechanics, our models again have some generic predictions—the most obvious of which is the existence of a maximum entanglement speed ζ, that is the analog of the speed of light, but in branchial space. In our models, the scale of ζ is directly connected to the scale of the elementary length, so measuring one would determine the other—and with our (rather unreliable) estimate for the elementary length ζ might be around 10^5 solar masses per second.

There are a host of “relativity-analog” effects associated with ζ, an example being the quantum Zeno effect, which is effectively time dilation associated with rapidly repeated measurement. And conceivably there is some kind of atomic-scale (or gravitational-wave-detector-deformation-scale) “measurement from the environment” that could be sensitive to this—perhaps associated with what might be considered “noise” for a quantum computer. (By the way, ζ potentially also defines limitations on the effectiveness of quantum computing, but it’s not clear how one would disentangle “engineering issues”.)

Then there are potential interactions between quantum mechanics and the structure of spacetime—perhaps for example effects of features of spacetime on quantum coherence. But probably the most dramatic effects will be associated with things like black holes, where for example the maximum entanglement speed should represent an additional limitation on black hole formation—that with our estimate for ζ might actually be observable in the near term.

Historically, general relativity was fortunate enough to imply effects that did not depend on any unknown scales (like the cosmological constant). The most obvious candidates for similar effects in our models involve things like the quantum behavior of photons orbiting a black hole. But there’s lots of detailed physics to do to actually work any such things out.

In the end, a fundamental model for physics in our setup involves some definite underlying rule. And some of our conclusions and predictions about physics will surely depend on the details of that rule. But one of the continuing surprises in our models is how many implied features of physics are actually generic to a large class of rules. Still, there are things like the masses of elementary particles that at least feel like they must be specific to particular rules. Although—who knows—maybe overall symmetries are determined by the basic structure of the model, maybe the number of generations of fermions is connected to the effective dimensionality of space, etc. These are some of the kinds of things it looks conceivable that we’ll begin to know in the next few years.

Beyond Physics

When I first started developing what people have been calling “Wolfram models”, my primary motivation was to understand fundamental physics. But it was quickly clear that the models were interesting in their own right, independent of their potential connection to physics, and that they might have applications even outside of physics. And I suppose one of the big surprises this year has been just how true that is.

I feel like our models have introduced a whole new paradigm that allows us to think about all kinds of fields in fundamentally new ways, and potentially to solve longstanding foundational problems in them.

The general exploration of the computational universe—that I began more than forty years ago—has brought us phenomena like computational irreducibility and has led to all sorts of important insights. But I feel that with our new models we’ve entered a new phase of understanding the computational universe, in particular seeing the subtle but robust interplay between computational reducibility and computational irreducibility that’s associated with the introduction of computationally bounded observers or measurements.

I hadn’t really known how to fit the successes of physics into the framework of what I’d seen in the computational universe. But now it’s becoming clear. And the result is not only that we understand more about the foundations of physics, but also that we can import the successes of physics into our thinking about the computational universe, and all its various applications.

At a very pragmatic level, cellular automata (my longtime favorite examples in the computational universe) provide minimal models for systems in which arbitrary local rules operate on a fixed array in space and time. Our new models now provide minimal models for systems that have no such definite structure in space and time. Cellular automata are minimal models of “array parallel” computational processes; our new models are minimal models of distributed, asynchronous computational processes.

In something like a cellular automaton—with its very organized structure for space and time—it’s straightforward to see “what leads to what”. But in our new models it can be much more complicated—and to represent the causal relationships between different events we need to construct causal graphs. And for me one consequence of studying our models has been that whenever I’m studying anything I now routinely start asking about causal graphs—and in all sorts of cases this has turned out to be very illuminating.
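
In our models the causal graph can be generated directly; here’s a minimal sketch with the WolframModel function from the Wolfram Function Repository, using the same illustrative rule as above:

(* the graph of causal relationships between updating events *)
ResourceFunction["WolframModel"][
  {{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}},
  {{0, 0}, {0, 0}}, 8, "LayeredCausalGraph"]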

But beyond causal graphs, one feature of our new models is their essentially inevitable multiway character. There isn’t just one “thread of history” for the evolution of the system, there’s a whole multiway graph of them. In the past, there’ve been plenty of probabilistic or nondeterministic models for all sorts of systems. But in a sense I’ve always found them unsatisfactory, because they end up talking about making an arbitrary choice “from outside the system”. A multiway graph doesn’t do that. Instead, it tells its story purely from within the system. But it’s the whole story: “in one gulp” it’s capturing the whole dynamic collection of all possibilities.

And now that the formalism of our models has gotten me used to multiway graphs, I see them everywhere. And all sorts of systems that I thought somehow weren’t well enough defined to be able to study in a systematic way I now realize are amenable to “multiway analysis”.

One might think that a multiway graph that captures all possibilities would inevitably be too complicated to be useful. But this is another key observation from our Physics Project: particularly with the phenomenon of causal invariance, there are generic statements that can be made, without dealing with all the details. And one of the important directions we’ve pursued over the course of this year is to get a better understanding—sometimes using methods from category theory—of the general theory of multiway systems.

But, OK, so what can we apply the formalism of our models to? Lots of things. Some that we’ve at least started to think seriously about are: distributed computing, mathematics and metamathematics, chemistry, biology and economics. And in each case it’s not just a question of having some kind of “add-on” model; it seems like our formalism allows one to start talking about deep, foundational questions in each of these fields.

In distributed computing, I feel like we’re just getting started. For decades I’ve wondered how to think about organizing distributed computing so that we humans can understand it. And now within our formalism, I’ve both understood why that’s hard, and begun to get ideas about how we might do it. A crucial part is getting intuition from physics: thinking about “programming in a reference frame”, causal invariance as a source of eventual consistency, quantum effects as ambiguities of outcome, and so on. But it’s also been important over the past year to study specific systems—like multiway Turing machines and combinators—and be able to see how things work out in these simpler cases.

As an “exercise”, we’ve been looking at using ideas from our formalism to develop a distributed analog of blockchain—in which “intentional events” introduced from outside the system are “knitted together” by large numbers of “autonomous events”, in much the same way as consistent “classical” space arises in our models of physics. (The analog of “forcing consensus” or coming to a definite conclusion is essentially like the process of quantum measurement.)

It’s interesting to try to apply “causal” and “multiway” thinking to practical computation, for example in the Wolfram Language. What is the causal graph of a computation? It’s a kind of dependency trace. And after years of looking for a way to get a good manipulable symbolic representation of program execution this may finally show us how to do it. What about the multiway graph? We’re used to thinking about computations that get done on “data structures”, like lists. But how should we think of a “multiway computation” that can produce a whole bundle of outputs? (In something like logic programming, one starts with a multiway concept, but then typically picks out a single path; what seems really interesting is to see how to systematically “compute at the multiway level”.)
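
One can already get a taste of “computing at the multiway level” with ordinary Wolfram Language primitives: ReplaceList gives every possible single rewrite of an expression, and nesting it builds the multiway graph of a computation. Here’s a sketch for sorting a list by adjacent swaps:

(* all results of applying one adjacent out-of-order swap, anywhere in the list *)
swaps[list_] :=
  ReplaceList[list, {a___, x_, y_, b___} /; x > y :> {a, y, x, b}];

(* the multiway graph of the sorting computation: branching, then merging at the answer *)
NestGraph[swaps, {3, 2, 1}, 4, VertexLabels -> Automatic]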

OK, so what about mathematics? There’s an immediate correspondence between multiway graphs and the networks obtained by applying axioms or laws of inference to generate all possible theorems in a given mathematical theory. But now our study of physics makes a suggestion: what would happen if—like in physics—we take a limit of this process? What is “bulk” or “continuum” metamathematics like?

In the history of human mathematics, there’ve been a few million theorems published—defining in a sense the “human geography” of metamathematical space. But what about the “intrinsic geometry”? Is there a theory of this, perhaps analogous to our theory of physics? A “physicalized metamathematics”? And what does it tell us about the “infinite-time limit” of mathematics, or the general nature of mathematics?

If we try to fully formalize mathematics, we typically end up with a very “non-human” “machine code”. In physics there might be a hundred orders of magnitude between the atoms of space and our typical experience. In present-day formalized mathematics, there might be 4 or 5 orders of magnitude from the “machine code” to typical statements of theorems that humans would deal with.

At the level of the machine code, there’s all sorts of computational irreducibility and undecidability, just like in physics. But somehow at the “human level” there’s enough computational reducibility that one can meaningfully “do mathematics”. I used to think that this was some kind of historical accident. But I now suspect that—just like with physics—it’s a fundamental feature of the involvement of computationally bounded human “observers”. And with the correspondence of formalism, one’s led to ask things like what the analog of relativity—or quantum mechanics—is in “bulk metamathematics”, and, for example, how it might relate to things like “computationally bounded category theory”.

And, yes, this is interesting in terms of understanding the nature of mathematics. But mathematics also has its own deep stack of results and intuition, and in studying mathematics using the same formalism as physics, we also get to use this in our efforts to understand physics.

How could all this be relevant to chemistry? Well, a network of all possible chemical reactions is once again a multiway graph. In chemical synthesis one’s usually interested in just picking out one particular “pathway”. But what if we think “multiway style” about all the possibilities? Branchial space is a map of chemical species. And we now have to understand what kind of laws a “computationally bounded chemical sensor” might “perceive” in it.

Imagine we were trying to “do a computation with molecules”. The “events” in the computation could be thought of as chemical reactions. But now instead of just imagining “getting a single molecular result”, consider using the whole multiway system “as the computation”. It’s basically the same story as distributed computing. And while we don’t yet have a good way to “program” like this, our Physics Project now gives us a definite direction. (Yes, it’s ironic that this kind of molecular-scale computation might work using the same formalism as quantum mechanics—even though the actual processes involved don’t have to be “quantum” in the underlying physics sense.)

When we look at biological systems, it’s always been a bit of a mystery how one should think about the complex collections of chemical processes they involve. In the case of genetics we have the organizing idea of digital information and DNA. But in the general case of systems biology we don’t seem to have overarching principles. And I certainly wonder whether what’s missing is “multiway thinking” and whether using ideas from our Physics Project we might be able to get a more global understanding—like a “general relativity” of systems biology.

It’s worth pointing out that the detailed techniques of hypergraph evolution are probably applicable to biological morphogenesis. Yes, one can do a certain amount with things like continuum reaction-diffusion equations. But in the end biological tissue—like, we now believe, physical space—is made of discrete elements. And particularly when it comes to topology-changing phenomena (like gastrulation) that’s probably pretty important.

Biology hasn’t generally been a field that’s big on formal theories—with the one exception of the theory of natural selection. But beyond specific few-whole-species-dynamics results, it’s been difficult to get global results about natural selection. Might the formalism of our models help? Perhaps we’d be able to start thinking about individual organisms a bit like we think about atoms of space, then potentially derive large-scale “relativity-style” results, conceivably about general features of “species space” that really haven’t been addressed before.

In the long list of potential areas where our models and formalism could be applied, there’s also economics. A bit like in the natural selection case, the potential idea is, in effect, to model every individual event or “transaction” in an economy. The causal graph then gives some kind of generalized supply chain. But what is the effect of all those transactions? The important point is that there’s almost inevitably lots of computational irreducibility. In other words, much as in the Second Law of Thermodynamics, the transactions rapidly become impossible for a computationally bounded agent to “unwind”, yet have robust overall “equilibrium” properties—which in the economic case might represent “meaningful value”. The robustness of the notion of monetary value might then correspond to the robustness with which thermodynamic systems can be characterized as having certain amounts of heat.

But with this view of economics, the question still remains: are there “physics-like” laws to be found? Are there economic analogs of reference frames? (In an economy with geographically local transactions one might even expect to see effects analogous to relativistic time dilation.)

To me, the most remarkable thing is that the formalism we’ve developed for thinking about fundamental physics seems to give us such a rich new framework for discussing so many other kinds of areas—and for pooling the results and intuitions of these areas.

And, yes, we can keep going. We can imagine thinking about machine learning—for example considering the multiway graph of all possible learning processes. We can imagine thinking about linguistics—starting from every elementary “event” of, say, a word being said by one person to another. We can even think about questions in traditional physics—like one of my old favorites, the hard-sphere gas—analyzing them not with correlation functions and partition functions but with causal graphs and multiway graphs.

Towards Ultimate Abstraction

A year ago, as we approached the launch of the Wolfram Physics Project, we felt increasingly confident that we’d found the correct general formalism for the “machine code” of the universe, we’d built intuition by looking at billions of possible specific rules, and we’d discovered that in our models many features of physics are actually quite generic, and independent of specific rules.

But we still assumed that in the end there must be some specific rule for our particular universe. We thought about how we might find it. And then we thought about what would happen if we found it, and how we might imagine answering the question “Why this rule, and not another?”

But then we realized: actually, the universe does not have to be based on just one particular rule; in some sense it can be running all possible rules, and it is merely through our perception that we attribute a specific rule to what we see about the universe.

We already had the concept of a multiway graph, generated by applying all possible update events, and tracing out the different histories to which they lead. In an ordinary multiway graph, the different possible update events occur at different places in the spatial hypergraph. But we imagined generalizing this to a rulial multiway graph, generated by applying not just updates occurring in all possible places, but also updates occurring with all possible rules.

At first one might assume that if one used all possible rules, nothing definite could come out. But the fact that different rules can potentially lead to identical states causes a definite rulial multiway graph to be knitted together—including all possible histories, based on all possible sequences of rules.
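
One can get a small-scale feeling for this knitting-together by running several different rules in a single multiway system (only a rough analog of the full rulial construction): identical strings produced by different rules merge into single nodes:

  (* several rules applied in all possible ways; states reachable by
     different rules merge, knitting the graph together *)
  ResourceFunction["MultiwaySystem"][
    {"A" -> "AB", "A" -> "BA", "B" -> "A"}, {"A"}, 4, "StatesGraph"]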

What could an observer embedded in such a rulial multiway graph perceive? Just as for causal graphs or ordinary multiway graphs, one can imagine defining a reference frame—here a “rulial frame”—that makes the observer perceive the universe as evolving through a series of slices in rulial space, or in effect operating according to certain rules. In other words, the universe follows all possible rules, but an observer in a particular rulial frame describes its operation according to particular rules.

And the critical point is then that this is consistent because the evolution in the rulial multiway graph inevitably shows causal invariance. At first this all might seem quite surprising. But the thing to realize is that the Principle of Computational Equivalence implies that collections of rules will generically show computation universality. And this means that whatever rulial frame one picks—and whatever rules one uses to describe the evolution of the universe—it’ll always be possible to use those rules to emulate any other possible rules.

There is a certain ultimate abstraction and unification in all this. In a sense it says that the only thing one ultimately needs to know about our universe is that it is “computational”—and from there the whole formal structure of our models takes over. It also tells us that there is ultimately only one universe—though different rulial frames may describe it differently.

How should we think about the limiting rulial multiway graph? It turns out that something like it has also appeared in the upper reaches of pure mathematics, in connection with higher category theory. We can think of our basic multiway graphs as related to (weak versions of) ordinary categories. (This is a little different from how categorical quantum mechanics enters our models.) When we add in equivalences between branches in the multiway system we get a 2-category. And if we keep adding higher-and-higher-order equivalences, we get higher and higher categories. In the infinite limit it turns out the structure we get is exactly the rulial multiway graph—so that now we can identify it as an infinity category, or more specifically an infinity groupoid.

Grothendieck’s homotopy hypothesis suggests that there is in a sense inevitable geometry in the infinity groupoid, and it’s ultimately this structure that seems to “trickle down” from the rulial multiway graph to everything else we look at, implying, for example, that there can be meaningful notions of physical and branchial space.

We can think of the limiting multiway graph as a representation of physics and the universe. But the exact same structure can also be thought of as a kind of metamathematical limit of all possible mathematics—in a sense fundamentally tying together the foundations of physics and mathematics.

There are many details and implications to this that we’re just beginning to work out. The ultimate formation of the rulial multiway graph depends on identifying when states or objects can be treated as the same, and merged. In the case of physics, this can be seen as a feature of the observer, and the reference frames they define. In the case of mathematics, it can be seen as a feature of the underlying axiomatic framework used, with the univalence axiom of homotopy type theory being one possible choice.

The whole concept of rulial space raises the question of why we perceive the kind of laws of physics we do, rather than other ones. And the important recent realization is that it seems deeply connected to what we define as consciousness.

I must say that I’ve always been suspicious of attempts to make a scientific framework for consciousness. But what’s recently become clear is that in our approach to physics there’s both a potential way to do it and, in a sense, a fundamental need for it in explaining what we see.

Long ago I realized that as soon as you go beyond humans, the only viable general definition of intelligence is the ability to do sophisticated computation—which the Principle of Computational Equivalence says is quite ubiquitous. One might have thought that consciousness is an “add-on” to intelligence, but actually it seems instead to be a “step down”. Because it seems that the key element of what we consider consciousness is the notion of having a definite “thread of experience” through time—or, in other words, a sequential way to experience the universe.

In our models the universe is doing all sorts of complicated things, and showing all sorts of computational irreducibility. But if we’re going to sample it in the way consciousness does, we’ll inevitably pick out only certain computationally reducible slices. And that’s precisely what the laws of physics we know—embodied in general relativity and quantum mechanics—correspond to. In some sense, therefore, we see physics as we do because we are observing the universe through the sequential thread of experience that we associate with consciousness.

Let me not go deeper into this here, but suffice it to say that from our science we seem to have reached an interesting philosophical conclusion about the way that we effectively “create” our description of the universe as a result of our own sensory and cognitive capabilities. And, yes, that means that “aliens” with different capabilities (or even just different extents in physical or branchial space) could have descriptions of the universe that are utterly incoherent with our own.

But, OK, so what can we say about rulial space? With a particular description of the universe we’re effectively stuck in a particular location or frame in rulial space. But we can imagine “moving” by changing our point of view about how the universe works. We can always make a translation, but that inevitably takes time.

And in the end, just like with light cones in physical space, or entanglement cones in branchial space, there’s a limit to how fast a particular translation distance can be covered, defined by a “translation cone”. And there’s a “maximum translation speed” ρ, analogous to the speed of light c in space or the maximum entanglement speed ζ in branchial space. And in a sense ρ defines the ultimate “processor speed” for the universe.
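
Schematically, with Δx, Δb and Δr being distances traversed in time Δt in physical, branchial and rulial space respectively, the analogy is just:

  Δx ≤ c Δt   (light cones)
  Δb ≤ ζ Δt   (entanglement cones)
  Δr ≤ ρ Δt   (translation cones)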

In defining the speed of light we have to introduce units for length in space. In defining ρ we have to introduce units for the length of descriptions of programs or rules—so ρ could be measured, say, in units of “Wolfram Language tokens per second”. We don’t know the value of ρ, but a very rough estimate might be 10^450 WLT/second. And just like in general relativity and quantum mechanics one can expect that there will be all sorts of effects scaled by ρ that occur in rulial space. (One example might be a “quantum-like uncertainty” that places limits on inductive inference, by not letting one distinguish “theories of the universe” until they’ve “diverged far enough” in rulial space.)

The concept of rulial space is a very general one. It applies to physics. It applies to mathematics. And it also applies to pure computation. In a sense rulial space provides a map of the computational universe. It can be “coordinatized” by representing computations in terms of Turing machines, cellular automata, Wolfram models, or whatever. But in general we can ask about its limiting geometrical and topological structure. And here we see a remarkable convergence with fundamental questions in theoretical computer science.

For example, particular geodesic paths in rulial space correspond to maximally efficient deterministic computations that follow a single rule. Geodesic balls correspond to maximally efficient nondeterministic computations that can follow a sequence of rules. So then something like the P vs. NP question becomes what amounts to a geometrical or topological question about rulial space.

In our Physics Project we set out to find a fundamental theory for physics. But what’s become clear is that in thinking about physics we’re uncovering a formal structure that applies to much more than just physics. We already had the concept of computation in all its generality—with implications like the Principle of Computational Equivalence and computational irreducibility. But what we’ve now uncovered is unification at a different level, not about all computation, but about computation as perceived by computationally bounded observers, and about the kinds of things about which we can expect to make theories as powerful as the ones we know in physics.

For each field what’s key is to identify the right question. What is the analog of space, or time, or quantum measurement, or whatever? But once we know that, we can start to use the machinery our formalism provides. And the result is a remarkable new level of unification and power to apply to science and beyond.

The Process of the Project: New Ways to Do Science

How should one set about finding a fundamental theory of physics? There was no roadmap for the science itself, and no roadmap for how the science should be done. And part of the unfolding story of the Wolfram Physics Project is about its process, and about new ways of doing science.

Part of what has made the Wolfram Physics Project possible is ideas. But part of it is also tools, and in particular the tall tower of technology that is the Wolfram Language. In a sense the whole four decades of history behind the Wolfram Language has led us to this point. The general conception of computational language built to represent everything, including, it now seems, the whole universe. And the extremely broad yet tightly integrated capabilities of the language that make it possible to so fluidly and efficiently pursue each different piece of research that is needed.

For me, the Wolfram Physics Project is an exciting journey that, yes, is going much better than I ever imagined. From the start we were keen to share this journey as widely as possible. We certainly hoped to enlist help. But we also wanted to open things up so that as many people as possible could experience and participate in this unique adventure at the frontiers of science.

And a year later I think I can say that our approach to open science has been a great and accelerating success. An increasing number of talented researchers have become involved in the project, and have been able to make progress with great synergy and effectiveness. And by opening up what we’re doing, we’ve also been able to engage with—and hopefully inspire—a very wide range of people even outside of professional science.

One core part of what’s moving the project forward is our tools and the way we’re using them. The idea of computational language—as the Wolfram Language uniquely embodies—is to have a way to represent things in computational terms, and be able to communicate them like that. And that’s what’s happening all the time in the Wolfram Physics Project. There’s an idea or direction. And it gets expressed in Wolfram Language. And that means it can explicitly and repeatably be understood, run and explored—by anyone.

We’re posting our Wolfram Language working notebooks all the time—altogether 895 of them over the past year. And we’re packaging functions we write into the Wolfram Function Repository (130 of them over the past year)—all with source code, all documented, and all instantly and openly usable in any Wolfram Language system. It’s become a rhythm for our research. First explore in working notebooks, adding explanations where appropriate to make them readable as computational essays. Then organize important functions and submit them to the Function Repository. Then use these functions to take the next steps in the research.
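
In practice the pattern is as simple as pulling a repository function in by name and running it (the particular function and hypergraph here are just representative examples):

  (* fetch a documented function from the Wolfram Function Repository
     and run it immediately in a notebook *)
  ResourceFunction["WolframModelPlot"][{{1, 2, 3}, {3, 4, 5}, {5, 6, 1}}]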

This whole setup means that when people write about their results, there’s immediately runnable computational language code. And in fact, at least in what I’ve personally written, I’ve had the rule that for any picture or result I show (so far 2385 of them) it must be possible to just click it, and immediately get code that will reproduce it. It might sound like a small thing, but this kind of fluid immediacy to being able to reproduce and build on what’s been done has turned out to be tremendously important and powerful.

There are so many details that, in a sense, come as second nature given our long experience with production software development. Being careful and consistent about the design of functions. Knowing when it makes sense to optimize at the cost of having less flexible code. Developing robust standardized visualizations. There are lots of what seem like small things that have turned out to be important. Like having consistent color schemes for all our various kinds of graphs, so when one sees what someone has done, one immediately knows “that’s a causal graph”, “that’s a branchial graph” and so on, without even having to read any explanation.

But in addition to opening up the functions and ongoing notebooks we produce, we’ve also done something more radical: we’ve opened up our process of work, routinely livestreaming our working meetings. (There’ve been 168 hours of them this year; we’ve now also posted 331 hours from the 6 months before the launch of the project.) I’ve personally even gone one step further: I’ve posted “video work logs” of my personal ongoing work (so far, 343 hours of them)—right down to, for example, the writing of this very sentence.

We started doing all this partly as an experiment, and partly following the success we’ve had over the past few years in livestreaming our internal meetings designing the Wolfram Language. But it’s turned out that capturing our Physics Project being done has all sorts of benefits that we never anticipated. You see something in a piece I’ve written. You wonder “Where did that come from?”. Well, now you can drill all the way down, to see just what went into making it, missteps and all.

It’s been great to share our experience of figuring things out. And it’s been great to get all those questions, feedback and suggestions in our livestreams. I don’t think there’s any other place where you can see science being done in real time like this. Of course it helps that it’s so uniquely easy to do serious research livecoding in the Wolfram Language. But, yes, it takes some boldness (or perhaps foolhardiness) to expose one’s ongoing steps—forward or backward—in real time to the world. But I hope it helps people see more about what’s involved in figuring things out, both in general and specifically for our project.

When we launched the project, we put online nearly a thousand pages of material, intended to help people get up to speed with what we’d done so far. And within a couple of months after the launch, we had a 4-week track of our Wolfram Summer School devoted to the Wolfram Physics Project. We had 30 students there (as well as another 4 from our High School Summer Camp)—all of whom did projects based on the Wolfram Physics Project.

And after the Summer School, responding to tremendous demand, we organized two week-long study sessions (with 30 more students), followed in January by a 2-week Winter School (with another 17 students). It’s been great to see so many people coming up to speed on the project. And so far there’ve been a total of 79 publications, “bulletins” and posts that have come out of this—containing far more than, for example, I could possibly have summarized here.

There’s an expanding community of people involved with the Wolfram Physics Project. And to help organize this, we created our Research Affiliate and Junior Research Affiliate programs, now altogether with 49 people from around the world involved.

Something else that’s very important is happening too: steadily increasing engagement from a wide range of areas of physics, mathematics and computer science. In fact, with every passing month it seems like there’s some new research community that’s engaging with the project. Causal set theory. Categorical quantum mechanics. Term rewriting. Numerical relativity. Topos theory. Higher category theory. Graph rewriting. And a host of other communities too.

We can view the achievement of our project as being in a sense to provide a “machine code” for physics. And one of the wonderful things about it is how well it seems to connect with a tremendous range of work that’s been done in mathematical physics—even when it wasn’t yet clear how that work on its own might relate to physical reality. Our project, it seems, provides a kind of Rosetta stone for mathematical physics—a common foundation that can connect, inform and be informed by all sorts of different approaches.

Over the past year there’s been a repeated, rather remarkable experience. For some reason or another we’ll get exposed to some approach or idea. Constructor theory. Causal dynamical triangulation. Ontological bases. Synthetic differential geometry. ER=EPR. And we’ll use our models as a framework for thinking about it. And we’ll realize: “Gosh, now we can understand that!” And we’ll see how it fits in with our models, how we can learn more about our models from it—and how we can use our models and our formalism to bring in new ideas to advance the thing itself.

In some ways our project represents a radical shift from the past century or so of physics. And more often than not, when such intellectual shifts have happened in the history of science, they’ve been accompanied by all kinds of difficulties in connecting with existing communities. But I’m very happy to report that over the past year our project has done remarkably well at connecting with existing communities—no doubt helped by its “Rosetta stone” character. And as we progress, we’re looking forward to an increasing network of collaborations, both within the community that’s already formed and with other communities.

And over the coming year, as we start to more seriously explore the implications of our models and formalism even beyond physics, I’m anticipating still more connections and collaborations.

The Personal Side

It’s hard to believe it’s only been a little over 18 months since we started working in earnest on the Wolfram Physics Project. So much has happened, and we’ve gotten so much further than I ever thought possible. And it feels like a whole new world has opened up. So many new ideas, so many new ways of looking at things.

I’ve been fortunate enough to have already had a long and satisfying career, and it’s a surprising and remarkable thing at this stage to have what seems like a fresh, new start. Of course, in some respects I’ve spent much of my life preparing for what is now the Wolfram Physics Project. But the actuality of it has been so much more exciting and invigorating than anything I imagined. There’ve been so many questions—about all sorts of different things—that I’ve been accumulating and mulling over for decades. And suddenly it seems as if a door I never knew existed has opened, and now it’s possible to go forward on a dizzying array of fronts.

I’ve spent most of my life building a whole tower of things—alternating between science and technology. And in this tower it’s remarkable the extent to which each level has built on what’s come before: tools from technology have made it possible to explore science, and ideas from science have made it possible to create technology. But a year ago I thought the Wolfram Physics Project might finally be the end of the line: a piece of basic science that was really just science, and nothing but science, with no foreseeable implications for technology.

But it turns out I was completely wrong. And in fact of all the pieces of basic science I’ve ever done, the Wolfram Physics Project may be the one which has the greatest short-term implications for technology. We’re not talking about building starships using physics. We’re talking about taking the formalism we’ve developed for physics—and applying it, now informed by physics, in all sorts of very practical settings in distributed computing, modeling, chemistry, economics and beyond.

In the end, one may look back at many of these applications and say “that didn’t really need the Physics Project; we could have just got there directly”. But in my experience, that’s not how intellectual progress works. It’s only by building a tower of tools and ideas that one can see far enough to understand what’s possible. And without that, decades or centuries may go by, with the path forward hiding in what will later seem like plain sight.

A year ago I imagined that in working on the Wolfram Physics Project I’d mostly be doing things that were “obviously physics”. But in actuality the project has led me to pursue all sorts of “distractions”. I’ve studied things like multiway Turing machines, which, yes, are fairly obviously related to questions about quantum mechanics. But I’ve also studied combinators and tag systems (OK, these were prompted by their centenaries). And I spent a while looking at the empirical mathematics of Euclid and beyond.

And, yes, the way I approached all these things was strongly informed by our Physics Project. But what’s surprising is that I feel like doing each of these projects advanced the Physics Project too. The “Euclid” project has started to build a bridge that lets us import the intuition and formalism of metamathematics—informed by the concrete example of Euclid’s Elements. The combinator project deepened my understanding of causal invariance and of the possible structures of things like space. And even the historical scholarship I did on combinators taught me a lot about issues in the foundations of mathematics that have languished for a century but I now realize are important.

In all, the pieces I’ve written over the past year add up to about 750 pages of material (and, yes, that number makes me feel fairly productive). But there’s so much more to do and to write. A few times in my life I’ve had the great pleasure of discovering a new paradigm and being able to start exploring what’s possible within it. And in many ways the Wolfram Physics Project has—yes, after three decades of gestation—been the most sudden of these experiences. It’s been an exciting year. And I’m looking forward to what comes next, and to seeing the new paradigm that’s been created develop both in physics and beyond.

Notes & Thanks

One of the great pleasures of this year has been the energy and enthusiasm of people working on the Wolfram Physics Project. But I’d particularly like to mention Jonathan Gorard, who has achieved an exceptional level of productivity and creativity, and has been a driving force behind many of the advances described here.

