When we launched the Wolfram Physics Project a year ago today, I was fairly certain that—to my great surprise—we’d finally found a path to a truly fundamental theory of physics, and it was beautiful. A year later it’s looking even better. We’ve been steadily understanding more and more about the structure and implications of our models—and they continue to fit beautifully with what we already know about physics, particularly connecting with some of the most elegant existing approaches, strengthening and extending them, and involving the communities that have developed them.

And if fundamental physics wasn’t enough, it’s also become clear that our models and formalism can be applied even beyond physics—suggesting major new approaches to several other fields, as well as allowing ideas and intuition from those fields to be brought to bear on understanding physics.

Needless to say, there is much hard work still to be done. But a year into the process I’m completely certain that we’re “climbing the right mountain”. And the view from where we are so far is already quite spectacular.

We’re still mostly at the stage of exploring the very rich structure of our models and their connections to existing theoretical frameworks. But we’re on a path to being able to make direct experimental predictions, even if it’ll be challenging to find ones accessible to present-day experiments. And quite independent of this, what we’ve done right now is already practical and useful—providing new streamlined methods for computing several important existing kinds of physics results.

The way I see what we’ve achieved so far is that it seems as if we’ve successfully found a structure for the “machine code” of the universe—the lowest-level processes from which all the richness of physics and everything else emerges. It certainly wasn’t obvious that any such “machine code” would exist. But I think we can now be confident that it does, and that in a sense our universe is fundamentally computational all the way down. But even though the foundations are different, the remarkable thing is that what emerges aligns with important mathematical structures we already know, enhancing and generalizing them.

From four decades of exploring the computational universe of possible programs, my most fundamental takeaway has been that even simple programs can produce immensely complex behavior, and that this behavior is usually computationally irreducible, in the sense that it can’t be predicted by anything much less than just running the explicit computation that produced it. And at the level of the machine code our models very much suggest that our universe will be full of such computational irreducibility.

But an important part of the way I now understand our Physics Project is that it’s about what a computationally bounded observer (like us) can see in all this computational irreducibility. And the key point is that within the computational irreducibility there are inevitably slices of computational reducibility. And, remarkably, the three such slices we know correspond exactly to the great theories of existing physics: general relativity, quantum mechanics and statistical mechanics.

And in a sense, over the past year, I’ve increasingly come to view the whole fundamental story of science as being about the interplay between computational irreducibility and computational reducibility. The computational nature of things inevitably leads to computational irreducibility. But there are slices of computational reducibility that inevitably exist on top of this irreducibility that are what make it possible for us—as computationally bounded entities—to identify meaningful scientific laws and to do science.

There’s a part of this that leads quite directly to specific formal development, and for example specific mathematics. But there’s also a part that leads to a fundamentally new way of thinking about things, that for example provides new perspectives on issues like the nature of consciousness, that have in the past seemed largely in the domain of philosophy rather than science.

Spatial hypergraphs. Causal graphs. Multiway graphs. Branchial graphs. A year ago we had the basic structure of our models and we could see how both general relativity and quantum mechanics could arise from them. And it could have been that as we went further—and filled in more details—we’d start seeing issues and inconsistencies. But nothing of the sort has happened. Instead, at every turn more and more seems to fit beautifully together—and more and more of the phenomena we know in physics seem to inevitably emerge as simple and elegant consequences of our models.

It all starts—very abstractly—with collections of elements and relations. And as I’ve got more comfortable with our models, I’ve started referring to those elements by what might almost have been an ancient Greek term: atoms of space. The core concept is then that space as we know it is made up from a very large number of these atoms of space, connected by a network of relations that can be represented by a hypergraph. And in our models there’s in a sense nothing in the universe except space: all the matter and everything else that “exists in space” is just encoded in the details of the hypergraph that corresponds to space.

Time in our models is—at least initially—something fundamentally different from space: it corresponds to the computational process of successively applying rules that transform the structure of the hypergraph. And in a sense the application of these rules represents the fundamental operation of the universe. And a key point is that this will inevitably show the phenomenon of computational irreducibility—making the progress of time an inexorable and irreducible computational process.
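To make this concrete, here’s a minimal sketch (in Python, rather than the project’s actual Wolfram Language code) of what “applying a rule that transforms the hypergraph” can look like. The particular rule—replace a single relation {x, y} by {x, z}, {z, y}, introducing a fresh atom z—is just an illustrative choice, not one of the candidate rules for our universe:

```python
# A hypergraph is a list of hyperedges (tuples of "atoms of space").
# The toy rule {x,y} -> {x,z},{z,y} rewrites one relation, creating a new atom.

def step(edges, next_atom):
    """Apply the rule to the first binary edge found; return the new state
    and the next unused atom name."""
    for i, e in enumerate(edges):
        if len(e) == 2:
            x, y = e
            z = next_atom
            # replace {x,y} by {x,z} and {z,y}
            return edges[:i] + [(x, z), (z, y)] + edges[i + 1:], next_atom + 1
    return edges, next_atom  # no applicable edge: nothing happens

edges, fresh = [(0, 1)], 2   # start from a single relation between atoms 0 and 1
for _ in range(4):
    edges, fresh = step(edges, fresh)

print(len(edges))  # each update event adds one edge: 5 edges after 4 steps
```

Each call to `step` is one “update event”; the sequence of such events—and nothing else—is what the progress of time corresponds to in the models.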

A striking feature of our models is that at the lowest level there’s nothing constant in our universe. At every moment even space is continually being remade by the action of the underlying rules—and indeed it is precisely this action that knits together the whole structure of spacetime. And though it still surprises me that it can be said so directly, it’s possible to identify energy as essentially just the amount of activity in space, with mass in effect being the “inertia” or persistence of this activity.

At the lowest level everything is just atoms of space “doing their thing”. But the crucial result is that—with certain assumptions—there’s large-scale collective behavior that corresponds exactly to general relativity and the observed continuum structure of spacetime. Over the course of the year, the derivation of this result has become progressively more streamlined. And it’s clear it’s all about what a computationally bounded observer will be able to conclude about underlying computationally irreducible processes.

But there’s then an amazing unification here. Because at a formal level the setup is basically the same as for molecular dynamics in something like a gas. Again there’s computational irreducibility in the underlying behavior. And there’s a computationally bounded observer, usually thought of in terms of “coarse graining”. And for that observer—in direct analogy to an observer in spacetime—one then derives the Second Law of Thermodynamics, and the equations of continuum fluid behavior.

But there’s an important feature of both these derivations: they’re somehow generic, in the sense that they don’t depend on underlying details like the precise nature of the molecules in the gas, or the atoms of space. And what this means is that both thermodynamics and relativity are general emergent laws. Regardless of what the precise underlying rules are, they’ll basically always be what one gets in a large-scale limit.

It’s quite remarkable that relativity in a sense formally comes from the same place as thermodynamics. But it’s the genericity of general relativity that’s particularly crucial in thinking about our models. Because it implies that we can make large-scale conclusions about physics without having to know what specific rule is being applied at the level of the underlying hypergraph.

Much as with hypersonic flow in a gas, however, there will be extreme situations in which one will be able to “see beneath” the generic continuum behavior—and tell that there are discrete atoms of space with particular behavior. In other words, one will be able to see corrections to Einstein’s equations—corrections that depend on the fact that space is actually a hypergraph with definite rules, rather than a continuous manifold.

One important feature of our spatial hypergraph is that—unlike our ordinary experience of space—it doesn’t intrinsically have any particular dimension. Dimension is an emergent large-scale feature of the hypergraph—and it can be an integer, or not, and it can, for example, vary with position and time. So one of the unexpected implications of our models is that there can be dimension fluctuations in our universe. And in fact it seems likely that our universe started essentially infinite-dimensional, only gradually “cooling” to become basically three-dimensional. And though we haven’t yet worked it out, we expect there’ll be a “dimension-changing cosmology” that may well have definite predictions for the observed large-scale structure of our universe.
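One can get a feel for how dimension emerges as a large-scale property by measuring the growth of “balls” in a graph: if the number of nodes within graph distance r grows like r^d, the effective dimension is d. Here’s a small Python sketch, using an ordinary 2D grid graph as a stand-in for a hypergraph that limits to two-dimensional space (the estimate approaches 2 as r grows):

```python
import math
from collections import deque

def ball_sizes(neighbors, source, rmax):
    """BFS ball volumes: V[r] = number of nodes within graph distance r."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        if dist[u] == rmax:
            continue
        for v in neighbors(u):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    counts = [0] * (rmax + 1)
    for d in dist.values():
        counts[d] += 1
    for r in range(1, rmax + 1):   # cumulative sum gives ball volumes
        counts[r] += counts[r - 1]
    return counts

# An infinite 2D grid, defined implicitly by its neighbor function.
def grid_neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

V = ball_sizes(grid_neighbors, (0, 0), 20)
# Effective dimension from how V scales when r doubles: V(2r)/V(r) ~ 2^d.
d = math.log(V[20] / V[10]) / math.log(2)
print(round(d, 2))  # 1.93: approaching the grid's limiting dimension of 2
```

For our spatial hypergraphs the same measurement need not give an integer—and its value can vary with position and time, which is exactly what “dimension fluctuations” means.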

The underlying discreteness—and variable dimension—of space in our models has many other implications. Traditional general relativity suggests certain exotic phenomena in spacetime, like event horizons and black holes—but ultimately it’s limited by its reliance on describing spacetime in terms of a continuous manifold. In our models, there are all sorts of possible new exotic phenomena—like change in spacetime topology, space tunnels and dynamic disconnection of the hypergraph.

What happens if one sets up a black hole that spins too rapidly? In our models, a piece of spacetime simply disconnects. And it’s been interesting to see how much more direct our models allow one to be in analyzing the structure of spacetime, even in cases where traditional general relativity gives one a hint of what happens.

Calculus has been a starting point for almost all traditional mathematical physics. But our models in a sense require a fundamental generalization of calculus. We have to go beyond the notion of an integer number of “variables” corresponding to particular dimensions, to construct a kind of “hypercalculus” that can for example generalize differential geometry to fractional dimensional space.

It’s a challenging direction in mathematics, but the concreteness of our models helps greatly in defining and exploring what to do—and in seeing what it means to go “below whole variables” and build everything up from fragmentary discrete connections. And one of the things that’s happened over the past year is that we’ve been steadily recapitulating the history of calculus-like mathematics, progressively defining generalizations of notions like tangent spaces, tensors, parallel transport, fiber bundles, homotopy classes, Lie group actions and so on, that apply to limits of our hypergraphs and to the kind of space to which they correspond.

One of the ironies of practical investigations of traditional general relativity is that even though the theory is set up in terms of continuous manifolds and continuous partial differential equations, actual computations normally involve doing “numerical relativity” that uses discrete approximations suitable for digital computers. But our models are “born digital” so nothing like this has to be done. Of course, the actual number of atoms of space in our real universe is immensely larger than anything we can simulate.

But we’ve recently found that even much more modest hypergraphs are already sufficient to reproduce the same kind of results that are normally found with numerical relativity. And so for example we can directly see in our models things like the ring-down of merging black holes. And what’s more, as a matter of practical computation, our models seem potentially more efficient at generating results than numerical relativity. So that means that even if one isn’t interested in models of fundamental physics and in the “underlying machine code” of the universe, our project is already useful—in delivering a new and promising method for doing practical computations in general relativity.

And, by the way, the method isn’t limited to general relativity: it looks as if it can be applied to other kinds of systems based on PDEs—like stress analysis and biological growth. Normally one thinks of taking some region of space, and approximating it by a discrete mesh, that one might adapt and subdivide. But with our method, the hypergraphs—with their variable dimensions—provide a richer way to approximate space, in which subdivision is done “automatically” through the actual dynamics of the hypergraph evolution.

I already consider it very impressive and significant that our models can start from simple abstract rules and end up with the structure of space and time as we know them in some sense inevitably emerging. But what I consider yet more impressive and significant is that these very same models also inevitably yield quantum mechanics.

It’s often been said (for example by my late friend Richard Feynman) that “nobody really understands quantum mechanics”. But I’m excited to be able to say that—particularly after this past year—I think that we are finally beginning to actually truly understand quantum mechanics. Some aspects of it are at first somewhat mind-bending, but given our new understanding we’re in a position to develop more and more accessible ways of thinking about it. And with our new understanding comes a formalism that can actually be applied in many other places—and from these applications we can expect that in time what now seem like bizarre features of quantum mechanics will eventually seem much more familiar.

In ordinary classical physics, the typical setup is to imagine that definite things happen, and that in a sense every system follows a definite thread of behavior through time. But the key idea of quantum mechanics is to imagine that many threads of possible behavior are followed, with a definite outcome being found only through a measurement made by an observer.

And in our models this picture is not just conceivable, but inevitable. The rules that operate on our underlying spatial hypergraph specify that a particular configuration of elements and relations will be transformed into some other one. But typically there will be many different places in the spatial hypergraph where any such transformation can be applied. And each possible sequence of such updating events defines a particular possible “thread of history” for the system.

A key idea of our models is to consider all those possible threads of history—and to represent these in a single object that we call a multiway graph. In the most straightforward way of setting this up, each node in the multiway graph is a complete state of the universe, joined to whatever states are reached from it by all possible updating events that can occur in it.

A particular possible history for the universe then corresponds to a particular path through the multiway graph. And the crucial point is that there is branching—and merging—in the multiway graph leading in general to a complicated interweaving of possible threads of history.

But now imagine slicing across the multiway graph—in a sense sampling many threads of history at some particular stage in their evolution. If we were to look at these threads of history separately there might not seem to be any relation between them. But the way they’re embedded in the multiway graph inevitably defines relations between them. And for example we can imagine just saying that any two states in a particular slice of the multiway graph are related if they have a common ancestor, and are each just a result of a different event occurring in that ancestor state. And by connecting such states we form what we call a branchial graph—a graph that captures the relations between multiway branches.
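The construction is easy to sketch for a toy multiway system on strings—string rewriting being a standard stand-in for hypergraph rewriting. (The particular rules below are just an illustrative choice.) Applying the rules at every possible position generates the branches; states in one slice that share an immediate common ancestor get joined in the branchial graph:

```python
from itertools import combinations

# Toy multiway rules: apply either substitution at any position in the string.
RULES = [("A", "BBB"), ("B", "A")]

def successors(s):
    """All states reachable from s by a single update event."""
    out = set()
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

# Two successive slices of the multiway graph, starting from "A".
gen1 = successors("A")                                  # {"BBB"}
gen2 = set().union(*(successors(s) for s in gen1))      # three branches

# Branchial graph on gen2: connect states with a common immediate ancestor.
branchial = {frozenset(pair)
             for a in gen1
             for pair in combinations(sorted(successors(a)), 2)}
print(sorted(gen2), len(branchial))
```

Here all three states in the second slice descend from the same ancestor, so the branchial graph on that slice is a complete graph with 3 edges—in effect, these three threads of history are all “branchially close”.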

But just as we imagine that our spatial hypergraphs limit to something like ordinary continuous physical space, so also we can imagine that our branchial graphs limit to something we can call branchial space. And in our models branchial space corresponds to a space of quantum states, with the branchial graph in effect providing a map of the entanglements between those states.

In ordinary physical space we know that we can define coordinates that label different positions. And one of the things we’re understanding with progressively more clarity is also how to set up coordinatizations of branchial space—so that instead of just talking individually about “points in branchial space” we can talk more systematically about what happens “as a function of position” in branchial space.

But what is the interpretation of “position” in branchial space? It turns out that it is essentially the phase of a quantum amplitude. In the traditional formalism of quantum mechanics, every different state has a certain complex number associated with it that is its quantum amplitude. In our models, that complex number should be thought of in two parts. Its magnitude is associated with a combinatorial counting of possible paths in the multiway graph. But its phase is “position in branchial space”.

Once one has a notion of position, one is led to talk about motion. And in classical mechanics and general relativity a key concept is that things in physical space move by following shortest paths (“geodesics”) between different positions. When space is flat these paths are ordinary straight lines, but when there is curvature in space—corresponding in general relativity to the presence of gravity—the paths are deflected. But what the Einstein equations then say is that curvature in space is associated with the presence of energy-momentum. And in our models, this is exactly what happens: energy-momentum is associated with the presence of update events in the spatial hypergraph, and these lead to curvature and a deflection of geodesics.

So what about motion in branchial space? Here we are interested in how “bundles of nearby histories” progress through time in the multiway graph. And it turns out that once again we are dealing with geodesics that are deflected by the presence of update events that we can interpret as energy-momentum.

But now this deflection is not in physical space but in branchial space. The fundamental underlying mathematical structure is the same in both cases. But the interpretation in terms of traditional physics is different. And in what to me is a singularly beautiful result of our models it turns out that what gives the Einstein equations in physical space gives the Feynman path integral in branchial space. Or in other words, quantum mechanics is the same as general relativity, except in branchial space rather than physical space.

But, OK, so how do we assign positions in branchial space? It’s a mathematically complicated thing to do. Nearly a year ago we found a kind of trick way to do it for a standard simple quantum setup: the double-slit experiment. But over the course of the year, we’ve developed a much more systematic approach based on category theory and categorical quantum mechanics.

In its usual applications in mathematics, category theory talks about things like the patterns of mappings (morphisms) between definite named kinds of objects. But in our models what we want is just the “bulk structure” of category theory, and the general idea of patterns of connections between arbitrary unnamed objects. It’s very much like what we do in setting up our spatial hypergraph. There are symbolic expressions—like in the Wolfram Language—that define structures associated with named kinds of things, and on which transformations can be applied. But we can also consider “bulk symbolic expressions” that don’t in effect “name every element of space”, and where we just consider their overall structure.

It’s an abstract and elaborate mathematical story. But the key point is that in the end our multiway formalism can be shown to correspond to the formalism that has been developed for categorical quantum mechanics—which in turn is known to be equivalent to the standard formalism of quantum mechanics.

So what this means is that we can take a description of a quantum system—say a quantum circuit—and in effect “compile” it into an equivalent multiway system. One thing is that we can think of this as a “proof by compilation”: we know our models reproduce standard quantum mechanics, because standard quantum mechanics can in effect just be systematically compiled into our models.

But in practice there’s something more: by really getting at the essence of quantum mechanics, our models can provide more efficient ways to do actual computations in quantum mechanics. And for example we’ve got recent results on using automated theorem proving methods within our models to more efficiently optimize practical quantum circuits. Much as in the case of general relativity, it seems that by “going underneath” the standard formalism of physics, we’re able to come up with more efficient ways to do computations, even for standard physics.

And what’s more, the formalism we have potentially applies to things other than physics. I’ll talk more about this later. But here let me mention a simple example that I’ve tried to use to build intuition about quantum mechanics. If you have something like tic-tac-toe, you can think of all possible games that can be played as paths through a multiway graph in which the nodes are possible configurations of the tic-tac-toe board. Much like in the case of quantum mechanics, one can define a branchial graph—and then one can start thinking about the analogs of all kinds of “quantum” effects, and how there are just a few final “classical” outcomes for the game.

Most practical computations in quantum mechanics are done at the level of quantum amplitudes—which in our setup corresponds essentially to working out the evolution of densities in branchial space. But in a sense this just tells us that there are lots of different threads of history that a particular system could follow. So how is it then that we come to perceive definite things as happening in the world?

The traditional formalism of quantum mechanics essentially by fiat introduces the so-called Born rule which in effect says how densities in branchial space can be converted to probabilities of different specific outcomes. But in our models we can “go inside” this “process of measurement”.

The key idea—which has become clearer over the course of this year—is at first a bit mind-bending. Remember that our models are supposed to be models for everything in the universe, including us as observers of the universe. In thinking about space and time we might at first imagine that we could just independently trace the individual time evolution of, for example, different atoms of space. But if we’re inside the system no such “absolute tracing” is possible; instead all we can ever perceive is the graph of causal relationships of different events that occur. In a sense we’re only “plugged into” the universe through the causal effects that the universe has on us.

OK, so what about the quantum case? We want to tell what’s going on in the multiway graph of all possible histories. But we’re part of that graph, with many possible histories ourselves. So in a sense what we have to think about is how a “branching brain” perceives a “branching universe”. People have often imagined that somehow having a “conscious observer” is crucial to “making measurements” in quantum mechanics. And I think we can now understand how that works. It seems as if the essence of being a “conscious observer” is precisely having a “single thread of experience”—or in other words conflating the different histories in different branches.

Of course, it is not at all obvious that doing this will be consistent. But in our models there is the notion of causal invariance. In the end this doesn’t have to be an intrinsic feature of specific low-level rules one attributes to the universe; as I’ll talk about a bit later, it seems to be an inevitable emergent feature of the structure of what we call rulial space. But what’s important about causal invariance is that it implies that different possible threads of history must in effect in the end always have the same causal structure—and the same observable causal graph that describes what happens in the universe.

It’s causal invariance that makes different reference frames in physical space (corresponding, for example, to different states of motion) work the same, and that leads to relativistic invariance. And it’s also causal invariance (or at least eventual causal invariance) that makes the conflation of quantum histories be consistent—and makes there be a meaningful notion of objective reality in quantum mechanics, shared by different observers.
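Causal invariance is closely related to (though not identical with) the notion of confluence in term rewriting: whenever histories branch, they can always eventually reconverge. Here’s a minimal sketch for a toy string rewriting system—the “sorting” rule "BA" → "AB"—where every possible order of update events reaches the same final state:

```python
# The sorting rule "BA" -> "AB" can be applied at many positions, so histories
# branch; but all branches merge again, reaching the same sorted final state.

def successors(s):
    """All states reachable by applying "BA" -> "AB" at one position."""
    return {s[:i] + "AB" + s[i + 2:]
            for i in range(len(s) - 1) if s[i:i + 2] == "BA"}

def final_states(s):
    """Follow every thread of history to termination; collect the endpoints."""
    nxt = successors(s)
    if not nxt:
        return {s}
    return set().union(*(final_states(t) for t in nxt))

print(final_states("BABA"))  # {'AABB'}: every branch reconverges
```

A rule without this property would give different observers genuinely different outcomes; with it, different choices of “which update to do first” are just different paths to the same causal structure.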

There’s more to do in working out the detailed mechanics of how threads of history can be conflated. It can be thought of as closely related to the addition of “completion lemmas” in automated theorem proving. Some aspects of it can be thought of as a “convention”—analogous to a choice of reference frame. But the structure of the model implies certain important “physical constraints”.

We’ve often been asked: “What does all this mean for quantum computing?” The basic idea of quantum computing—captured in a minimal form by something like a multiway Turing machine—is to do different computations in parallel along different possible threads of history. But the key issue (that I’ve actually wondered about since the early 1980s) is then how to corral those threads of history together to figure out a definite answer for the computation. And our models give us ways to look “inside” that process, and see what’s involved, and how much time it should take. We’re still not sure about the answer, but the preliminary indication is that at least at a formal level, quantum computers aren’t going to come out ahead. (In practice, of course, investigating physical processes other than traditional semiconductor electronics will surely lead to faster, perhaps dramatically faster, computers, even if they’re not “officially quantum”.)

One of the surprises to me this year has been just how far we can get in exploring quantum mechanics without ever having to talk about actual particles like electrons or photons. Actual quantum experiments usually involve particles that are somehow localized to particular positions in space. But it seems as if the essentials of quantum mechanics can actually be captured without depending on particles, or space.

What are particles in our models? Like everything else in the universe, they can be thought of as features of space. The general picture is that in the spatial hypergraph there are continual updates going on, but most of them are basically just concerned with “maintaining the structure of space”. But within that structure, we imagine that there can be localized pieces that have a certain stability that allows them to “move largely unchanged through space” (even as “space itself” is continually getting remade). And these correspond to particles.

Analogous to things like vortices in fluids, or black holes in spacetime, we can view particles in our models as some kind of “topological obstructions” that prevent features of the hypergraph from “readily unraveling”. We’ve made some progress this year in understanding what these topological obstructions might be like, and how their structure might be related to things like the quantization of particle spin, and in general the existence of discrete quantum numbers.

It’s an interesting thing to have both “external space” and “internal quantum numbers” encoded together in the structure of the spatial hypergraph. But we’ve been making progress at seeing how to tease apart different features of things like homotopy and geometry in the limit of large hypergraphs, and how to understand the relations between things like foliations and fibrations in the multiway graph describing hypergraph evolution.

We haven’t “found the electron” yet, but we’re definitely getting closer. And one of the things we’ve started to identify is how a fiber bundle structure can emerge in the evolution of the hypergraph—and how local gauge invariance can arise. In a discrete hypergraph it’s not immediately obvious even how something like limiting rotational symmetry would work. We have a pretty good idea how hypergraphs can limit on a large scale to continuous “spatial” manifolds. And it’s now becoming clearer how things like the correspondences between collections of geodesics from a single point can limit to things like continuous symmetry groups.

What’s very nice about all of this is how generic it’s turning out to be. It doesn’t depend on the specifics of the underlying rules. Yes, it’s difficult to untangle, and to set up the appropriate mathematics. But once one’s done that, the results are very robust.

But how far will that go? What will be generic, and what not? Spatial isotropy—and the corresponding spherical symmetry—will no doubt be generic. But what about local gauge symmetry? The SU(3)×SU(2)×U(1) that appears in the Standard Model of particle physics seems on the face of it quite arbitrary. But it would be very satisfying if we were to find that our models inevitably imply a gauge group that is, say, a subgroup of E(8).

We haven’t finished the job yet, but we’ve started understanding features of particle physics like CPT invariance (P and T are space and time inversion, and we suspect that the charge conjugation operation C is “branchial inversion”). Another promising possibility relates to the distinction between fermions and bosons. We’re not sure yet, but it seems as if Fermi–Dirac statistics may be associated with multiway graphs where we see only non-merging branches, while Bose–Einstein statistics may be associated with ones where we see all branches merging. Spinors may then turn out to be associated simply with directed rather than undirected spatial hypergraphs.

It’s not yet clear how much we’re going to have to understand particles in order to see things like the spin-statistics connection, or whether—like in basic quantum mechanics—we’re going to be able to largely “factor out” the “spatial details” of actual particles. And as we begin to think about quantum field theory, it’s again looking as if there’ll be a lot that can be said in the “bulk” case, without having to get specific about particles. And just as we’ve been able to do for spacetime and general relativity, we’re hoping it’ll be possible to do computations in quantum field theory directly from our models, providing, for example, an alternative to things like lattice gauge theory (presumably with a more realistic treatment of time).

When we mix spatial hypergraphs with multiway graphs we inevitably end up with pretty complex structures—and ones that at least in the first instance tend to be full of redundancy. In the most obvious “global” multiway graph, each multiway graph node is in effect a complete state of the universe, and one’s always (at least conceptually) “copying” every part of this state (i.e. every spatial hypergraph node) at every update, even though only a tiny part of the state will actually be affected by the update.

So one thing we’ve been working on this year is defining more local versions of multiway systems. One version of this is based on what I call “multispace”, in which one effectively “starts from space”, then lets parts of it “bow out” where there are differences between different multiway branches. But a more scalable approach is to make a multiway graph not from whole states, but instead from a mixture of update events and individual “tokens” that knit together to form states.

There’s a definite tradeoff, though. One can set up a “token-event graph” that pretty much completely avoids redundancy. But the cost is that it can be very difficult to reassemble complete states. The full problem of reassembly no doubt runs into the computational irreducibility of the underlying evolution. But presumably there’s some limited form of reassembly that captures actual physical measurements, and that can be done by computationally bounded observers.
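To make the contrast concrete, here is a minimal Python sketch of the two representations for a toy string-rewriting rule (the names `global_multiway` and `token_event_graph` are hypothetical, purely for this illustration). In the global version every node is a complete state, so each update conceptually copies the whole state; in the token-event version, events consume and produce individual tokens, and unchanged tokens are never copied:

```python
from itertools import count

# Toy string-rewriting rule: replace one occurrence of "A" with "AB".
def successors(state):
    """All states reachable by a single update."""
    return {state[:i] + "AB" + state[i + 1:]
            for i, ch in enumerate(state) if ch == "A"}

# (a) Global multiway graph: every node is a *complete* state string.
def global_multiway(initial, steps):
    states, edges, frontier = {initial}, set(), {initial}
    for _ in range(steps):
        nxt = set()
        for s in frontier:
            for t in successors(s):
                edges.add((s, t))
                nxt.add(t)
        states |= nxt
        frontier = nxt
    return states, edges

# (b) Token-event graph: nodes are individual tokens and update events;
# an event consumes the token it rewrites and produces new tokens,
# so unchanged tokens are shared rather than copied.
def token_event_graph(initial, steps):
    ids = count()
    tokens = {next(ids): ch for ch in initial}   # token id -> symbol
    events = []                                   # (consumed ids, produced ids)
    frontier = list(tokens)
    for _ in range(steps):
        new_frontier = []
        for tid in frontier:
            if tokens[tid] == "A":
                a, b = next(ids), next(ids)
                tokens[a], tokens[b] = "A", "B"
                events.append(((tid,), (a, b)))
                new_frontier.append(a)
        frontier = new_frontier
    return tokens, events
```

Note that this token version simply follows each token independently; in real token-event graphs, events acting on disjoint sets of tokens are just left unordered, which is exactly how the interleaving redundancy of the global graph is avoided.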

In assessing a scientific theory the core question to ask is whether you get out more than you put in. It’s a bad sign if you carefully set up some very detailed model, and it still can’t tell you much. It’s a good sign if you just set up a simple model, and it can tell you lots of things. Well, by this measure, our models are the most spectacular I have ever seen. A year ago, it was already clear that the models had a rich set of implications. But over the course of this year, it feels as if more and more implications have been gushing out.

And the amazing thing is that they all seem to align with what we know from physics. There’s been no tweaking involved. Yes, it’s often challenging to work out what the models imply. But when we do, it always seems to agree with physics. And that’s what makes me now so confident that our models really do actually represent a correct fundamental theory of physics.

It’s been very interesting to see the methodology of “proof by compilation”. Do our models correctly reproduce general relativity? We can “compile” questions in general relativity into our models—then effectively run at the level of our “machine code”, and generate results. And what we’ve found is that, yes, compiling into our models works, giving the same results as we would get in the traditional theory, though, as it happens, potentially more efficiently.

We’ve found the same thing for quantum mechanics. And maybe we’ll find the same thing also for quantum field theory (where the traditional computations are much harder).

We’ve also been looking at specific effects and phenomena in existing physics—and we’re having excellent success not only in reproducing them in our models (and finding ways to calculate them) but also in (often for the first time) fundamentally understanding them. But what about new effects and phenomena that aren’t seen or expected in existing physics? Especially surprising ones?

It’s already very significant when a theory can efficiently explain things that are already known. But it’s a wonderful “magic trick” if a theory can say “This is what you’ll see”, and then that’s what’s seen in some actual experiment. Needless to say, it can be very difficult to figure out detailed predictions from a theory (and historically it’s often taken decades or even centuries). And when you’re dealing with something that’s never been seen before, it’s often difficult to know if you’ve included everything you need to get the right answer, both in working out theoretical predictions, and in making experimental measurements.

But one of the interesting things about our models is how structurally different they are from existing physics. And even before we manage to make detailed quantitative predictions, the very structure of our models implies the possibility of a variety of unexpected and often bizarre phenomena.

One class of such phenomena relates to the fact that in our models the dimension of space is dynamic, and does not just have a fixed integer value. Our expectation is that in the very early universe, the dimension of space was effectively infinite, gradually “cooling” to approximately 3. And in this setup, there should have been “dimension fluctuations”, which could perhaps have left a recognizable imprint on the cosmic microwave background, or other large-scale features of the universe.

It’s also possible that there could be dimension fluctuations still in our universe today, either as relics from the early universe, or as the result of gravitational processes. And if photons propagate through such dimension fluctuations, we can expect strange optical effects, though the details are still to be worked out. (One can also imagine things like pulsar timing anomalies, or effects on gravitational waves—or just straight local deviations from the inverse square law. Conceivably quantum field theoretic phenomena like anomalous magnetic moments of leptons could be sensitive dimension probes—though on small scales it’s difficult to distinguish dimension change from curvature. Or maybe there would be anomalies or magnetic monopoles made possible by noninteger dimensionality.)

A core concept of our models is that space (and time) are fundamentally discrete. So how might we see signs of this discreteness? There’s really only one fundamental unknown free parameter in our models (at least at a generic level), and there are many seemingly very different experiments that could determine it. But without having the value of this parameter, we don’t ultimately know the scale of discreteness in our models.

We have a (somewhat unreliable) estimate, however, that the elementary length might be around 10^{-90} meters (and the elementary time around 10^{-100} seconds). But these are nearly 70 orders of magnitude smaller than anything directly probed by present-day experiments.

So can we imagine any way to detect discreteness on such scales? Conceivably there could be effects left over from a time when the whole universe was very small. In the current universe there could be a signature of momentum discreteness in “maximum boosts” for sufficiently light particles. Or maybe there could be “shot noise” in the propagation of particles. But the best hope for detecting discreteness of spacetime seems to be in connection with large gravitational fields.

Eventually our models must imply corrections to Einstein’s equations. But at least in the most obvious estimates these would only become significant when the scale of curvature is comparable to the elementary length. It’s conceivable, though, that there could be situations with, say, a logarithmic signature of discreteness, allowing a more effective “gravitational microscope” to be constructed.

In current studies of general relativity, the potentially most accessible “extreme situation” is a spinning black hole close to critical angular momentum. And in our models, we already have direct simulations of this. And what we see is that as we approach criticality there starts to be a region of space that’s knitted into the rest of space by fewer and fewer updating events. And conceivably when this happens there would be “shot noise”, say visible in gravitational waves.

There are other effects too. In a kind of spacetime analog of vacuum polarization, the discreteness of spacetime should lead to a “black hole wind” of outgoing momentum from an event horizon—though the effect is probably only significant for elementary-length-scale black holes. (Such effects might lead to energy loss from black holes through a different “mode of spacetime deformation” than ordinary gravitational radiation.) Another effect of having a discrete structure to space is that information transmission rates are only “statistically” limited to the speed of light, and so fluctuations are conceivable, though again most likely only on elementary-length-type scales.

In general the discreteness of spacetime leads to all sorts of exotic structures and singularities in spacetime not present in ordinary general relativity. Notable potential features include dynamic topology change, “space tunnels”, “dimension anomalies” and spatial disconnection.

We imagine that in our models particles are some kind of topological obstruction in the spatial hypergraph. And perhaps we’ll find quite generic results for the “spectrum” of such obstructions. But it’s also quite possible that there will be “topologically stable” structures that aren’t just like point particles, but are something more exotic. By the way, in computing things like the cosmological constant—or features of dark energy—we need to compare the “total visible particle content” with the total activity in the spatial hypergraph, and there may be generic results to be had about this.

One feature of our models is that they imply that things like electrons are not intrinsically of zero size—but in fact are potentially quite large compared to the elementary length. Their actual size is far out of range of any anticipated experiments, but the fact that they involve so many elements in the underlying spatial hypergraph suggests that there might be particles—that I’ve called oligons—that involve many fewer, and that might have measurable cosmological or astrophysical effects, or even be directly detectable as some kind of very-low-mass dark matter.

In thinking about particles, our models also make one think about some potentially highly exotic possibilities. For example, perhaps not every photon in the universe with given energy-momentum and polarization is actually identical. Maybe they have the same “overall topological structure”, but different detailed configurations of (say) the multiway causal graph. And maybe such differences would have detectable effects on sufficiently large coherent collections of photons. (It may be more plausible, however, that particles act a bit like tiny black holes, with their “internal state” not evident outside.)

When it comes to quantum mechanics, our models again have some generic predictions—the most obvious of which is the existence of a maximum entanglement speed ζ, which is the analog of the speed of light, but in branchial space. In our models, the scale of ζ is directly connected to the scale of the elementary length, so measuring one would determine the other—and with our (rather unreliable) estimate for the elementary length, ζ might be around 10^{5} solar masses per second.

There are a host of “relativity-analog” effects associated with ζ, an example being the quantum Zeno effect, which is effectively a time dilation associated with rapidly repeated measurement. And conceivably there is some kind of atomic-scale (or gravitational-wave-detector-deformation-scale) “measurement from the environment” that could be sensitive to this—perhaps associated with what might be considered “noise” for a quantum computer. (By the way, ζ potentially also defines limitations on the effectiveness of quantum computing, but it’s not clear how one would disentangle “engineering issues”.)

Then there are potential interactions between quantum mechanics and the structure of spacetime—perhaps for example effects of features of spacetime on quantum coherence. But probably the most dramatic effects will be associated with things like black holes, where for example the maximum entanglement speed should represent an additional limitation on black hole formation—that with our estimate for ζ might actually be observable in the near term.

Historically, general relativity was fortunate enough to imply effects that did not depend on any unknown scales (like the cosmological constant). The most obvious candidates for similar effects in our models involve things like the quantum behavior of photons orbiting a black hole. But there’s lots of detailed physics to do to actually work any such things out.

In the end, a fundamental model for physics in our setup involves some definite underlying rule. And some of our conclusions and predictions about physics will surely depend on the details of that rule. But one of the continuing surprises in our models is how many implied features of physics are actually generic to a large class of rules. Still, there are things like the masses of elementary particles that at least feel like they must be specific to particular rules. Although—who knows—maybe overall symmetries are determined by the basic structure of the model, maybe the number of generations of fermions is connected to the effective dimensionality of space, etc. These are the kinds of things it seems conceivable we’ll begin to know in the next few years.

When I first started developing what people have been calling “Wolfram models”, my primary motivation was to understand fundamental physics. But it was quickly clear that the models were interesting in their own right, independent of their potential connection to physics, and that they might have applications even outside of physics. And I suppose one of the big surprises this year has been just how true that is.

I feel like our models have introduced a whole new paradigm, that allows us to think about all kinds of fields in fundamentally new ways, and potentially solve longstanding foundational problems in them.

The general exploration of the computational universe—that I began more than forty years ago—has brought us phenomena like computational irreducibility and has led to all sorts of important insights. But I feel that with our new models we’ve entered a new phase of understanding the computational universe, in particular seeing the subtle but robust interplay between computational reducibility and computational irreducibility that’s associated with the introduction of computationally bounded observers or measurements.

I hadn’t really known how to fit the successes of physics into the framework of what I’d seen in the computational universe. But now it’s becoming clear. And the result is not only that we understand more about the foundations of physics, but also that we can import the successes of physics into our thinking about the computational universe, and all its various applications.

At a very pragmatic level, cellular automata (my longtime favorite examples in the computational universe) provide minimal models for systems in which arbitrary local rules operate on a fixed array in space and time. Our new models now provide minimal models for systems that have no such definite structure in space and time. Cellular automata are minimal models of “array parallel” computational processes; our new models are minimal models of distributed, asynchronous computational processes.

In something like a cellular automaton—with its very organized structure for space and time—it’s straightforward to see “what leads to what”. But in our new models it can be much more complicated—and to represent the causal relationships between different events we need to construct causal graphs. And for me one consequence of studying our models has been that whenever I’m studying anything I now routinely start asking about causal graphs—and in all sorts of cases this has turned out to be very illuminating.

But beyond causal graphs, one feature of our new models is their essentially inevitable multiway character. There isn’t just one “thread of history” for the evolution of the system, there’s a whole multiway graph of them. In the past, there’ve been plenty of probabilistic or nondeterministic models for all sorts of systems. But in a sense I’ve always found them unsatisfactory, because they end up talking about making an arbitrary choice “from outside the system”. A multiway graph doesn’t do that. Instead, it tells the story purely from within the system. But it’s the whole story: “in one gulp” it’s capturing the whole dynamic collection of all possibilities.

And now that the formalism of our models has gotten me used to multiway graphs, I see them everywhere. And all sorts of systems that I thought somehow weren’t well enough defined to be able to study in a systematic way I now realize are amenable to “multiway analysis”.

One might think that a multiway graph that captures all possibilities would inevitably be too complicated to be useful. But this is another key observation from our Physics Project: particularly with the phenomenon of causal invariance, there are generic statements that can be made, without dealing with all the details. And one of the important directions we’ve pursued over the course of this year is to get a better understanding—sometimes using methods from category theory—of the general theory of multiway systems.
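The simplest concrete analog of causal invariance is confluence in a rewriting system. As a toy illustration (with function names invented just for this sketch), here is a Python multiway system for the sort-like rule "BA" → "AB": different update orders give different branches, but every branch terminates in the same state:

```python
def apply_all(state, lhs="BA", rhs="AB"):
    """All single-rewrite successors of a string under lhs -> rhs."""
    return {state[:i] + rhs + state[i + len(lhs):]
            for i in range(len(state) - len(lhs) + 1)
            if state[i:i + len(lhs)] == lhs}

def multiway_final_states(initial):
    """Follow every branch of the multiway graph down to its fixed points."""
    seen, finals, stack = set(), set(), [initial]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        successors = apply_all(s)
        if not successors:
            finals.add(s)            # no rewrite applies: a terminal state
        stack.extend(successors)
    return finals
```

For `multiway_final_states("BABA")` the only terminal state is `"AABB"`: the branches all eventually merge, which is exactly the kind of generic statement that causal invariance lets one make without examining every branch in detail.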

But, OK, so what can we apply the formalism of our models to? Lots of things. Some that we’ve at least started to think seriously about are: distributed computing, mathematics and metamathematics, chemistry, biology and economics. And in each case it’s not just a question of having some kind of “add-on” model; it seems like our formalism allows one to start talking about deep, foundational questions in each of these fields.

In distributed computing, I feel like we’re just getting started. For decades I’ve wondered how to think about organizing distributed computing so that we humans can understand it. And now within our formalism, I’ve both understood why that’s hard, and begun to get ideas about how we might do it. A crucial part is getting intuition from physics: thinking about “programming in a reference frame”, causal invariance as a source of eventual consistency, quantum effects as ambiguities of outcome, and so on. But it’s also been important over the past year to study specific systems—like multiway Turing machines and combinators—and be able to see how things work out in these simpler cases.

As an “exercise”, we’ve been looking at using ideas from our formalism to develop a distributed analog of blockchain—in which “intentional events” introduced from outside the system are “knitted together” by large numbers of “autonomous events”, in much the same way as consistent “classical” space arises in our models of physics. (The analog of “forcing consensus” or coming to a definite conclusion is essentially like the process of quantum measurement.)

It’s interesting to try to apply “causal” and “multiway” thinking to practical computation, for example in the Wolfram Language. What is the causal graph of a computation? It’s a kind of dependency trace. And after years of looking for a way to get a good manipulable symbolic representation of program execution, this may finally show us how to do it. What about the multiway graph? We’re used to thinking about computations that get done on “data structures”, like lists. But how should we think of a “multiway computation” that can produce a whole bundle of outputs? (In something like logic programming, one starts with a multiway concept, but then typically picks out a single path; what seems really interesting is to see how to systematically “compute at the multiway level”.)
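As a minimal illustration of “causal graph as dependency trace” (in Python rather than Wolfram Language, and with a hypothetical `Traced` wrapper invented for this sketch), one can wrap values so that every operation logs which earlier events it consumed:

```python
class Traced:
    """A value that remembers the update events that produced it."""
    _events = []                     # global log: (event id, parent event ids)

    def __init__(self, value, parents=()):
        self.value = value
        self.event = len(Traced._events)
        Traced._events.append((self.event, parents))

    def __add__(self, other):
        return Traced(self.value + other.value, (self.event, other.event))

    def __mul__(self, other):
        return Traced(self.value * other.value, (self.event, other.event))

# Trace the computation (a + b) * a and read off its causal graph.
a, b = Traced(2), Traced(3)
c = a + b                            # event 2, depends on events 0 and 1
d = c * a                            # event 3, depends on events 2 and 0
causal_edges = [(p, e) for e, parents in Traced._events for p in parents]
```

Here `causal_edges` is the causal graph of `(a + b) * a`: each edge records that one event’s output was an input to a later event, giving a manipulable representation of the program’s execution.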

OK, so what about mathematics? There’s an immediate correspondence between multiway graphs and the networks obtained by applying axioms or laws of inference to generate all possible theorems in a given mathematical theory. But now our study of physics makes a suggestion: what would happen if—like in physics—we take a limit of this process? What is “bulk” or “continuum” metamathematics like?

In the history of human mathematics, there’ve been a few million theorems published—defining in a sense the “human geography” of metamathematical space. But what about the “intrinsic geometry”? Is there a theory of this, perhaps analogous to our theory of physics? A “physicalized metamathematics”? And what does it tell us about the “infinite-time limit” of mathematics, or the general nature of mathematics?

If we try to fully formalize mathematics, we typically end up with a very “non-human” “machine code”. In physics there might be a hundred orders of magnitude between the atoms of space and our typical experience. In present-day formalized mathematics, there might be four or five orders of magnitude from the “machine code” to typical statements of theorems that humans would deal with.

At the level of the machine code, there’s all sorts of computational irreducibility and undecidability, just like in physics. But somehow at the “human level” there’s enough computational reducibility that one can meaningfully “do mathematics”. I used to think that this was some kind of historical accident. But I now suspect that—just like with physics—it’s a fundamental feature of the involvement of computationally bounded human “observers”. And with the correspondence of formalism, one’s led to ask things like what the analog of relativity—or quantum mechanics—is in “bulk metamathematics”, and, for example, how it might relate to things like “computationally bounded category theory”.

And, yes, this is interesting in terms of understanding the nature of mathematics. But mathematics also has its own deep stack of results and intuition, and in studying mathematics using the same formalism as physics, we also get to use this in our efforts to understand physics.

How could all this be relevant to chemistry? Well, a network of all possible chemical reactions is once again a multiway graph. In chemical synthesis one’s usually interested in just picking out one particular “pathway”. But what if we think “multiway style” about all the possibilities? Branchial space is a map of chemical species. And we now have to understand what kind of laws a “computationally bounded chemical sensor” might “perceive” in it.

Imagine we were trying to “do a computation with molecules”. The “events” in the computation could be thought of as chemical reactions. But now instead of just imagining “getting a single molecular result”, consider using the whole multiway system “as the computation”. It’s basically the same story as distributed computing. And while we don’t yet have a good way to “program” like this, our Physics Project now gives us a definite direction. (Yes, it’s ironic that this kind of molecular-scale computation might work using the same formalism as quantum mechanics—even though the actual processes involved don’t have to be “quantum” in the underlying physics sense.)

When we look at biological systems, it’s always been a bit of a mystery how one should think about the complex collections of chemical processes they involve. In the case of genetics we have the organizing idea of digital information and DNA. But in the general case of systems biology we don’t seem to have overarching principles. And I certainly wonder whether what’s missing is “multiway thinking” and whether using ideas from our Physics Project we might be able to get a more global understanding—like a “general relativity” of systems biology.

It’s worth pointing out that the detailed techniques of hypergraph evolution are probably applicable to biological morphogenesis. Yes, one can do a certain amount with things like continuum reaction-diffusion equations. But in the end biological tissue—like, we now believe, physical space—is made of discrete elements. And particularly when it comes to topology-changing phenomena (like gastrulation) that’s probably pretty important.

Biology hasn’t generally been a field that’s big on formal theories—with the one exception of the theory of natural selection. But beyond specific few-whole-species-dynamics results, it’s been difficult to get global results about natural selection. Might the formalism of our models help? Perhaps we’d be able to start thinking about individual organisms a bit like we think about atoms of space, then potentially derive large-scale “relativity-style” results, conceivably about general features of “species space” that really haven’t been addressed before.

In the long list of potential areas where our models and formalism could be applied, there’s also economics. A bit like in the natural selection case, the potential idea is to think about in effect modeling every individual event or “transaction” in an economy. The causal graph then gives some kind of generalized supply chain. But what is the effect of all those transactions? The important point is that there’s almost inevitably lots of computational irreducibility. Or, in other words, much like in the Second Law of Thermodynamics, the transactions rapidly start to not be “unwindable” by a computationally bounded agent, but have robust overall “equilibrium” properties, that in the economic case might represent “meaningful value”—so that the robustness of the notion of monetary value might correspond to the robustness with which thermodynamic systems can be characterized as having certain amounts of heat.

But with this view of economics, the question still remains: are there “physics-like” laws to be found? Are there economic analogs of reference frames? (In an economy with geographically local transactions one might even expect to see effects analogous to relativistic time dilation.)

To me, the most remarkable thing is that the formalism we’ve developed for thinking about fundamental physics seems to give us such a rich new framework for discussing so many other kinds of areas—and for pooling the results and intuitions of these areas.

And, yes, we can keep going. We can imagine thinking about machine learning—for example considering the multiway graph of all possible learning processes. We can imagine thinking about linguistics—starting from every elementary “event” of, say, a word being said by one person to another. We can even think about questions in traditional physics—like one of my old favorites, the hard-sphere gas—analyzing them not with correlation functions and partition functions but with causal graphs and multiway graphs.

A year ago, as we approached the launch of the Wolfram Physics Project, we felt increasingly confident that we’d found the correct general formalism for the “machine code” of the universe. We’d built intuition by looking at billions of possible specific rules, and we’d discovered that in our models many features of physics are actually quite generic, and independent of specific rules.

But we still assumed that in the end there must be some specific rule for our particular universe. We thought about how we might find it. And then we thought about what would happen if we found it, and how we might imagine answering the question “Why this rule, and not another?”

But then we realized: actually, the universe does not have to be based on just one particular rule; in some sense it can be running all possible rules, and it is merely through our perception that we attribute a specific rule to what we see about the universe.

We already had the concept of a multiway graph, generated by applying all possible update events, and tracing out the different histories to which they lead. In an ordinary multiway graph, the different possible update events occur at different places in the spatial hypergraph. But we imagined generalizing this to a rulial multiway graph, generated by applying not just updates occurring in all possible places, but also updates occurring with all possible rules.

At first one might assume that if one used all possible rules, nothing definite could come out. But the fact that different rules can potentially lead to identical states causes a definite rulial multiway graph to be knitted together—including all possible histories, based on all possible sequences of rules.
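A toy version of this “knitting together” can be sketched in a few lines of Python (names hypothetical): apply several string-rewriting rules at once, label each edge with the rule that produced it, and merge identical states. Even with just two rules, different rules can reach the same state:

```python
def rulial_multiway(initial, rules, steps):
    """Multiway graph built by applying *every* rule at every step.
    Edges are labeled with the rule that produced them; identical
    states reached via different rules merge into a single node."""
    states, edges, frontier = {initial}, set(), {initial}
    for _ in range(steps):
        nxt = set()
        for s in frontier:
            for name, (lhs, rhs) in rules.items():
                for i in range(len(s) - len(lhs) + 1):
                    if s[i:i + len(lhs)] == lhs:
                        t = s[:i] + rhs + s[i + len(lhs):]
                        edges.add((s, t, name))
                        nxt.add(t)
        states |= nxt
        frontier = nxt
    return states, edges

# Two different rules applied to "AB" both produce "ABB".
states, edges = rulial_multiway("AB", {"r1": ("A", "AB"), "r2": ("B", "BB")}, 1)
```

Starting from `"AB"`, the rules `"A" → "AB"` and `"B" → "BB"` both yield the state `"ABB"`, so the two rule-labeled edges converge on a single node: a miniature of how a definite rulial multiway graph gets knitted together out of all possible rules.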

What could an observer embedded in such a rulial multiway graph perceive? Just as for causal graphs or ordinary multiway graphs, one can imagine defining a reference frame—here a “rulial frame”—that makes the observer perceive the universe as evolving through a series of slices in rulial space, or in effect operating according to certain rules. In other words, the universe follows all possible rules, but an observer in a particular rulial frame describes its operation according to particular rules.

And the critical point is then that this is consistent because the evolution in the rulial multiway graph inevitably shows causal invariance. At first this all might seem quite surprising. But the thing to realize is that the Principle of Computational Equivalence implies that collections of rules will generically show computation universality. And this means that whatever rulial frame one picks—and whatever rules one uses to describe the evolution of the universe—it’ll always be possible to use those rules to emulate any other possible rules.

There is a certain ultimate abstraction and unification in all this. In a sense it says that the only thing one ultimately needs to know about our universe is that it is “computational”—and from there the whole formal structure of our models takes over. It also tells us that there is ultimately only one universe—though different rulial frames may describe it differently.

How should we think about the limiting rulial multiway graph? It turns out that something like it has also appeared in the upper reaches of pure mathematics in connection with higher category theory. We can think of our basic multiway graphs as related to (weak versions of) ordinary categories (a little different from how categorical quantum mechanics works in our models). But when we add in equivalences between branches in the multiway system we get a 2-category. And if we keep adding higher-and-higher-order equivalences, we get higher and higher categories. But in the infinite limit it turns out the structure we get is exactly the rulial multiway graph—so that now we can identify this as an infinity category, or more specifically an infinity groupoid.

Grothendieck’s conjecture suggests that there is in a sense inevitable geometry in the infinity groupoid, and it’s ultimately this structure that seems to “trickle down” from the rulial multiway graph to everything else we look at, and imply, for example, that there can be meaningful notions of physical and branchial space.

We can think of the limiting multiway graph as a representation of physics and the universe. But the exact same structure can also be thought of as a kind of metamathematical limit of all possible mathematics—in a sense fundamentally tying together the foundations of physics and mathematics.

There are many details and implications to this, that we’re just beginning to work out. The ultimate formation of the rulial multiway graph depends on identifying when states or objects can be treated as the same, and merged. In the case of physics, this can be seen as a feature of the observer, and the reference frames they define. In the case of mathematics, it can be seen as a feature of the underlying axiomatic framework used, with the univalence axiom of homotopy type theory being one possible choice.

The whole concept of rulial space raises the question of why we perceive the kind of laws of physics we do, rather than other ones. And the important recent realization is that it seems deeply connected to what we define as consciousness.

I must say that I’ve always been suspicious about attempts to make a scientific framework for consciousness. But what’s recently become clear is that in our approach to physics there’s both a potential way to do it, and in a sense it’s fundamentally needed to explain what we see.

Long ago I realized that as soon as you go beyond humans, the only viable general definition of intelligence is the ability to do sophisticated computation—which the Principle of Computational Equivalence says is quite ubiquitous. One might have thought that consciousness is an “add-on” to intelligence, but actually it seems instead to be a “step down”. Because it seems that the key element of what we consider consciousness is the notion of having a definite “thread of experience” through time—or, in other words, a sequential way to experience the universe.

In our models the universe is doing all sorts of complicated things, and showing all sorts of computational irreducibility. But if we’re going to sample it in the way consciousness does, we’ll inevitably pick out only certain computationally reducible slices. And that’s precisely what the laws of physics we know—embodied in general relativity and quantum mechanics—correspond to. In some sense, therefore, we see physics as we do because we are observing the universe through the sequential thread of experience that we associate with consciousness.

Let me not go deeper into this here, but suffice it to say that from our science we seem to have reached an interesting philosophical conclusion about the way that we effectively “create” our description of the universe as a result of our own sensory and cognitive capabilities. And, yes, that means that “aliens” with different capabilities (or even just different extents in physical or branchial space) could have descriptions of the universe that are utterly incoherent with our own.

But, OK, so what can we say about rulial space? With a particular description of the universe we’re effectively stuck in a particular location or frame in rulial space. But we can imagine “moving” by changing our point of view about how the universe works. We can always make a translation, but that inevitably takes time.

And in the end, just like with light cones in physical space, or entanglement cones in branchial space, there’s a limit to how fast a particular translation distance can be covered, defined by a “translation cone”. And there’s a “maximum translation speed” ρ, analogous to the speed of light *c* in space or the maximum entanglement speed ζ in branchial space. And in a sense ρ defines the ultimate “processor speed” for the universe.

In defining the speed of light we have to introduce units for length in space. In defining ρ we have to introduce units for the length of descriptions of programs or rules—so, for example, ρ could be measured in units of “Wolfram Language tokens per second”. We don’t know the value of ρ, but a rough (and possibly unreliable) estimate might be 10^450 WLT/second. And just as in general relativity and quantum mechanics, one can expect that there will be all sorts of effects scaled by ρ that occur in rulial space. (One example might be a “quantum-like uncertainty” that provides limits on inductive inference by not letting one distinguish “theories of the universe” until they’ve “diverged far enough” in rulial space.)

The concept of rulial space is a very general one. It applies to physics. It applies to mathematics. And it also applies to pure computation. In a sense rulial space provides a map of the computational universe. It can be “coordinatized” by representing computations in terms of Turing machines, cellular automata, Wolfram models, or whatever. But in general we can ask about its limiting geometrical and topological structure. And here we see a remarkable convergence with fundamental questions in theoretical computer science.

For example, particular geodesic paths in rulial space correspond to maximally efficient deterministic computations that follow a single rule. Geodesic balls correspond to maximally efficient nondeterministic computations that can follow a sequence of rules. So then something like the P vs. NP question becomes what amounts to a geometrical or topological question about rulial space.

In our Physics Project we set out to find a fundamental theory for physics. But what’s become clear is that in thinking about physics we’re uncovering a formal structure that applies to much more than just physics. We already had the concept of computation in all its generality—with implications like the Principle of Computational Equivalence and computational irreducibility. But what we’ve now uncovered is unification at a different level, not about all computation, but about computation as perceived by computationally bounded observers, and about the kinds of things about which we can expect to make theories as powerful as the ones we know in physics.

For each field what’s key is to identify the right question. What is the analog of space, or time, or quantum measurement, or whatever? But once we know that, we can start to use the machinery our formalism provides. And the result is a remarkable new level of unification and power to apply to science and beyond.

How should one set about finding a fundamental theory of physics? There was no roadmap for the science itself, and no roadmap for how the science should be done. And part of the unfolding story of the Wolfram Physics Project is about its process, and about new ways of doing science.

Part of what has made the Wolfram Physics Project possible is ideas. But part of it is also tools, and in particular the tall tower of technology that is the Wolfram Language. In a sense the whole four decades of history behind the Wolfram Language has led us to this point. The general conception of computational language built to represent everything, including, it now seems, the whole universe. And the extremely broad yet tightly integrated capabilities of the language that make it possible to so fluidly and efficiently pursue each different piece of research that is needed.

For me, the Wolfram Physics Project is an exciting journey that, yes, is going much better than I ever imagined. From the start we were keen to share this journey as widely as possible. We certainly hoped to enlist help. But we also wanted to open things up so that as many people as possible could experience and participate in this unique adventure at the frontiers of science.

And a year later I think I can say that our approach to open science has been a great and accelerating success. An increasing number of talented researchers have become involved in the project, and have been able to make progress with great synergy and effectiveness. And by opening up what we’re doing, we’ve also been able to engage with—and hopefully inspire—a very wide range of people even outside of professional science.

One core part of what’s moving the project forward is our tools and the way we’re using them. The idea of computational language—as the Wolfram Language uniquely embodies—is to have a way to represent things in computational terms, and be able to communicate them like that. And that’s what’s happening all the time in the Wolfram Physics Project. There’s an idea or direction. And it gets expressed in Wolfram Language. And that means it can explicitly and repeatably be understood, run and explored—by anyone.

We’re posting our Wolfram Language working notebooks all the time—altogether 895 of them over the past year. And we’re packaging functions we write into the Wolfram Function Repository—130 of them over the past year—all with source code, all documented, and all instantly and openly usable in any Wolfram Language system. It’s become a rhythm for our research. First explore in working notebooks, adding explanations where appropriate to make them readable as computational essays. Then organize important functions and submit them to the Function Repository, then use these functions to take the next steps in the research.

This whole setup means that when people write about their results, there’s immediately runnable computational language code. And in fact, at least in what I’ve personally written, I’ve had the rule that for any picture or result I show (so far 2385 of them) it must be possible to just click it, and immediately get code that will reproduce it. It might sound like a small thing, but this kind of fluid immediacy to being able to reproduce and build on what’s been done has turned out to be tremendously important and powerful.

There are so many details that, in a sense, come as second nature given our long experience with production software development. Being careful and consistent about the design of functions. Knowing when it makes sense to optimize at the cost of having less flexible code. Developing robust standardized visualizations. There are lots of what seem like small things that have turned out to be important. Like having consistent color schemes for all our various kinds of graphs, so when one sees what someone has done, one immediately knows “that’s a causal graph”, “that’s a branchial graph” and so on, without even having to read any explanation.

But in addition to opening up the functions and ongoing notebooks we produce, we’ve also done something more radical: we’ve opened up our process of work, routinely livestreaming our working meetings. (There’ve been 168 hours of them this year; we’ve now also posted 331 hours from the 6 months before the launch of the project.) I’ve personally even gone one step further: I’ve posted “video work logs” of my personal ongoing work (so far, 343 hours of them)—right down to, for example, the writing of this very sentence.

We started doing all this partly as an experiment, and partly following the success we’ve had over the past few years in livestreaming our internal meetings designing the Wolfram Language. But it’s turned out that capturing our Physics Project being done has all sorts of benefits that we never anticipated. You see something in a piece I’ve written. You wonder “Where did that come from?”. Well, now you can drill all the way down, to see just what went into making it, missteps and all.

It’s been great to share our experience of figuring things out. And it’s been great to get all those questions, feedback and suggestions in our livestreams. I don’t think there’s any other place where you can see science being done in real time like this. Of course it helps that it’s so uniquely easy to do serious research livecoding in the Wolfram Language. But, yes, it takes some boldness (or perhaps foolhardiness) to expose one’s ongoing steps—forward or backward—in real time to the world. But I hope it helps people see more about what’s involved in figuring things out, both in general and specifically for our project.

When we launched the project, we put online nearly a thousand pages of material, intended to help people get up to speed with what we’d done so far. And within a couple of months after the launch, we had a 4-week track of our Wolfram Summer School devoted to the Wolfram Physics Project. We had 30 students there (as well as another 4 from our High School Summer Camp)—all of whom did projects based on the Wolfram Physics Project.

And after the Summer School, responding to tremendous demand, we organized two week-long study sessions (with 30 more students), followed in January by a 2-week Winter School (with another 17 students). It’s been great to see so many people coming up to speed on the project. And so far there’ve been a total of 79 publications, “bulletins” and posts that have come out of this—containing far more than I could possibly have summarized here.

There’s an expanding community of people involved with the Wolfram Physics Project. And to help organize this, we created our Research Affiliate and Junior Research Affiliate programs, now altogether with 49 people from around the world involved.

Something else that’s very important is happening too: steadily increasing engagement from a wide range of areas of physics, mathematics and computer science. In fact, with every passing month it seems like there’s some new research community that’s engaging with the project. Causal set theory. Categorical quantum mechanics. Term rewriting. Numerical relativity. Topos theory. Higher category theory. Graph rewriting. And a host of other communities too.

We can view the achievement of our project as being in a sense to provide a “machine code” for physics. And one of the wonderful things about it is how well it seems to connect with a tremendous range of work that’s been done in mathematical physics—even when it wasn’t yet clear how that work on its own might relate to physical reality. Our project, it seems, provides a kind of Rosetta stone for mathematical physics—a common foundation that can connect, inform and be informed by all sorts of different approaches.

Over the past year there’s been a repeated, rather remarkable experience. For some reason or another we’ll get exposed to some approach or idea. Constructor theory. Causal dynamical triangulation. Ontological bases. Synthetic differential geometry. ER=EPR. And we’ll use our models as a framework for thinking about it. And we’ll realize: “Gosh, now we can understand that!” And we’ll see how it fits in with our models, how we can learn more about our models from it—and how we can use our models and our formalism to bring in new ideas to advance the thing itself.

In some ways our project represents a radical shift from the past century or so of physics. And more often than not, when such intellectual shifts are made in the history of science, they’ve been accompanied by all kinds of difficulties in connecting with existing communities. But I’m very happy to report that over the past year our project has been doing quite excellently in connecting with existing communities—no doubt helped by its “Rosetta stone” character. And as we progress, we’re looking forward to an increasing network of collaborations, both within the community that’s already formed and with other communities.

And over the coming year, as we start to more seriously explore the implications of our models and formalism even beyond physics, I’m anticipating still more connections and collaborations.

It’s hard to believe it’s only been a little over 18 months since we started working in earnest on the Wolfram Physics Project. So much has happened, and we’ve gotten so much further than I ever thought possible. And it feels like a whole new world has opened up. So many new ideas, so many new ways of looking at things.

I’ve been fortunate enough to have already had a long and satisfying career, and it’s a surprising and remarkable thing at this stage to have what seems like a fresh, new start. Of course, in some respects I’ve spent much of my life preparing for what is now the Wolfram Physics Project. But the actuality of it has been so much more exciting and invigorating than anything I imagined. There’ve been so many questions—about all sorts of different things—that I’ve been accumulating and mulling over for decades. And suddenly it seems as if a door I never knew existed has opened, and now it’s possible to go forward on a dizzying array of fronts.

I’ve spent most of my life building a whole tower of things—alternating between science and technology. And in this tower it’s remarkable the extent to which each level has built on what’s come before: tools from technology have made it possible to explore science, and ideas from science have made it possible to create technology. But a year ago I thought the Wolfram Physics Project might finally be the end of the line: a piece of basic science that was really just science, and nothing but science, with no foreseeable implications for technology.

But it turns out I was completely wrong. And in fact of all the pieces of basic science I’ve ever done, the Wolfram Physics Project may be the one which has the greatest short-term implications for technology. We’re not talking about building starships using physics. We’re talking about taking the formalism we’ve developed for physics—and applying it, now informed by physics, in all sorts of very practical settings in distributed computing, modeling, chemistry, economics and beyond.

In the end, one may look back at many of these applications and say “that didn’t really need the Physics Project; we could have just got there directly”. But in my experience, that’s not how intellectual progress works. It’s only by building a tower of tools and ideas that one can see far enough to understand what’s possible. And without that decades or centuries may go by, with the path forward hiding in what will later seem like plain sight.

A year ago I imagined that in working on the Wolfram Physics Project I’d mostly be doing things that were “obviously physics”. But in actuality the project has led me to pursue all sorts of “distractions”. I’ve studied things like multiway Turing machines, which, yes, are fairly obviously related to questions about quantum mechanics. But I’ve also studied combinators and tag systems (OK, those were prompted by their centenaries). And I spent a while looking at the empirical mathematics of Euclid and beyond.

And, yes, the way I approached all these things was strongly informed by our Physics Project. But what’s surprising is that I feel like doing each of these projects advanced the Physics Project too. The “Euclid” project has started to build a bridge that lets us import the intuition and formalism of metamathematics—informed by the concrete example of Euclid’s *Elements*. The combinator project deepened my understanding of causal invariance and of the possible structures of things like space. And even the historical scholarship I did on combinators taught me a lot about issues in the foundations of mathematics that have languished for a century but I now realize are important.

In all, the pieces I’ve written over the past year add up to about 750 pages of material (and, yes, that number makes me feel fairly productive). But there’s so much more to do and to write. A few times in my life I’ve had the great pleasure of discovering a new paradigm and being able to start exploring what’s possible within it. And in many ways the Wolfram Physics Project has—yes, after three decades of gestation—been the most sudden of these experiences. It’s been an exciting year. And I’m looking forward to what comes next, and to seeing the new paradigm that’s been created develop both in physics and beyond.

One of the great pleasures of this year has been the energy and enthusiasm of people working on the Wolfram Physics Project. But I’d particularly like to mention Jonathan Gorard, who has achieved an exceptional level of productivity and creativity, and has been a driving force behind many of the advances described here.

Finding bugs and fixing them is more than a passion of mine—it’s a compulsion. Several years ago, as a QA developer, I created MUnit, a framework for authoring and running unit tests in the Wolfram Language. Since then, I’ve created more tools to help developers write better Wolfram Language code while seamlessly checking for bugs in the process.

Writing good tests requires a lot of knowledge and a great deal of time. Since we need to test and resolve bugs as quickly as possible in order to release new features on schedule, we turn to static analysis.

Static analysis is the process of examining source code before running it in order to try to predict its behavior and find problems. As a testing method, it’s incredibly useful. Finding problems while the code is running isn’t always viable. It can also be very expensive to run the code—all the more so if the code fails.

Considering the sheer volume of code that makes up the Wolfram Language (there are 1.2 million lines of kernel startup Wolfram Language code across 1,900 files and an additional 850,000 lines of paclet Wolfram Language code across 3,700 files), it’s imperative to have a strategy to test all of this code for bugs. Wolfram has tests dedicated to every square inch of the Wolfram Language—some of which I wrote!

The CodeInspector paclet is one of those vital static analysis tools that allow developers to do better work. Included in the recent release of Mathematica 12.2, CodeInspector scans Wolfram Language code and reports problems without requiring the user to manually run the code. CodeInspector, along with CodeParser and CodeFormatter, forms the CodeTools suite, which is used by both internal and external users to improve the quality of their Wolfram Language code.

In general, static analysis cannot find all possible bugs in a program. (That is a consequence of the undecidability of the halting problem by way of Rice’s theorem.) But static analysis can still provide plenty of important information!

For example, it is easy to see that `&& True` is not needed in the test here:
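A minimal snippet exhibiting this pattern (the same one used with CodeInspect later in this post) is:

```wolfram
(* a && True always reduces to a, so the && True is redundant *)
If[a && True, b, b]
```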

This may be leftover debug code, or simply a mistake in logic. A static analysis tool may warn that the `&& True` is not needed and could be removed or changed to something else. While static analysis tools cannot discern the intention of the author, they can find classes of “likely problems” that merit investigation.

Creating a static analysis tool to test for bugs in the Wolfram Language comes with a very specific set of challenges. The Wolfram Language is incredibly dynamic and flexible as a coding language. While this would usually be considered a bonus for developers, it does make abstract modeling very difficult. Functions can be redefined at runtime, and it’s complicated to define precisely the concept of a value in the Wolfram Language.

Given the limitations inherent in the language, CodeInspector does lightweight static analysis based on pattern matching of syntax trees. This is similar to the “linting tools” that exist for other languages. (In fact, the original name of the CodeInspector paclet was Lint! It quickly became apparent that it would be doing more than just linting, so it was renamed to CodeInspector.)

CodeInspector currently has around two hundred built-in rules that are applied to code under inspection. The rules range from common syntactical problems (such as missing commas) to more obscure ones (such as using Q functions in symbolic solvers). Many rules include suggestions for fixing the code.

CodeInspector is included in Mathematica 12.2. If you have an older version of Mathematica, you can get CodeInspector by evaluating the following:

PacletInstall["CodeParser"];
PacletInstall["CodeInspector"];
Needs["CodeInspector`"]

To programmatically get a list of all the problems found in a piece of code, you can run CodeInspect on it:

CodeInspect["If[a && True, b, b]"]

To get a visual summary of all the problems found in the test, use CodeInspectSummarize (included in the CodeInspector paclet):

CodeInspectSummarize["If[a && True, b, b]"]

You can even use CodeInspectSummarize on the command line.

There are various ways to control the output of CodeInspectSummarize. In order to do so, we need to categorize problems, which is an interesting problem in and of itself! This is because we need to strike the right balance between exposing many properties of problems in a queryable way versus having a system that is easy for humans to consume and understand.

I use two dimensions, at least for now: severity and ConfidenceLevel. If the output shows that there are problems, severity denotes how severe each problem is. Will the problem ever impact users? Will it accidentally launch nuclear warheads? Knowledge is power, especially when you need to understand the impact of the problems at hand.

ConfidenceLevel denotes the level of confidence that the problem is actually a problem and not a false positive. ConfidenceLevel is a `Real` value between 0.0 and 1.0. ConfidenceLevel → 0.0 means no confidence at all in the problem being reported, while ConfidenceLevel → 1.0 means that there is definitely an issue at hand, like mismatched brackets in a function. A ConfidenceLevel of 0.5 means that roughly half the time this problem appears, it is a false positive. More experimental rules in CodeInspector have lower ConfidenceLevel values, and as I add heuristics to remove false positives, I increase the ConfidenceLevel for those problems. Re-appropriating the ConfidenceLevel symbol for my purposes may be an abuse of notation, but it is convenient.

Because the Wolfram Language is so dynamic, it’s difficult to tell when an alleged bug is actually a bug. Even in the previous examples, it is possible that the `If` statement was written deliberately. Only outright syntax errors, such as mismatched brackets, can be flagged with 100% certainty; even “obvious” problems don’t necessarily have ConfidenceLevel → 1.0. Thus, every problem reported by CodeInspector has an associated ConfidenceLevel that indicates the confidence that the problem is actually a problem.
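As a concrete illustration (this input string is a made-up example, not one from the post), a mismatched bracket is flagged as a fatal syntax error:

```wolfram
(* "f[1, 2" has an unclosed bracket; a Fatal problem is reported with full confidence *)
CodeInspectSummarize["f[1, 2"]
```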

CodeInspectSummarize, by default, reports issues with 95% confidence or higher.

There are also four different severities associated with problems:

- A Remark is a problem with code style more than anything else.
- A Warning is a problem that may not give incorrect results but is still incorrect.
- An Error is a problem that will execute incorrect code and give incorrect results.
- A Fatal is an unrecoverable error such as a syntax error.

These severities should be interpreted together with ConfidenceLevel: a severity is only meaningful if the problem is not a false positive.

The Wolfram Language has a powerful built-in pattern matcher, and it can be used to do static analysis on expressions.
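As a rough sketch of the idea (not CodeInspector’s actual rule engine), one can hold a piece of code unevaluated and pattern-match against it:

```wolfram
(* hold the code so it isn't evaluated, then look for the a && True pattern *)
expr = HoldComplete[If[a && True, b, b]];
MatchQ[expr, HoldComplete[If[_ && True, __]]]
(* returns True *)
```

Real rules work on parsed syntax trees rather than raw expressions, but the underlying mechanism is the same pattern matcher.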

I designed CodeInspector’s rule engine to include knowledge of the relative position of the code under inspection, so we can move up the syntax tree to parent nodes and ask other questions. This is useful when writing a rule to make sure that some syntax occurs lexically within some other container syntax.

For example:

CodeInspectSummarize["Select[names, FileType[#] === Directory]", ConfidenceLevel -> 0.8]

This illustrates a common mistake: forgetting the `&`.

Starting with the location of the `#`, we go up the tree, looking for a matching `&`. No `&` is ever found, so a problem is reported. Notice that this rule has a lower confidence, so I need to specify ConfidenceLevel → 0.8 to see it.

You can choose from different rules depending on the syntax that you care about. For example, if you want a rule to find cases where a `Real` is being added to an `Integer`, then you do not care about the concrete syntax of `1.2+3` versus `Plus[1.2, 3]`.

There are three different levels of syntax:

- Concrete syntax: where white space matters.
- Aggregate syntax: trivia has been removed and you care about the actual operators used.
- Abstract syntax: more abstract issues such as unused variables, bad symbols, bad function calls, etc.
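Assuming the CodeParser paclet is installed (it is a dependency of CodeInspector), its CodeConcreteParse and CodeParse functions expose the concrete and abstract levels, respectively. For instance, `1.2+3` and `Plus[1.2, 3]` differ concretely, but both abstract to a call to Plus:

```wolfram
Needs["CodeParser`"]
(* concrete parse keeps the infix operator and the exact characters typed *)
CodeConcreteParse["1.2+3"]
(* abstract parses of both forms yield a CallNode for Plus (source positions aside) *)
CodeParse["1.2+3"]
CodeParse["Plus[1.2, 3]"]
```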

In this example, I forgot to put a semicolon at the end of the line, so the entire expression is treated as `a=1*a+b`. This is incorrect, and leads to infinite recursion when the code is run:

CodeInspectSummarize["Module[{a},
  a = 1
  a + b
]"]

In this example, I forgot to insert a question mark for `PatternTest`.

CodeInspector catches cases when Q functions are being treated as a `Head` and suggests inserting a question mark:

CodeInspectSummarize["a_EvenQ"]

In this example, I am trying to specify `ImageSize` using the output of `ImageDimensions`, but the two functions do not have the same units. The `ImageSize` option expects points, but `ImageDimensions` returns pixels:

CodeInspectSummarize["Image[incolor, ImageSize -> ImageDimensions[img]]", ConfidenceLevel -> 0.8]

CodeInspector is run regularly on the internal code written by developers at Wolfram Research. The following are two recently encountered problems that were found and fixed by CodeInspector. These problems are subtle, and would have been hard to find by writing tests.

In the first, parentheses were needed to wrap the entire right-hand side of a definition; without them, the code parsed with a completely different grouping. This is certainly not what the author intended.

In the second, an extra underscore after `inc` meant that `{__}` was being treated as the `Optional` default value of `inc`. But the intention was for `inc` to be a pattern matching `{__}`. CodeInspector was able to find these issues and get them fixed before the code was released.
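A sketch of the distinction, using a simplified stand-in for the actual internal code, is:

```wolfram
(* inc : {__} is Pattern[inc, {__}]: inc names a match for a nonempty list *)
FullForm[inc : {__}]
(* inc_ : {__} is Optional[Pattern[inc, Blank[]], {__}]: a blank with default {__} *)
FullForm[inc_ : {__}]
```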

CodeInspectSummarize reports problems with a given `File` in the exact same way as it reports problems with a given `String`.

Because Wolfram Language code is interpreted, and therefore does not have a compilation step, it may not be clear when would be the best time to scan for problems. In practice, I’ve found that the time when paclets are built is a good time to scan.

I have scripted CMake to scan each Wolfram Language file before building the paclet. When I have a typo in my code and try to build the CodeInspector paclet itself, the problem is reported during the build, and I can fix it immediately in the source code. Otherwise, I would have built the paclet with bad code, and would have encountered strange errors while trying to run it. This highlights one of the many reasons why it’s important to catch and fix problems as soon as possible—demonstrating the significance of CodeInspector by testing CodeInspector itself.
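A minimal sketch of this kind of CMake hook (the target and file names here are hypothetical, and wolframscript must be on the PATH) might look like:

```cmake
# run CodeInspector over a source file before the paclet build target runs
add_custom_command(TARGET paclet PRE_BUILD
  COMMAND wolframscript -code
    "Needs[\"CodeInspector`\"]; Print[CodeInspectSummarize[File[\"Kernel/Inspect.wl\"]]]"
  COMMENT "Scanning Wolfram Language sources with CodeInspector")
```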

New rules are continually being added to CodeInspector, which you can check out in the CodeInspector repository on GitHub. Many of the current rules were inspired by suggestions from users, so please let me know in the comments section if you have any ideas or suggestions.


The Wolfram Language has several hundred built-in mathematical functions, ranging from sine to Heun. As a user, you can extend this collection in infinitely many ways by applying arithmetic operations and function composition. This can lead you to define expressions of bewildering complexity, such as the following:

f = SinhIntegral[LogisticSigmoid[ScorerHi[Tanh[AiryAi[HermiteH[-(1/2), x] - x + 1]]]]];

You may then ask, “Is f continuous?” or “Can f be written as a composition of an increasing function with another function?” The powerful new tools for studying function properties in Version 12.2 provide quick answers to such questions—opening the doors for applying a network of theorems and ideas that have been developed by mathematicians during the last few centuries.
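With the definition above, the first question can be posed directly (evaluation may take a while for such a complicated composition):

```wolfram
(* yields True if f is continuous for all real x *)
FunctionContinuous[f, x]
```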

The ancient Babylonians constructed tables for the squares and cubes of natural numbers (nowadays, we would refer to them as functions defined on the set of natural numbers). Although more informal uses of functions were made during the centuries that followed, the systematic use of functions commenced after the discovery of analytic geometry by René Descartes. In particular, Sir Isaac Newton made extensive use of power series representations for functions in his development of calculus.

Gottfried Leibniz, the co-inventor of calculus, made the first formal use of the word “function” in 1673. Next, Leonhard Euler took a giant leap forward when he identified a function with its analytic expression (essentially, a formula). Euler’s simple characterization of a function was questioned by Joseph Fourier, who gave examples of discontinuous functions that could be represented by infinite trigonometric series—showing that analytic expressions were inadequate for describing functions that commonly arise in many practical applications.

Augustin-Louis Cauchy, Karl Weierstrass and Bernhard Riemann developed the theory of complex functions, in which the singularities of functions determine their global behavior in the complex plane. Complex functions also provided the correct setting for the magnificent theory of elliptic functions and integrals, developed by mathematical geniuses Niels Henrik Abel and Carl Jacobi.

Since then, the function concept has continued to evolve, driven by the needs of pure and applied mathematics. These days, we think of a function simply as an abstract, many-to-one relation between arbitrary sets of objects.

Let’s begin our exploration of the new function properties in Version 12.2 by using the Babylonian examples of the square and cube functions (denoted by s and c, respectively):

s[x_] := x^2

c[x_] := x^3

Here is a plot of the functions:

GraphicsRow[{Plot[s[x], {x, -5, 5}, PlotLabel -> "Square"], Plot[c[x], {x, -5, 5}, PlotLabel -> "Cube"]}]

As shown in the following, a horizontal line drawn above the axis intersects the first graph at a pair of points, while any horizontal line intersects the second graph at exactly one point:

GraphicsRow[{Plot[{s[x], 20}, {x, -5, 5}], Plot[{c[x], 20}, {x, -5, 5}]}]

Thus, s is not injective (one-to-one), while c is injective. This can be confirmed by using `FunctionInjective`:

FunctionInjective[s[x], x]

FunctionInjective[c[x], x]

Similarly, by considering horizontal lines drawn below the axis, one can conclude that s is not surjective (onto), while c is surjective:

{FunctionSurjective[s[x], x], FunctionSurjective[c[x], x]}

Combining these two facts, we conclude that the seemingly simple square function is not bijective (one-to-one and onto), while the less elementary cube function has this property:

{FunctionBijective[s[x], x], FunctionBijective[c[x], x]}

On the other hand, the square function is non-negative everywhere, while the cube function takes on both positive and negative values. This can be expressed succinctly by using `FunctionSign` as follows:

{FunctionSign[s[x], x], FunctionSign[c[x], x]}

The situation is reversed if strict positivity is enforced for the square function and the domain of the cube function is restricted to the positive real numbers:

{FunctionSign[s[x], x, StrictInequalities -> True], FunctionSign[{c[x], x > 0}, x]}

Finally, note that the square and cube functions belong to the family of polynomial functions, and therefore are both continuous:

{FunctionContinuous[s[x], x], FunctionContinuous[c[x], x]}

The trigonometric functions are traditionally regarded as elementary, but they provide nontrivial examples for some of the deeper function properties that are available in the latest release.

The sine function, `Sin`, occurs in problems such as those involving mechanical and electrical oscillations. It’s not a polynomial function, but it can be represented by a power series (a polynomial without a last term!) and is therefore an analytic function. This can be confirmed by using `FunctionAnalytic`:

FunctionAnalytic[Sin[x], x]

Here are the first few terms of its power series expansion:

Asymptotic[Sin[x], {x, 0, 12}]

The following plot shows that the approximation is valid over a limited range of `x`:

Plot[Evaluate[{Sin[x], %}], {x, -2 Pi, 2 Pi}, PlotStyle -> Thickness[0.01]]

The tangent function, `Tan`, is our first example of a meromorphic function (i.e. a function that is analytic everywhere, except at isolated pole singularities):

FunctionMeromorphic[Tan[x], x]

`Tan` inherits its singularities from the zeros of its denominator, `Cos`:

FunctionSingularities[Tan[x], x]

`Plot` uses knowledge of these singularities to provide an accurate graph of `Tan`:

Plot[Tan[x], {x, -5 Pi/2, 5 Pi/2}]

In contrast to `Tan`, its inverse, `ArcTan`, is smooth—as is its composition with the square function:

FunctionAnalytic[ArcTan[x^2], x]

A `Plot` of the function on the real line confirms this property:

Plot[ArcTan[x^2], {x, -5, 5}]

However, the extension of this function to the complex plane results in singularities, as shown here:

FunctionSingularities[ArcTan[z^2], z, Complexes]

`Reduce` can be used to obtain a detailed description of these singularities:

Reduce[%, z]

Here is a pretty `ComplexPlot` of the function and its singularities:

ComplexPlot[ArcTan[z^2], {z, -2 - 2 I, 2 + 2 I}, ExclusionsStyle -> Red]

The situation described here, in which the extension of a function to the complex plane leads to singularities, is commonplace in the study of mathematical functions, and will be encountered again in the next section.

The elliptic functions, which arise in the study of nonlinear oscillations and many other applications, have an air of mystery about them because they are seldom discussed in undergraduate courses. They become a little less mysterious when they are studied alongside the trigonometric functions.

In order to illustrate them, consider `JacobiSN` (analogous to `Sin` in the elliptic world):

Plot[JacobiSN[x, 1/4], {x, -20, 20}, PlotStyle -> {Red, Thickness[0.01]}]

Like the sine function, `JacobiSN` is an analytic and periodic function of `x`:

FunctionAnalytic[JacobiSN[x, 1/4], x]

FunctionPeriod[JacobiSN[x, 1/4], x]

The situation changes dramatically when this function is extended to the complex plane. This happens because `JacobiSN` is a quotient of `EllipticTheta` functions, which are themselves analytic and quasi-doubly periodic functions of `z`.

During the division process, `JacobiSN` picks up singularities from the complex zeros of its denominator, while a certain phase factor miraculously cancels out to make it doubly periodic. Thus:

FunctionMeromorphic[JacobiSN[z, 1/4], z]

FunctionPeriod[JacobiSN[z, 1/4], z, Complexes]

Here is a plot of `JacobiSN` that shows the singularities of the function, as well as the tessellation of the plane that results from the double periodicity:

ContourPlot[Re[JacobiSN[x + I y, 1/2]], {x, -4, 4}, {y, -4, 4}, Contours -> 20, FrameTicks -> {{{-2 EllipticK[1/2], -EllipticK[1/2], 0, EllipticK[1/2], 2 EllipticK[1/2]}, None}, {{-2 EllipticK[1/2], 0, 2 EllipticK[1/2]}, None}}]

The theory of elliptic functions is unmatched in its elegance and was pursued by many outstanding nineteenth-century mathematicians—including Charles Hermite, who once remarked, “I cannot leave the elliptic domain. Where the goat is attached she must graze.” I urge you to explore this wonderful subject further using the built-in elliptic functions and integrals in the Wolfram Language.

Piecewise-defined functions occur naturally in electrical engineering, finance and other applications. Discontinuities can arise at the boundaries where the different pieces of such a function are stitched together. `FunctionDiscontinuities` gives the location of these discontinuities.

For example, consider `RealSign`, which indicates the sign of the real number `x`:

Plot[RealSign[x], {x, -1, 1}]

`FunctionDiscontinuities` confirms that `RealSign` has a discontinuity at `x = 0`:

FunctionDiscontinuities[RealSign[x], x]

On the other hand, this function can be approximated by a continuous Fourier sine series:

FourierTrigSeries[RealSign[x], x, 15]

The overshoot (or “ringing”) of the Fourier series at the jump discontinuity in the following plot is a manifestation of the Gibbs phenomenon:

Plot[Evaluate[{RealSign[x], %}], {x, -1, 1}, PlotStyle -> Thickness[0.01]]

As another example, let’s compute the discontinuities of `f = x + Cos[x] UnitStep[Sin[x]]`, where `UnitStep` is the Heaviside step function:

f = x + Cos[x] UnitStep[Sin[x]]; FunctionDiscontinuities[f, x]

The discontinuity points between `-5 Pi` and `5 Pi` can be found using `Solve` as follows:

Solve[% && -5 Pi <= x <= 5 Pi, x, Reals]

The function and its discontinuities are visualized here:

Plot[f, {x, -5 Pi, 5 Pi}, Epilog -> {Red, Dotted, InfiniteLine[{x, 0}, {0, 1}] /. %}]

Finally, here is an injective `Piecewise` function that is not monotonic:

f = Piecewise[{{x, Sin[\[Pi] x] >= 0}, {2 Ceiling[x] - 1 - x, Sin[\[Pi] x] < 0}}]; Plot[f, {x, 0, 10}]

{FunctionInjective[{f, 0 < x < 10}, x], FunctionMonotonicity[{f, 0 < x < 10}, x]}

The Wolfram Language has over 250 special functions, including the classical special functions from mathematical physics as well as more modern functions that were created due to their relevance for probability and statistics or other application domains.

The new function properties are very useful for solving problems involving special functions. We use them here to find the global minimum for the example function from the introduction:

f = SinhIntegral[LogisticSigmoid[ScorerHi[Tanh[AiryAi[HermiteH[-(1/2), x] - x + 1]]]]];

To begin, define the functions `g` and `h` by:

g = SinhIntegral[LogisticSigmoid[ScorerHi[Tanh[z]]]]; h = HermiteH[-1/2, x] - x + 1;

The function `g` is monotonic on the real line:

FunctionMonotonicity[g, z]

Next, the function `f` can be written as a composition of `g` and `AiryAi[h]`:

f === (g /. z -> AiryAi[h])

Now, as indicated by the following plot, `AiryAi` has infinitely many local minima:

Plot[AiryAi[x], {x, -30, 30}]

Since there are infinitely many local minima, the global minimum cannot be found by checking them one at a time.

However, `Minimize` has built-in knowledge about global minima of special functions and quickly finds the required global minimum value:

Minimize[AiryAi[x], x]

It now suffices to show that the global minimum point of `AiryAi` is among the values attained by `h`. As a first step in the proof, note that:

{Limit[h, x -> -\[Infinity]], Limit[h, x -> \[Infinity]]}

By the intermediate value theorem, to prove that `h` attains all real values, it suffices to show that it is continuous, which can be done using `FunctionContinuous`:

FunctionContinuous[h, x]

Also, `h` is monotonic:

FunctionMonotonicity[h, x]

Hence, the global minimum of `f` is unique.

`Minimize` automatically uses a similar method to find the minimum value of `f`:

Minimize[f, x]

Finally, here is a plot of `f` along with its unique global minimum value:

Plot[f, {x, -30, 30}, Epilog -> {PointSize[Large], Red, Point[{x /. %[[2]], %[[1]]}]}] // Quiet

All the examples considered so far have used a single real or complex variable. Let’s look at a few examples for computing function properties of multivariable functions, using the spectacular visualization capabilities of the Wolfram Language to illustrate the results.

As a first example, consider the real bivariate function defined here:

f = (x^2 + y^2) Csc[x y];

The singularities of `f` are simply the zeros of its “denominator” `Sin[x y]`:

FunctionSingularities[f, {x, y}]

The following graphic captures the complicated nature of the singularities of `f`:

Plot3D[f, {x, -3, 3}, {y, -3, 3}, ExclusionsStyle -> Directive[Opacity[0.7], Red], PlotPoints -> 50]

Next, multivariate rational functions such as the following are always meromorphic:

{FunctionMeromorphic[(x^2 - y^2 + 1)/(x^2 - y), {x, y}], FunctionMeromorphic[(x^2 - y^2)/(x^2 + y^2 + 1), {x, y}]}

However, unlike functions of a single variable, the singularities of such functions will typically lie along curves. For example, the singularities of the first function (shown in the following plot) lie along the parabola `y = x^2`:

Plot3D[(x^2 - y^2 + 1)/(x^2 - y), {x, -3, 3}, {y, -3, 3}, ClippingStyle -> None, BoxRatios -> 1, PlotPoints -> 100]

On the other hand, plotting the second function with `y` replaced by `I y` shows the blowup of this function along the hyperbolas `y^2 - x^2 = 1`:

Plot3D[(x^2 + (I y)^2)/(x^2 + (I y)^2 + 1), {x, -4, 4}, {y, -4, 4}, ClippingStyle -> None, BoxRatios -> 1]

The `Beta` function provides an interesting example of a meromorphic, multivariate special function:

FunctionMeromorphic[Beta[x, y], {x, y}]

In fact, `Beta` can be considered a multivariate rational function in `Gamma`:

Beta[x, y] // FunctionExpand

The following plot shows the singularities of the function, which arise from the poles of the `Gamma` factors at nonpositive integer values:

Plot3D[Beta[x, y], {x, -2, 2}, {y, -2, 2}, PlotPoints -> 150, ClippingStyle -> None]

Finally, here is an example of a strictly convex function:

f = (x - 3 y - 7)^2 + (x + y - 5)^2 + (2 x - 3 y - 4)^2; FunctionConvexity[f, {x, y}, StrictInequalities -> True]

Such a function has at most one local minimum, which can be found in this case by using `Minimize`:

{mval, mpoint} = Minimize[f, {x, y}]

The following plot shows `f` along with its unique global minimum:

Show[{Plot3D[{f, mval}, {x, -5, 5}, {y, -5, 5}, PlotStyle -> Opacity[0.7]], Graphics3D[{Red, PointSize[Large], Point[{x, y, mval} /. mpoint]}]}]

You can learn more about the new function properties in the latest release by exploring their reference pages, which demonstrate the scope of each function and include applications to geometry, calculus and other areas, in the Wolfram Language & System Documentation Center.

Many of the examples in this blog post were drawn from the reference pages for `FunctionDiscontinuities`, `FunctionConvexity`, etc. I recommend exploring the new documentation for this functionality further, starting from the guide page.

Since Version 1 of the Wolfram Language was released in 1988, a variety of mathematical functions have been added. The function properties introduced in Version 12.2 will serve to unify this vast collection of functions by classifying them based on a few characteristics, such as continuity, analyticity, etc.

Any comments or suggestions about the new functionality are welcome.

It is a pleasure to thank Roger Germundsson for his elegant design of the new functionality; Adam Strzebonski for his outstanding implementation of this feature; Itai Seggev and Michal Strzebonski for their creativity in preparing the documentation examples; and Tim Shedelbower for the striking illustration at the top of the blog post.

In 2020, Melbourne, Australia, had a 112-day lockdown of the entire city to help stop the spread of COVID-19. The wearing of masks was mandatory and we were limited to one hour a day of outside activity. Otherwise, we were stuck in our homes. This gave me lots of time to look into interesting problems I’d been putting off for years.

I was inspired by a YouTube video by David Oranchak, which looked at the Zodiac Killer’s 340-character cipher (Z340), pictured below. This cipher is considered one of the holy grails of cryptography: at the time, it had resisted attack for 50 years, so any attempt to find a solution was truly a moonshot.

In his presentation, David explored the idea that the cipher is both a homophonic substitution cipher and a transposition cipher. Highly efficient programs for solving homophonic substitution ciphers exist, the best of which is AZdecrypt. Experiments suggest that AZdecrypt can solve any pure homophonic substitution cipher of the same length and symbol distribution as the Z340—yet running it on the Z340 directly produces no solution. Perhaps solving the Z340 is a case of finding, by trial and error, the correct transposition, then using AZdecrypt to solve the remaining homophonic substitution cipher.
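To make the terminology concrete, here is an illustrative Python sketch (not part of the original analysis; the tiny key is invented) of a homophonic substitution cipher. Each plaintext letter owns a pool of cipher symbols, and frequent letters get several homophones, which flattens the symbol frequencies that ordinary substitution solvers exploit:

```python
import itertools

# A toy homophonic key: common letters (E, T) get several homophones.
key = {
    "E": ["7", "%", "K"],
    "T": ["2", "@"],
    "H": ["9"],
    "S": ["4"],
}
# Cycle through each letter's pool so repeated letters use different symbols.
pools = {letter: itertools.cycle(symbols) for letter, symbols in key.items()}

def encrypt(plaintext):
    return "".join(next(pools[ch]) for ch in plaintext)

# Every symbol belongs to exactly one letter, so decryption is a plain lookup.
inverse = {sym: letter for letter, symbols in key.items() for sym in symbols}

def decrypt(ciphertext):
    return "".join(inverse[sym] for sym in ciphertext)

ct = encrypt("THESE")
print(ct)           # "2974%" -- the two Es map to two different symbols
print(decrypt(ct))  # "THESE"
```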

David outlined one particular transposition, discovered independently and posted to zodiackillersite.com by user “daikon” and by Jarl van Eycke (the author of AZdecrypt): the “period-19,” which had interesting statistical properties suggesting it was closer to the correct transposition. Just for fun, I decided to plot this transposition using Mathematica:

Partition[Table[1 + Mod[19 i, 340] -> i, {i, 0, 339}] // SparseArray // Normal, 17] // drawTransposition; Magnify[%, 0.5]

However, this looked nothing like daikon and Jarl’s period-19 transposition. To my surprise, it turned out their transposition used a period-18 when wrapping around (periodically) vertically.

While this transposition was visually interesting, it didn’t strike me as a very natural pencil-and-paper construction. It should be noted that Z340 was created in 1969, and therefore was almost certainly constructed using pencil and paper.

I saw a connection between the period-19 transposition and a 1,2-decimation of the cipher: starting from the top-left corner, move one vertical step, then two horizontal steps, wrapping periodically both horizontally and vertically, as if the cipher were wrapped around a torus. This transposition takes similar diagonals to the period-19 transposition:

decimate2D[array_, {n_, m_}] := Module[{d0, d1}, {d0, d1} = Dimensions[array]; (Table[array[[Mod[n i, d0] + 1, Mod[m i, d1] + 1]], {i, 0, d0 d1 - 1}]) /; CoprimeQ[d0, n] && CoprimeQ[d1, m] && CoprimeQ[d0, d1]];
z12decimation = Partition[decimate2D[Partition[Range[0, 340 - 1], 17], {1, 2}], 17];

Running the 1,2-decimation transposition of the Z340 through AZdecrypt did not produce a solution.

One way to investigate the likelihood of finding the correct transposition of a homophonic substitution cipher is by counting the number of repeating bigrams (pairs of symbols). In Mathematica, it’s easy to write up this code for arbitrary n-grams:

countngrams[l_List, n_Integer] := Total[Map[Last, Tally[Partition[l, n, 1]], 1] - 1]
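The count is simply the total number of n-grams minus the number of distinct n-grams; this illustrative Python sketch (not from the original post) cross-checks the idea:

```python
def count_repeats(symbols, n):
    """Number of repeated n-grams: total n-grams minus distinct n-grams."""
    grams = [tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1)]
    return len(grams) - len(set(grams))

# "ABABAB" contains the bigrams AB, BA, AB, BA, AB: five in total, two distinct.
print(count_repeats("ABABAB", 2))  # 3
print(count_repeats("ABCD", 2))    # 0: no bigram repeats
```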

Then we can construct a large number of Z340-like ciphers and compare their bigram count distribution to a large number of random shuffles of the Z340:

Histogram[{randomShuffle340, z340CiphersBigrams}, {1}, ChartBaseStyle -> EdgeForm[Thin], Frame -> True, Axes -> False, FrameLabel -> {Style["repeating bigram count", 12], Style["frequency", 12]}, ChartLegends -> {"random shuffles of Z340", "random Z340-like ciphers"}, ChartStyle -> {RGBColor[0.514366, 0.731746, 0.415503], RGBColor[0.996414, 0.825742, 0.330007]}]

The mean number of repeating bigrams for the random shuffles was 19.8, and for random Z340-like ciphers it was 34.5. The Z340 itself has 25 repeating bigrams, while both the daikon and Jarl period-19 transposition and the 1,2-decimation have 37 repeating bigrams. Thus, statistically, we thought we were on the right track.

After the 1,2-decimation transposition did not produce a solution, we decided to work on a large search through candidate transpositions. It was difficult to find out which transpositions had been tested in the past, so I decided to enumerate all reasonable 1- and 2-step transpositions, sort them by their bigram count and run them through AZdecrypt. Some of these transpositions, for example, included:

I also included all proper one-dimensional and two-dimensional decimation transpositions. For one-dimensional enumerations, we have the following 128 proper decimations:

Select[Range[340], CoprimeQ[#, 340] &]
% // Length

For two-dimensional enumerations, we have the following 128 proper decimations:

Join @@ Outer[List, Select[Range[19], CoprimeQ[#, 20] &], Range[16]]
% // Length

For example, the 3,4-decimation generates the following transposition:

Partition[decimate2D[Partition[Range[0, 339], 17], {3, 4}], 17] // invert // drawTransposition; Magnify[%, 0.5]

Using AZdecrypt, we tested all row-major, column-major, alternating row-column, alternating column-row, inward spiral, outward spiral, diagonal and proper one-dimensional and two-dimensional decimation transpositions. This experiment didn’t yield anything looking like a solution, so we tested all pairs of transpositions. Then we considered testing all 3-tuples of transpositions; however, this would require testing 155,929,364,660,224 candidate ciphers. Naively checking one per second would take about five million years. So we limited our experiments to decimations that would be reasonable to write out by hand, and then only tested candidates with a high bigram count. Once again, this search turned up nothing.
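The arithmetic behind that time estimate is easy to check; an illustrative Python sketch (not from the original post):

```python
# 155,929,364,660,224 candidate ciphers, naively tested at one per second.
candidates = 155_929_364_660_224
seconds_per_year = 60 * 60 * 24 * 365
years = candidates / seconds_per_year
print(f"{years:.2e}")  # about 4.9e6 -- roughly five million years
```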

Perhaps there’s another step we are missing? Given the way the Zodiac Killer’s 408-character cipher (pictured below) was sent in three equally sized sections, we conjectured the Z340 was constructed from a number of distinct segments, then encrypted with a transposition and a homophonic substitution:

We considered splitting the cipher horizontally into two and three segments, vertically into two and three segments, and into combinations of horizontal and vertical segments. For example:

Then we used `Reduce` to compute all possible segment sizes that admit proper two-dimensional decimations:

Reduce[a > 0 && b > 0 && c > 0 && d > 0 && a + b == 17 && c + d == 20 && GCD[a, c] == 1 && GCD[a, d] == 1 && GCD[b, c] == 1 && GCD[b, d] == 1, {a, b, c, d}, Integers]

Given the high bigram count of the 1,2-decimation transposition, we started our search with two-dimensional decimations, with each segment having the same (single) transposition. As we had seen so many times before, this experiment didn’t turn up anything.

The next search for compositions of multiple transpositions and all combinations of transpositions for all sections would be a significantly larger undertaking. So we decided to reanalyse the results from the initial search.

Out of the 650,000 transpositions we tested, one contained a few particularly interesting segments of plaintext:

This was even more interesting as the transposition that produced this candidate decryption was the 1,2-decimation, with the cipher split into three vertical segments (pictured below):

Investigating this result further, David used our 9,9,2-vertical segment, 1,2-decimation transposition and AZdecrypt to crib the phrases “HOPE YOU ARE,” “TRYING TO CATCH ME” and “THE GAS CHAMBER.” With these cribs locked in place, AZdecrypt found the following solution of the first segment:

Eureka! After 51 years, we had decrypted some of the Z340. This was a very special moment. The discovery of the 9,9,2-vertical segment, 1,2-decimation transposition and the power of AZdecrypt for solving homophonic substitution ciphers had produced a partial decrypt of the Z340.

What about the remaining two segments? It was possible that we had found just one correct vertical split of 9 rows and the remaining 11 rows required a different segmentation, or it was possible that a different transposition was needed, or even a different key for the substitution cipher, or any two combinations of these possibilities, or even all three possibilities. Our work was far from over.

David discovered that we could use the key from the first segment on the last segment to produce the following plaintext—without any transposition:

Including some spaces, the cipher’s candidate plaintext gives:

Then, reversing a few words:

This seemed to be a pretty reasonable decryption of the last segment.

What about the second segment? If we crib all the legible text from the first section we get the following decryption:

Some parts of this kind of made sense, but we certainly weren’t there yet. We asked Jarl van Eycke, the author of AZdecrypt, to help us with this segment. He made the following brilliant observations:

- The LIFEIS plaintext (the second segment) is read left to right.
- The LIFEIS plaintext is excluded from the 1,2-decimation transposition and read left to right.
- Numerous spelling mistakes are corrected if “H” on row six is moved to the fourth column.
- Apply the 1,2-decimation transposition, skipping the positions containing “LIFEIS”:

Then the second segment becomes:

The cipher key and transposition that we discovered for the Z340 cipher are given by:

Fifty-one years after the Zodiac Killer mailed this cipher to the *San Francisco Chronicle*, we had a solution. David submitted this solution to the FBI Cryptanalysis and Racketeering Records Unit (CRRU) on Saturday, December 5, 2020. Shortly after, the FBI was able to officially confirm the validity of our solution.

Essentially all my work on the Z340 was done in Mathematica. I used the Spartan high-performance computing cluster at the University of Melbourne to eliminate candidate transpositions using zkdecrypto, and David used AZdecrypt. Otherwise, all the statistical analysis of the Z340 and the creation and analysis of the millions of candidate transpositions was done using Mathematica. The reason for my use of Mathematica is simple: it is by far the most time-efficient language I could use for such a task.

*Sam Blake has a PhD in mathematics from Monash University in Australia and is a research fellow at the University of Melbourne. He’s been an avid Mathematica user since 2004 and worked for Wolfram Research for four years. He is broadly interested in research related to numerical modeling, data science, symbolic computation, steganography and cryptography.*

It is widely believed that students and others spending their 2020 spring break in Florida helped spread COVID-19 far and wide, in the US and elsewhere (see also this study). The picture in 2021 is quite different in several ways. For one, the disease has been in the US for over a year, and an estimated 30% of the population has antibodies from prior exposure. Also, several vaccines are now in use, and close to 20% have received at least one inoculation at the time of this writing. (Since those two groups overlap, the total is believed to be in the ballpark of 45% of the total population.) We now know that children under the age of 16 do not get the disease in large numbers and are not a major vector for its spread. Social distancing practices are in use to varying degrees, and infection numbers are currently falling across the country. This is believed to be due to a combination of increased immunity and non-pharmacological interventions (NPIs) such as social distancing and mask use.

The proverbial elephant in the room is the emergence of several new variants of SARS-CoV-2, all possessing unfortunate characteristics. The new variants seem to be more contagious than the initial forms of the virus. Some appear to do a better job of evading antibodies formed in response to exposure from the prior forms. There is a concern that some may be less amenable to countering by at least some of the existing vaccines (this is the topic of a recent study). In locations where they first appear, they all seem to have become the most prevalent form in the viral pool. These critters are termed variants of concern (VoC) for good reason. I list the ones under scrutiny here, using the genomic designation followed by the location of origin (corresponding to the names commonly given to each) in parentheses. They are B.1.1.7 (England/UK), B.1.351 (South Africa), B.1.427 and B.1.429 (Southern California), P.1 (Manaus, Brazil) and B.1.526 (New York City).

As we are now beginning a new spring break season, it is timely to look at the picture from Florida in terms of these variants. I cannot give a current view, however. My data comes from the GISAID repository of SARS-CoV-2 genome sequences, as it is the largest that is available to all researchers (GenBank from the NCBI is also good, and it can be accessed directly from the curated SARS-CoV-2 sequences in the Wolfram Data Repository). Due to the lag time between specimen collection, laboratory sequencing and eventual upload to GISAID, of necessity this data does not include current or recently collected sequences. Typically, there are very few samples from the past three weeks, and virtually none from the past two. What I will show is the outlook as we went from mid-February to late February.

These variants are characterized by specific (overlapping) sets of mutations. Of course, the gold standard (for purposes of classification) is to detect one of those sets of mutations in a given sequence. For that purpose, one needs to know in careful detail what to look for. And the search can take place within a “noisy” landscape: other mutations can be nearby, some characteristic mutations might be themselves slightly altered by further mutation, etc. I do not have the resources to navigate such shoals. I rely instead on a general-purpose approach to genomic comparison, one that happens to work fairly well for the purpose at hand.

This brings us to a (mercifully?) brief description of the strategy used here to analyze genomic sequences of SARS-CoV-2. Recall that genetic DNA and RNA sequences can be characterized as strings composed of four letters (*A*, *C*, *G*, *T*), using *U* instead of *T* in RNA sequences. These correspond to the four nucleotides adenine, guanine, cytosine and thymine/uracil. Sequences can be compared using direct string-comparison computations. These are the alignment methods. Loosely speaking, techniques that avoid direct string comparison fall into the family known as “alignment-free methods.” The latter often enjoy advantages of better speed and versatility, at the expense of typically delivering somewhat weaker results. My approach begins with something called the chaos game representation (CGR), wherein gene sequences are illustrated as fractal-like square images. This method was pioneered three decades ago by Joel Jeffrey, and has been refined along the way. Many practitioners, myself included, nowadays use a version known as the frequency chaos game representation (FCGR).

Once one has images for genetic sequences, there still remains the question of how to compare them. For the FCGR, there is a pixelation level *k* involved that determines the size of the image: at level *k* there are 4^*k* pixels. My typical setting is to have *k* set to 7 or 8, so that’s either 16K or 64K pixels (here I use the common convention that 1K is 2^10, or 1024). Stated differently, at a pixelation level of 8, the images are 256×256.
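To make the FCGR less abstract, here is an illustrative Python toy (my sketch, not the code behind the post): each k-mer of the sequence lands in one cell of a 2^k×2^k grid, with the corner assignment for A, C, G, T below being one common convention among several in the literature:

```python
# Frequency chaos game representation (FCGR) at pixelation level k.
# Corner bits per nucleotide; this assignment is one of several conventions.
CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def fcgr(sequence, k):
    """Count each k-mer of `sequence` into a 2^k-by-2^k grid of cells."""
    size = 2 ** k
    grid = [[0] * size for _ in range(size)]
    for i in range(len(sequence) - k + 1):
        x = y = 0
        for base in sequence[i:i + k]:  # each base refines the cell by one bit
            bx, by = CORNERS[base]
            x = (x << 1) | bx
            y = (y << 1) | by
        grid[y][x] += 1
    return grid

grid = fcgr("ACGTACGTTTAA", k=2)
print(len(grid))            # 4: a 4x4 grid at level 2
print(sum(map(sum, grid)))  # 11: one count per k-mer (12 - 2 + 1)
```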

I show a couple of these images, cribbed from a prior Wolfram Community post (I forget which one in particular):

Comparing objects with 64K elements is a daunting task, and we do not try to do that. Instead, we use two fairly common methods of dimension reduction to bring these to a more manageable size.

First, we use the discrete cosine transform (DCT) to cull out high-frequency components in the images. This has the effect of coarse-graining the images by removing “noisy” high frequencies. One might ask why we do not simply use coarser-grained FCGR images. The answer is that those simply do not work as well as the finer FCGR images with higher-frequency components removed. In any case, my typical setting will retain a 32×32 array of frequency components from the original 256×256 FCGR image.
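The coarse-graining step can be sketched with a naive DCT. The following Python toy (again illustrative, not the post's actual pipeline) computes a 2D DCT-II directly from its definition and keeps only the low-frequency corner of the coefficient array:

```python
import math

def dct_1d(x):
    """Unnormalized DCT-II: X[u] = sum_n x[n] cos(pi*(2n+1)*u/(2N))."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * (2 * i + 1) * u / (2 * n))
                for i in range(n))
            for u in range(n)]

def dct_2d(a):
    """Separable 2D DCT-II: transform the rows, then the columns."""
    rows = [dct_1d(row) for row in a]
    cols = [dct_1d(col) for col in zip(*rows)]
    return [list(col) for col in zip(*cols)]  # transpose back

def low_pass(a, keep):
    """Retain only the top-left `keep` x `keep` block of low frequencies."""
    coeffs = dct_2d(a)
    return [row[:keep] for row in coeffs[:keep]]

# A constant 8x8 "image" has all its energy in the DC coefficient.
flat = [[1.0] * 8 for _ in range(8)]
kept = low_pass(flat, 2)
off_dc = max(abs(kept[i][j]) for i in range(2) for j in range(2) if (i, j) != (0, 0))
print(round(kept[0][0]))  # 64: the DC term is the sum of all 64 entries
print(off_dc < 1e-9)      # True: a flat image has no higher frequencies
```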

Once we have a set of these images, with dimension reduced using the DCT, the next step is to use a method known as principal components analysis (PCA), which in turn is based on a linear algebra matrix function known as the singular value decomposition (SVD). Here is what the previous pair looks like after the dimension-reduction steps (and a bit of manipulation to recreate them as images):
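Here is the idea behind PCA in its simplest 2D form, as an illustrative toy of mine (not the author's SVD-based pipeline): center the data, form the covariance matrix and take its leading eigenvector, found here by power iteration:

```python
import math

def principal_direction(points, iterations=200):
    """Leading eigenvector of the 2x2 covariance matrix, via power iteration."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Covariance matrix entries (overall scale is irrelevant for the direction).
    cxx = sum(x * x for x, _ in centered)
    cxy = sum(x * y for x, y in centered)
    cyy = sum(y * y for _, y in centered)
    v = (1.0, 0.3)  # arbitrary starting vector
    for _ in range(iterations):
        v = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = math.hypot(*v)
        v = (v[0] / norm, v[1] / norm)
    return v

# Points scattered along the line y = x: the principal axis is the diagonal.
pts = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.2), (3.0, 2.9), (4.0, 4.1)]
vx, vy = principal_direction(pts)
print(round(abs(vx), 2), round(abs(vy), 2))  # both close to 0.71 (the diagonal)
```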

Thus far, this post has been a mass of technical terminology. Readers interested in a detailed exposition (with pictures and algorithmic complexity analysis) can have a look at my article on the subject, published at the end of 2019. On the same day (December 30), Reuters published an article about an unusual outbreak of a pneumonia-like disease in Wuhan, China. So I had a set of tools and, within a few short weeks, a viral outbreak generating data on which to apply them. Something much tamer would have been preferred, but we don’t always get to choose.

I should remark that there is a growing body of literature on the topic of using FCGR-based methods to classify the SARS-CoV-2 genome. I have done this as well in a series of posts on Wolfram Community (see “Genome Analysis and the SARS-CoV-2,” “From Sequenced SARS-CoV-2 Genomes to a Phylogenetic Tree,” “Analyzing the Spread of SARS-CoV-2 Variants in California” and “Analyzing the Spread of SARS-CoV-2 Variants in Florida”). But I am by no means the only person doing such things. What I have done that is perhaps novel is to show that these methods can detect differences at the level of variants. Published papers show how the genome fits within the family of sarbecoronaviruses, for example, but those that use the FCGR do not (as far as I am aware) attempt to compare and/or cluster different variants of SARS-CoV-2.

For reference, the sequences I downloaded from GISAID were sets of the five variants that were collected during certain time periods in 2021. The idea was to be certain they were fairly recent (and, indeed, one of these variants was only identified in 2021), and to use time periods sufficient to get in the ballpark of 200–400 samples for each. I also downloaded three sets collected in Florida during three successive time periods in February. GISAID requires that proper attribution to collecting and sequencing laboratories and researchers be provided for genetic sequences used from that site (and, rather helpfully, they have an automated way of obtaining this data as formatted PDF files). I include cells with this content in the notebook version of this post.

First, I will show a picture of the several variants. Each point represents a particular genomic sequence. The data points are obtained by yet another round of dimension reduction, this time to 3D using a method known as multidimensional scaling (MDS):

The main point here is this. Even after reduction from genomic sequences (comprised of nearly 30 thousand nucleotides, in the case of SARS-CoV-2) to 3D vectors, there remains sufficient information content to discern very distinct clustering of the several variants. The seeming exception (the red/blue intermixing) is from the two close relatives that together comprise the California variant.

There are a handful of outliers as well. This is not terribly surprising. For one, noise mutations that are not essential to the variant status will have some influence on where these appear in the 3D image. Also, it is not a certainty that every reference sequence is perfectly categorized. And it is plausible that some might have been classified without having all mutations specific to their variant. This can happen, for example, when classification is done using genetic marker tests.

I should mention that there are a number of useful ways to reduce dimensions. Indeed, several are built into the Mathematica function `DimensionReduce`. One such method, called latent semantic analysis (LSA), is a cousin to MDS. But I prefer the latter, and while it is not (at this time) available via `DimensionReduce`, there is a version of `MultidimensionalScaling` readily available in the Wolfram Function Repository.
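As a quick illustration of what `DimensionReduce` does (a minimal sketch with made-up points, not the genomic data discussed here), the built-in LSA method mentioned above can be applied directly:

```wolfram
(* Reduce four 4D vectors to 2D using latent semantic analysis;
   the input data is invented purely for illustration *)
DimensionReduce[
 {{1., 2., 3., 4.}, {2., 4., 6., 8.}, {9., 7., 5., 3.}, {8., 6., 4., 2.}},
 2, Method -> "LatentSemanticAnalysis"]
```

The same call with genomic feature vectors and a target dimension of 3 produces points like those plotted above.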

To indicate which sequences are likely to belong to one of the variant classes, I will create what are called phylogenetic trees, which are dendrograms based on measures of genetic proximity. What’s a dendrogram? Have a look at the documentation for the function `Dendrogram`. Be warned that the remainder of this post is likely to be monotonous, since it is mostly just these tree graphs. In my experience, monotony is the rule when it comes to data analysis. But one can glean useful information, and that’s what actually matters.
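For readers who have not met dendrograms before, here is a toy example (plain numbers standing in for genomes): values that are numerically close merge at short branch lengths, just as closely related sequences do in the trees below.

```wolfram
(* Hierarchical clustering of a simple list; the two tight groups
   {1, 2, 3} and {10, 11, 12} appear as two subtrees *)
Dendrogram[{1, 2, 3, 10, 11, 12}]
```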

First, I show another MDS plot. This time, I decimated the variant sequences by a factor of 3 to avoid clutter (and this will be all the more important when we look at the trees). I chose a viewing perspective that gives a reasonably accurate idea of which sequences from Florida appear to cluster among the variants. One sees several among the B.1.1.7 (gray) sequences, and likewise for the blue and red clusters. This also appears to happen with the purple and pink clusters. We will see that those last two are a bit misleading. What happens is that three dimensions are not always sufficient to separate unrelated sequences. (One could, of course, use two three-dimensional images that emphasize non-overlapping aspects, but that amounts to using six dimensions and is still difficult to visualize.)

A phylogenetic tree plot can give a better idea of whether sequences are related. In addition to proximity of placement, there is the length of the branches that separate a given pair from their closest common branching point. We make use of this in the following tree. Here, we decimate the variants by a factor of 6 for readability (I have pored over trees that use less decimation, but the clustering of the Florida sequences among the variants and the relative branch lengths do not change by much):

There are but two sequences that cluster with the pink P.1 variant. The one that might belong with the purple B.1.526 variant really has too long a branch to consider it likely to be that variant (although this is a heuristic, and I would not state it as a certainty). My own estimate is that 84 are plausibly the B.1.1.7 variant and 35 are one or the other class of the California variant. Overall, this amounts to around 29% of the Florida sequences collected during February 9–12.

We move up a few days, analyzing SARS-CoV-2 sequences collected in Florida during February 13 to 17:

Eyeballing it, I see one each of the P.1 and B.1.526 variants, 32 of the B.1.427/B.1.429 variant and 56 of the B.1.1.7 variant. There are 292 in total in our test set, so this time around 31% of the total comes from the variants.
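That percentage is just the simple ratio of variant counts to the set size, computed from the numbers above:

```wolfram
(* 1 P.1 + 1 B.1.526 + 32 B.1.427/B.1.429 + 56 B.1.1.7,
   out of 292 sequences total *)
N[(1 + 1 + 32 + 56)/292]
(* ≈ 0.308, i.e. around 31% *)
```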

The most recent set, collected on or after February 18, comprises 270 sequences. All but 17 were collected no later than February 25, so this set is from more than three weeks ago at the time of this writing (March 16):

It appears that there are now three and five sequences from the P.1 and B.1.526 variants, respectively. At least 26 appear to be from the B.1.427/B.1.429 family. And there are now 63 from the B.1.1.7 variant. That amounts to 36% of the total coming from the variants.

We have seen an upward trend during the month of February in the percentages of the variants present in Florida. This is not terribly unexpected; epidemiologists had predicted this would happen. At present, based on data such as genetic marker tests (which are more recent than what is available from GISAID), the B.1.1.7 variant is said to account for 36% of all current cases in Florida and is expected to hit 50% in the near future. What I have shown can be viewed as corroboration, with the caveat that some of the other variants (in particular the one from California) are also on the rise. Moreover, recent news claims the total number of new confirmed cases per day in Florida has stopped declining and has perhaps even risen modestly (it is currently around five thousand). This is not a great scenario. On the brighter side, spring break is bringing a smaller influx to Florida than it did last year, and this time a percentage of the population (including many of the most vulnerable) will have received vaccinations prior to the post-break exodus.

Wolfram Language users make up an incredibly diverse community. People from all around the globe use Wolfram technologies in a variety of fields and industries. High-school and college students begin to use the Wolfram Language in all types of classes as well as for their own projects, and educators at all institutional levels use Wolfram products to prepare for and teach courses—at the world’s top 200 universities and beyond.

We’ve rounded up some of our users’ recently published books, and were honored to speak with two authors about their projects.

*Beginning Mathematica and Wolfram for Data Science: Applications in Data Analysis, Machine Learning, and Neural Networks* by Jalil Villalobos Alva introduces readers to the Wolfram Language, its syntax and the structure of Mathematica. Published by Apress on February 2, 2021, the book demonstrates how to use Mathematica for data management and mathematical computations, as well as how to use notebooks as a standard format (which also serves to create detailed reports of the processes carried out). This book introduces readers to the Wolfram Data Repository and Mathematica’s machine learning functionality; shows how to import, export and visualize data; and demonstrates how to create datasets. *Beginning Mathematica and Wolfram for Data Science* is available for purchase on Amazon.

Villalobos Alva is an avid Wolfram Language programmer and Mathematica user who graduated with a degree in engineering physics from Universidad Iberoamericana in Mexico City. His research background covers quantum physics, bioinformatics, proteomics and protein design. Villalobos Alva’s academic interests include quantum technology, machine learning, stochastic processes and space engineering.

We caught up with Villalobos Alva to ask him about his new book and his experience with Mathematica.

Q: How did you first encounter Mathematica and the Wolfram Language?

A: The first time I used Mathematica was during my undergraduate degree in my calculus course. The version of Mathematica I used was Version 8—I remember that because the splash screen always appeared after running the program. I remember that the course was taught in a small computer lab, where the workstation had an old HP computer and a large worktable. Since then, I have been an active user of Mathematica and the Wolfram Language, with the recent collaboration and participation in the Wolfram Summer School, class of 2019.

Q: How does *Beginning Mathematica and Wolfram for Data Science* fit into the literature on data science?

A: The topics covered here are valuable concepts from data science as it is used in data management and many other fields, with the book’s specific contribution being to show how key data science terms and concepts are used inside Mathematica. Along the way, the book’s contribution is not limited to data science: it also offers an introductory perspective on the framework of neural networks with Mathematica.

Q: What was your writing process like?

A: I think that doing a project like a book is a challenge that you set yourself to know your limits. Also, I deeply believe that it is a self-learning process, and it is a way of knowing what my intellectual abilities can do. Of course, writing a book is a long, time-consuming process. At first, it seemed like a daunting task that I needed to find time for in any part of my daily life, and this is one of the amazing things about being able to write a book. There are many ways to organize yourself, but in my experience, it teaches you to organize your time seriously. It encouraged me to be more simple, focused and aware of the details, which together build a different perspective and make you more than you are now.

Q: Who do you see as the primary audience for *Beginning Mathematica and Wolfram for Data Science*?

A: This book is aimed at an audience that is interested in learning about data science and, in turn, its applications with Mathematica and the Wolfram Language. In general, it is not necessary to know the language beforehand, though prior knowledge of Wolfram Language syntax would be an advantage. This book will also be useful for coders and hobbyists, as well as for people in academia working on research and scholarly projects, including students, researchers, professors and many others.

Q: How do you apply the topics of this book in your life?

A: The topics are important since they present how to use the Wolfram Language for data science from a theoretical and practical perspective, including a description of processes that are carried out throughout the day: coming up with a problem statement, thinking of new ideas and putting them into practice. Part of my day also demands a lot of reading, keeping up with the state of the art of the topics being discussed. Besides, the presentation and communication of results are essential, since a line of contact with other colleagues is often established where the form of communication is crucial to sharing and presenting information.

Dr. Joseph W. Goodman, author of *Fourier Transforms Using Mathematica*, is an engineer and physicist who attended Harvard as an undergrad and received his master’s degree and PhD in electrical engineering from Stanford. He taught electrical engineering at Stanford, and throughout his career focused on the field of optics. Dr. Goodman has retired from teaching but remains a professor emeritus and has been primarily spending time writing.

The title we’re featuring came together with two goals in mind: introducing the reader to the properties of Fourier transforms and their uses, and showing the reader the basics of using Mathematica and demonstrating its use in Fourier analysis. Unlike many other introductory treatments of the Fourier transform, this book focuses from the start on both one-dimensional and two-dimensional transforms—the latter of which play an important role in optics and digital image processing, as well as in many other applications.

*Fourier Transforms Using Mathematica* is now available for purchase on Amazon. We were honored to talk with Dr. Goodman about his experience using Mathematica and the process of writing his new book.

Q: How did you first encounter Mathematica and the Wolfram Language?

A: I first encountered Mathematica when I set about writing the second edition of my book, entitled *Introduction to Fourier Optics*. I used it extensively to create graphical illustrations of mathematical results. I have used it for a similar purpose in all of my subsequent books.

Q: How does this book fit into the currently available literature on Fourier transforms, and into Mathematica literature in general?

A: The two most popular books on Fourier transforms are one by Ron Bracewell and one by A. Papoulis. Both were written in the early sixties, as I recall, and predated the availability of powerful home computers. Mine follows Ron Bracewell’s approach (I learned Fourier transforms from his class at Stanford) with the addition of a strong emphasis on Mathematica code for illustrating the mathematical properties of such transforms. The real purpose of the book is to simultaneously teach Fourier transforms and Mathematica together, with the proviso that the Mathematica code is intentionally pretty simple and straightforward. I’m unaware of any other books on Fourier transforms that include code of any kind.

Author Márcio Rosa developed this book over 15 years of pursuing the best pedagogical strategies to use as a teacher. The goal of the book is to teach students the use of mathematics software in their studies. As computer software becomes more sophisticated, the human element of the work becomes more focused on formulating questions, interpreting results, understanding ideas and supervising the work of the computer—making mathematics more enjoyable and fun. The book takes advantage of colors, curves, surfaces, transparencies and animations to develop students’ ability to perform calculations in Wolfram Notebooks. Using the computational power and AI built into Mathematica, readers are able to immerse themselves in the content in a way that would not be available with traditional pencil-and-paper workbooks. This book aims to show students that mathematics can be colorful, artistic and fun.

When author R. Gökhan Türeci noticed a lack of Turkish-language resources in the increasingly important subjects of computer programming and numerical computing, he decided to write his own. The book is designed to guide readers and researchers who are interested in learning to use a computer to perform physics and engineering calculations, but don’t know where to start. *Fizik ve mühendislikte Wolfram Mathematica* draws many of its examples from physics, and is geared toward students studying basic science and engineering.

Authors Tom G. MacKay and Akhlesh Lakhtakia provide an overview of state-of-the-art analytical homogenization formalisms used to estimate the effective electromagnetic properties of complex composite materials. This book introduces the reader to homogenization and progresses to cover constitutive and depolarization dyadics as well as homogenization formalisms for linear and nonlinear materials. It also dives into these topics’ applications with multiple examples using Mathematica code. To enable readers to readily perform their own Mathematica calculations, the book includes explicit formulas for the homogenization of isotropic, anisotropic and bianisotropic composite materials, as well as numerical data for a wide range of representative homogenized composite materials. This text is a valuable reference for PhD students and researchers working on the electromagnetic theory of complex composite materials.

Be sure to sign up for updates on Stephen Wolfram’s forthcoming book, *Combinators: A Centennial View*.

Combinators have inspired ideas about computation ever since they were first invented in 1920, and in this innovative book, Wolfram provides a modern view of combinators and their significance. Informed by his work on the computational universe of possible programs and on computational language design, Wolfram explains new and existing ideas about combinators with unique clarity and stunning visualizations, as well as provides insights on their historical connections and the curious story of Moses Schönfinkel, inventor of combinators. Though invented well before Turing machines, combinators have often been viewed as an inaccessibly abstract approach to computation. This book brings them to life as never before in a thought-provoking and broadly accessible exposition of interest across mathematics and computer science, as well as to those concerned with the foundations of formal and computational thinking, and with the history of ideas.

If you’re interested in finding more books that use the Wolfram Language, check out the full collection on the Wolfram Books site. If you’re working on a book about Mathematica or the Wolfram Language, contact us to find out more about our options for author support and to have your book featured in an upcoming blog post!

`Association` has become one of the most commonly used symbols for developers working with any kind of data since it was introduced in Version 10 of the Wolfram Language in 2014. While there are many built-in tools for working with an `Association`, developers also made many tools themselves as they modernized their code. Now many of those tools have found their way into the Wolfram Function Repository. Here I’ll highlight some of my favorites and show how they compare to built-in Wolfram Language functions.

An `Association` stores key-value data. There are many Wolfram Language functions for creating an `Association`, including `AssociationMap`, `AssociationThread`, `Counts` and `GroupBy`. The Function Repository also includes several functions for creating new associations.
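To make the built-in creation functions concrete before turning to the repository, here are two one-liners (toy data, invented for illustration):

```wolfram
(* Thread parallel lists of keys and values into an Association *)
AssociationThread[{"a", "b", "c"} -> {1, 2, 3}]
(* <|"a" -> 1, "b" -> 2, "c" -> 3|> *)

(* Group list elements by the result of a function, here parity *)
GroupBy[{1, 2, 3, 4, 5}, EvenQ]
(* <|False -> {1, 3, 5}, True -> {2, 4}|> *)
```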

You can use `Association` directly on a list of rules to convert it, but it only works at the top level. `ToAssociations` converts lists of rules deep in the expression as well:

Association@{"Beatles" -> {"Drums" -> "Ringo", "Guitar" -> "George"}}

ResourceFunction["ToAssociations"]@{"Beatles" -> {"Drums" -> "Ringo", "Guitar" -> "George"}}

`AssociationMap` creates an `Association` by mapping a single function over a list, using elements of the list as keys and the outputs as values. `AssociationThrough` does the opposite. It maps several functions over a single value:

AssociationMap[Sqrt, {1, 4, 9, 16}]

ResourceFunction["AssociationThrough"][{Sqrt, FactorInteger, Exp}, 16]

`SparseArray` is an efficient way to store sparse numeric arrays. The values are indexed numerically. `SparseAssociation` generalizes that concept to `Association` so that values are indexed with keys:

rareanimalsightings = ResourceFunction["SparseAssociation"][{{"April", "Black Bear"} -> 1, {"July", "Bald Eagle"} -> 4, {"August", "Bald Eagle"} -> 1, {"November", "Moose"} -> 1}, Automatic]

It works like an `Association`:

rareanimalsightings["April"]["Black Bear"]

And it gives zero as the value for keys that were not explicitly given:

rareanimalsightings["July"]["Black Bear"]

Because those values are not stored in the `SparseAssociation`, it is smaller when there are many default values:

sparseAssoc = ResourceFunction["SparseAssociation"][RandomInteger[1, {100, 200}], {Range[100], Range[200]}, 0]

ByteCount[sparseAssoc]

ByteCount@Normal[sparseAssoc]

Associations can be modified using many standard Wolfram Language symbols like `Map`, `KeyMap`, `MapAt` and `Set`. However, the number of functions that data scientists want for manipulating their data is infinite, so they have created some of their own. Here are some that have been published in the Function Repository.

While `MapAt` can apply a single function to the values of specific keys in an `Association`, `MapAtKey` can apply different functions to different keys:

MapAt[CubeRoot, <|"Odds" -> {-27, -8, -1}, "Evens" -> {16, 9, 4}|>, "Odds"]

ResourceFunction["MapAtKey"][{"Odds" -> CubeRoot, "Evens" -> Sqrt}, <|"Odds" -> {-27, -8, -1}, "Evens" -> {16, 9, 4}|>]

`KeyCombine` is a combination of `Merge` and `KeyMap` that allows you to combine elements of an `Association` based on their keys:

data = <|Entity["City", {"Chicago", "Illinois", "UnitedStates"}] -> "Windy City", Entity["City", {"LosAngeles", "California", "UnitedStates"}] -> "City of Angels", Entity["City", {"NewOrleans", "Louisiana", "UnitedStates"}] -> "The Big Easy"|>;

Using `KeyMap` causes values to be lost:

KeyMap[#["TimeZone"] &, data]

`KeyCombine` preserves all values in a list:

ResourceFunction["KeyCombine"][#["TimeZone"] &, data]

Manually editing the content of an `Association` in a notebook can be challenging. `AssociationEditor` provides a convenient GUI form for editing content. I modified Bob’s value in the following example and used the print button to print out the updated `Association`:

ResourceFunction["AssociationEditor"][<|"Rick" -> 159, "Paco" -> 90, "Bob" -> 90, "Michael" -> 74|>]

<|Rick → 159, Paco → 90, Bob → 91, Michael → 74|>

In an `Association`, the keys can be any expression, including lists. A side effect of that feature is that a list usually cannot be used to specify a location deep inside a nested `Association`. Several Function Repository functions have been published specifically to help work with nested associations.
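To see why, note that a list is itself a perfectly valid single key, so a bare list cannot also serve as a path of successive keys (a toy example, not from the original post):

```wolfram
(* Here the list {"Bob", "Age"} is one key, not a nested lookup path *)
assoc = <|{"Bob", "Age"} -> 37|>;
assoc[{"Bob", "Age"}]
(* 37 *)
```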

`NestedLookup` treats lists as an index deep within nested associations:

ResourceFunction["NestedLookup"][<|"Bob" -> <|"Age" -> 37, "Location" -> <|"City" -> Entity["City", {"SaintLouis", "Missouri", "UnitedStates"}], "Country" -> Entity["Country", "UnitedStates"]|>|>|>, {"Bob", "Location", "City"}]

It also handles missing values at any level:

ResourceFunction["NestedLookup"][<|"Bob" -> <|"Age" -> 37, "Location" -> <|"City" -> Entity["City", {"SaintLouis", "Missouri", "UnitedStates"}], "Country" -> Entity["Country", "UnitedStates"]|>|>|>, {"Bob", "Weight"}, Missing["None of your business"]]

`NestedAssociate` adds or modifies values deep in a nested `Association`:

ResourceFunction["NestedAssociate"][<|"Bob" -> <|"Age" -> 37, "Location" -> <|"City" -> Entity["City", {"SaintLouis", "Missouri", "UnitedStates"}], "Country" -> Entity["Country", "UnitedStates"]|>|>|>, {"Bob", "Location", "State"} -> Entity["AdministrativeDivision", {"Missouri", "UnitedStates"}]]

`NestedKeyDrop` removes key-value pairs from deep in a nested `Association`:

ResourceFunction["NestedKeyDrop"][<|"Bob" -> <|"Age" -> 37, "Location" -> <|"City" -> Entity["City", {"SaintLouis", "Missouri", "UnitedStates"}], "Country" -> Entity["Country", "UnitedStates"]|>|>|>, {"Bob", "Location", "City"}]

`AssociationMapAt` maps a function deep in a nested `Association`:

ResourceFunction["AssociationMapAt"][GeoGraphics[#["Polygon"]] &, <|"Bob" -> <|"Age" -> 37, "Location" -> <|"City" -> Entity["City", {"SaintLouis", "Missouri", "UnitedStates"}], "Country" -> Entity["Country", "UnitedStates"]|>|>|>, {"Bob", "Location", "City"}]

`AssociationKeyFlatten` converts a nested `Association` into a flat `Association`:

ResourceFunction["AssociationKeyFlatten"][<|"Bob" -> <|"Age" -> 37, "Location" -> <|"City" -> Entity["City", {"SaintLouis", "Missouri", "UnitedStates"}], "Country" -> Entity["Country", "UnitedStates"]|>|>|>]

`AssociationKeyDeflatten` does the opposite operation*. It creates a nested association from a flat `Association` with lists as keys:

ResourceFunction["AssociationKeyDeflatten"][<|{"Bob", "Age"} -> 37, {"Bob", "Location", "City"} -> Entity["City", {"SaintLouis", "Missouri", "UnitedStates"}], {"Bob", "Location", "Country"} -> Entity["Country", "UnitedStates"]|>]

*\* We thought the opposite of “flatten” might be “sharpen,” but we are reserving the name `AssociationSharpen` for this extreme data science function deployed as a resource function in my cloud account.*

The function `Counts` can be thought of as a modernization of the older function `Tally`:

Tally[{1, 2, 3, 4, 1, 2, 3, 1, 2, 1}]

Counts[{1, 2, 3, 4, 1, 2, 3, 1, 2, 1}]

Similarly, other functions that predate `Association` have been modernized in the Function Repository. Now there are two ways to `Reap` what you `Sow`:

Reap[Sow[Entity["Plant", "Genus:Triticum"], "Seeds"]; "Fuel the tractor"; Sow[Entity["Plant", "Subspecies:ZeaMaysMays"], "Seeds"]; Sow["Division", "Politics"]; "Sleep"]

ResourceFunction["ReapAssociation"][
 Sow[Entity["Plant", "Genus:Triticum"], "Seeds"];
 "Fuel the tractor";
 Sow[Entity["Plant", "Subspecies:ZeaMaysMays"], "Seeds"];
 Sow["Division", "Politics"];
 "Sleep"]

`BinCounts` splits data into bins and gives you the number of items in each bin, but does not return the actual bins. `BinCountAssociation` uses the keys of an `Association` to include that information in the results:

BinCounts[{1, 3, 2, 1, 4, 5, 6, 2}, {0, 10, 1}]

ResourceFunction["BinCountAssociation"][{1, 3, 2, 1, 4, 5, 6, 2}, {0, 10, 1}]
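The idea behind `BinCountAssociation` translates directly to other languages: key each count by its bin interval. A minimal Python sketch of that idea (illustrative only, not the repository implementation; fixed-width, left-closed/right-open bins):

```python
def bin_count_association(data, lo, hi, width):
    """Count data points per bin, keyed by the bin's (left, right) interval,
    analogous to BinCounts[data, {lo, hi, width}] but keeping the bins."""
    counts = {}
    edge = lo
    while edge < hi:
        interval = (edge, edge + width)
        counts[interval] = sum(1 for x in data if interval[0] <= x < interval[1])
        edge += width
    return counts

counts = bin_count_association([1, 3, 2, 1, 4, 5, 6, 2], 0, 10, 1)
# e.g. the bin (1, 2) holds the two 1s in the data
```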

A final, important aspect of the Wolfram Language and the Wolfram Function Repository is that there is consistency and interoperability among all the functions. Using a function created by one developer in the Function Repository does not mean that you have to build converters or translators to use other functions in the repository or in the Wolfram Language. To illustrate, here is a big pointless blob of `Association`-based operations using most of the functions discussed previously (and more) together seamlessly. While the result is meaningless, the immediate compatibility warms the heart:

ResourceFunction["AssociationEditor"]@
 ResourceFunction["KeySortLike"][
  ResourceFunction["AssociationKeyFlatten"]@
   ResourceFunction["AssociationMapAt"][
    ResourceFunction["KeyCombine"][Mod[#, 3] &, #] &,
    ResourceFunction["MapAtKey"][{"Sown" -> ResourceFunction["ToAssociations"]},
     ResourceFunction["ReapAssociation"]@
      Table[
       Sow[i -> ResourceFunction["KeyCombine"][
          ResourceFunction["HashHue"],
          ResourceFunction["BinCountAssociation"][
           Values@ResourceFunction["AssociationThrough"][{Sqrt, Exp}, i]]],
        "Factors"], {i, 1, 40}]],
    {"Sown", "Factors"}],
  {{"Sown", "Factors", 1}, "Result"}]

The functions mentioned here are only some of the `Association` tools in the Function Repository, which in turn are only a small piece of the consistently growing collection in the full repository. Every week, new functions are added that both expand the Wolfram Language and fill in some of its gaps. There is no longer any need to wait for Wolfram Language version releases to see the latest and greatest new functions: you can get them any time you want right here.

Explore more user contributions like the functions mentioned here or submit your own computational creations at the Wolfram Function Repository.

As CEO of Wolfram Blockchain Labs (WBL), I think one of the most exciting parts of my job is collaborating with other leaders in the blockchain space to expand tools for developers and business use cases. For several years now, we’ve been adding a steady stream of blockchain functionality into the Wolfram Language to enable development of knowledge-based distributed applications and computational contracts. You may have noticed the growing number of popular blockchains (ARK, Bitcoin, bloxberg, Cardano, Ethereum, MultiChain…) partnering with us and integrating into our platform. It’s already led to some cool explorations, and we have a lot more in the pipeline.

Today, WBL is happy to announce its latest such collaboration, a partnership with TQ Tezos. That includes Tezos blockchain integration in the Wolfram Language, which is great news for smart contract developers and enthusiasts. But that’s just the beginning. Our long-term plans include a lot of big ideas that we think everyone will be excited about!

A Tezos-Wolfram partnership made sense from the start. We’ve worked to build up Wolfram Language support for third-generation blockchains, making Tezos integration an easy and intuitive part of our system. WBL has long had a particular focus on enabling smart contracts, and WBL and TQ Tezos have developed an oracle to provide Wolfram|Alpha data to Tezos smart contract developers.

One of the most exciting aspects of Tezos is its friendliness to formal verification. TQ Tezos has used the Mi-Cho-Coq framework to create a high degree of assurance that the oracle contract exhibits the same, predictable behavior every time it is called.

We also share the broader goal of democratizing technology. Tezos uniquely uses on-chain mechanisms for governance, which means that the people running nodes are the ones making decisions. Anyone interacting with the blockchain gets a vote—like citizens in a worldwide digital community. Wolfram has a history of making knowledge and computation widely available with Wolfram|Alpha, the Wolfram Cloud and the Wolfram Resource System.

So, what could someone do right now with Wolfram Language access to the Tezos blockchain? To start, you can use `BlockchainBlockData` to retrieve data from Tezos (in this case, from the most recent block):

BlockchainBlockData[Last, BlockchainBase -> "Tezos"] // Dataset

To specify an address on the blockchain, use `BlockchainKeyEncode` with a generated `PrivateKey` (based on one of the elliptic curves used in the Tezos ecosystem):

tezosAddress = BlockchainKeyEncode[
  PrivateKey[<|"Type" -> "EdwardsCurve", "CurveName" -> "ed25519",
    "PrivateByteArray" -> ByteArray[{230, 223, 162, 157, 26, 105, 184, 169, 145, 106, 12, 204, 35, 71, 36, 93, 34, 9, 37, 7, 155, 95, 209, 22, 37, 209, 4, 254, 62, 16, 142, 88}],
    "PublicByteArray" -> ByteArray[{137, 139, 124, 252, 66, 254, 126, 158, 120, 84, 94, 58, 250, 94, 109, 166, 174, 191, 195, 209, 182, 183, 7, 72, 38, 21, 112, 86, 73, 12, 175, 204}],
    "PublicCurvePoint" -> {1578358114755036683654284600653890048965545127964638912958989709619011472361, 34685059526799696159907748739033807863680456356211385885162153197588300204937}|>],
  "Address", BlockchainBase -> {"Tezos", "Testnet"}]

Given this address, `BlockchainAddressData` will provide a range of information:

BlockchainAddressData[tezosAddress, BlockchainBase -> {"Tezos", "Testnet"}] // Dataset

Here’s the current balance at that address:

balance = %["Balance"]

You can also create smart contracts to put your own data on the Tezos blockchain. To achieve that, you’d start by creating a transaction operation with `BlockchainTransaction`:

tx = BlockchainTransaction[<|
   "BlockchainBase" -> {"Tezos", "Testnet"},
   "Type" -> "Origination",
   "Sender" -> tezosAddress,
   "Balance" -> 100,
   "Contract" -> <|
     "Storage" -> <|"string" -> "hello"|>,
     "Code" -> {
       <|"prim" -> "parameter", "args" -> {<|"prim" -> "string"|>}|>,
       <|"prim" -> "storage", "args" -> {<|"prim" -> "string"|>}|>,
       <|"prim" -> "code", "args" -> {{<|"prim" -> "CAR"|>,
          <|"prim" -> "NIL", "args" -> {<|"prim" -> "operation"|>}|>,
          <|"prim" -> "PAIR"|>}}|>}|>|>]

Then you can sign the operation by passing the `BlockchainTransaction` object and the appropriate `PrivateKey` to `BlockchainTransactionSign`:

txSigned = BlockchainTransactionSign[tx,
  PrivateKey[<|"Type" -> "EdwardsCurve", "CurveName" -> "ed25519",
    "PrivateByteArray" -> ByteArray[{230, 223, 162, 157, 26, 105, 184, 169, 145, 106, 12, 204, 35, 71, 36, 93, 34, 9, 37, 7, 155, 95, 209, 22, 37, 209, 4, 254, 62, 16, 142, 88}],
    "PublicByteArray" -> ByteArray[{137, 139, 124, 252, 66, 254, 126, 158, 120, 84, 94, 58, 250, 94, 109, 166, 174, 191, 195, 209, 182, 183, 7, 72, 38, 21, 112, 86, 73, 12, 175, 204}],
    "PublicCurvePoint" -> {1578358114755036683654284600653890048965545127964638912958989709619011472361, 34685059526799696159907748739033807863680456356211385885162153197588300204937}|>]]

Finally, the contract is submitted to the blockchain using `BlockchainTransactionSubmit`:

submitted = BlockchainTransactionSubmit[txSigned]

And with just a few simple lines of code, you’ve deployed a smart contract to the Tezos ecosystem. Once the transaction operation is on the blockchain (which can sometimes take a few minutes), you can verify that the balance has been updated:

newBalance = BlockchainAddressData[tezosAddress, "Balance", BlockchainBase -> {"Tezos", "Testnet"}]

The total transaction amount is equal to the 100 mutez payment plus a 76,232 mutez fee:

balance - newBalance

The details of the fees can be inspected by using `BlockchainTransactionData` and passing the transaction ID of the submitted `BlockchainTransaction`:

BlockchainTransactionData[submitted["TransactionID"], "Fees", BlockchainBase -> {"Tezos", "Testnet"}] // Dataset

This powerful simplicity is a core advantage of any Wolfram Language integration: you don’t have to be a developer to get serious work done.

The ultimate result will be an expansive toolkit that makes Tezos development available to everyone, regardless of programming skill. We have plans to extend these capabilities in several key areas within the Tezos ecosystem: analytics, computational facts delivery and blockchain educational information. WBL is also exploring how we might begin to bake on Tezos, our first tentative foray into how staking works on a blockchain network.

This and our other blockchain partnerships help move WBL toward the larger goal of bringing computational reform to the financial industry: smart contracts, symbolic data and smart reporting. We’re always looking for more ways to expand what we are doing with blockchains and other decentralized technologies. Keep an eye out for our next big announcement!

Get your free Wolfram|One trial to start writing your own smart contracts, or connect with WBL to find out about integrating your blockchain into the Wolfram Language.

I enjoy turning mathematical concepts into wearable pieces of art. That’s the idea behind my business, Hanusa Design. I make unique products that feature striking designs inspired by the beauty and precision of mathematics. These pieces are created using the range of functionality in the Wolfram Language. Just in time for Valentine’s Day, we recently launched Spikey earrings in the Wolfram Store, which are available in rose gold–plated brass and red nylon. In this blog, I’ll give a look under the hood and discuss how an idea becomes a product through the Wolfram Language.

First, we’ll go through a tutorial of how to create a pair of mathematical earrings. In the second half of this post, I’ll share the mathematics and Wolfram Language commands behind some of my favorite designs.

Every item at Hanusa Design has been 3D modeled in Wolfram Mathematica and then 3D printed to bring the jewelry to life. At the heart of a number of designs is polyhedral geometry, easily accessible in the Wolfram Language through the `PolyhedronData` command. For example, here is a dodecahedron:

Show[PolyhedronData["Dodecahedron"], Boxed -> False]

`PolyhedronData` also puts at your fingertips the coordinates of the vertices, edges and faces of the polyhedron, which makes creating a wireframe version of the dodecahedron a snap:

vertices = PolyhedronData["Dodecahedron", "VertexCoordinates"];
edges = Map[vertices[[#]] &, PolyhedronData["Dodecahedron", "EdgeIndices"]];
wireframe = Graphics3D[{Map[Tube[#, .1] &, edges], Map[Sphere[#, .1] &, edges]}, Boxed -> False]

In the previous code, `"EdgeIndices"` refers to vertex pairs that make up the set of edges. We require both the `Tube` command to create the edges of the wireframe and the `Sphere` command to complete the corners in order for the result to be 3D printable. The model is ready to be exported to one of the formats that 3D printers need:

Export[NotebookDirectory[] <> "dodecahedron.stl", wireframe]

The resulting file can now be 3D printed. However, at this point we should probably pay attention to the size and finishing of the object. I have found that reimporting an STL file is one of the best ways to ensure that Mathematica is working with a well-behaved object:

imported = Import[NotebookDirectory[] <> "dodecahedron.stl"]

The imported model is a `MeshRegion` object, which means that we can apply region transformation commands. For instance, the standard dimensions for an STL file are in millimeters, so if we want a dodecahedron earring that is 2 cm wide, we should resize the imported region to be 20 mm wide:

mmscale = RegionResize[imported, 20]

While one can attach an earwire directly to this wireframe dodecahedron, I prefer to add a ring dedicated for that purpose. I’ll also use `TransformedRegion` and `RotationTransform` twice to rotate our resized model so the ring can be attached at the top of the polyhedron. Pro tip: I figured out the angle I needed to rotate it on the *y* axis by applying `ArcTan` to the dodecahedron vertex that had a zero *y* coordinate:

angle = -ArcTan[vertices[[16, 3]], vertices[[16, 1]]];
rotated = TransformedRegion[
   TransformedRegion[mmscale, RotationTransform[angle, {0, 1, 0}]],
   RotationTransform[Pi/2, {0, 0, 1}]];

The ring is created by using a `ParametricPlot3D` command to draw a circular path. This has been converted into a torus by using `Tube` as a `PlotStyle` option. Specifying the number of `PlotPoints` ensures that the ring is smooth instead of faceted:

ring = ParametricPlot3D[1.7 {Cos[x], 0, Sin[x]} + {0, 0, 11.2}, {x, 0, 2 Pi},
   PlotStyle -> Tube[.5, PlotPoints -> 30], PlotPoints -> 40];
earring = Show[rotated, ring, PlotRange -> All]

Since we’ve mixed a region with a `ParametricPlot3D` object, I’ll export and reimport so I can make a second earring copy with `TranslationTransform`:

one = Import[Export[NotebookDirectory[] <> "dodecahearring.stl", earring]];
pair = Show[one, TransformedRegion[one, TranslationTransform[{22, 0, 0}]]]
Export[NotebookDirectory[] <> "dodecahedron.pair.stl", pair]

This 3D model can now be uploaded to a 3D printing service and printed just for you, like the pair you can order here.

3D-printed jewelry seems to inhabit a sweet spot for 3D printing. When a piece is printed in nylon using selective laser sintering, it is inexpensive and can be dyed to be bright and eye-catching. A piece in gold, silver or brass is created through a lost-wax casting method. A high-resolution wax model is 3D printed, a plaster mold is formed around it and the wax is replaced by molten metal. Even though metal prints are more expensive than the nylon, they are still affordable because of the size of the models. Be aware that for 3D-printed objects, if you scale a model by a factor of two, that increases the volume of the material (and therefore its cost) by a factor of eight!

Now that we’ve made some earrings together and explored a number of key functions, I’m excited to share a number of my mathematically inspired pieces of jewelry, all 3D modeled using the Wolfram Language.

3D printing allows for intricate designs that wouldn’t be possible to create in any other way, such as when they consist of interlocking pieces. Some of my all-time bestselling designs are these dangling cubes earrings and the matching interlocking octahedron necklace. I aligned the shapes carefully to interlock and optimized their geometry to reduce printing costs.

This past holiday season, my new Möbius necklace made a strong impression. Its creation involved writing a custom mathematical function for the Möbius strip (`ParametricPlot3D` was helpful here) and carefully choosing points on its boundary to make the triangulation aesthetically pleasing.
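The custom function itself isn’t reproduced here, but the textbook Möbius strip parametrization conveys the idea of what a function like `ParametricPlot3D` gets fed. As a sketch (in Python rather than the Wolfram Language, and using the standard parametrization rather than the necklace’s actual one):

```python
from math import cos, sin, pi

def mobius(u, v, R=1.0, w=0.5):
    """Textbook Möbius strip: u in [0, 2*pi) runs around the strip,
    v in [-1, 1] runs across a strip of width w, R is the center radius.
    The cross-section rotates by u/2, producing the half-twist."""
    r = R + (w * v / 2) * cos(u / 2)
    return (r * cos(u), r * sin(u), (w * v / 2) * sin(u / 2))

# Sample boundary and centerline points, e.g. as input to a triangulation.
points = [mobius(2 * pi * i / 60, v)
          for i in range(60) for v in (-1.0, 0.0, 1.0)]
```

The half-twist shows up in the fact that the edge at `(u=0, v=1)` meets the edge at `(u=2*pi, v=-1)`, which is what makes the boundary a single closed curve.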

This bubble pendant is a piece of generative art, which means I designed an algorithm that places the rings and chooses their radii at random, without my specifying their final positions. Of course, I made liberal use of the `RandomReal` function. Read more about the pendant in this Wolfram Community post.

One of my newer pieces is this collection based on the mathematics of an Apollonian circle packing. Such an arrangement starts with four mutually tangent circles (the outer circle and three inner circles), each pair touching at a single point. The rest of the circles are generated by removing one of the four circles and finding the replacement circle that is tangent to each of the remaining three. This process can continue indefinitely with smaller and smaller circles. The bubbly Apollonian earrings are a mismatched pair in which the initial choice of circles is random. They also have a bubbly feel because the outermost circle was removed.

With the Apollonian necklace, the wearer’s neck lies where the largest internal circle would be. The circle replacement procedure was automated in Mathematica and run until all circles larger than a given cutoff were included.
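The replacement circle can be computed in closed form: by Descartes’ circle theorem, the curvatures (reciprocals of radii, taken negative for the enclosing circle) of four mutually tangent circles satisfy a quadratic whose two roots sum to 2(k1 + k2 + k3). A sketch of the curvature step (in Python rather than the Wolfram Language; the circle centers require a similar linear computation over complex coordinates, omitted here):

```python
def replace_circle(k1, k2, k3, k4):
    """Curvature of the other circle tangent to circles k1, k2, k3,
    given that k4 is one of the two tangent circles. By Descartes'
    circle theorem the two solutions satisfy k4 + k4' = 2*(k1 + k2 + k3).
    Curvatures are reciprocal radii, negative for the enclosing circle."""
    return 2 * (k1 + k2 + k3) - k4

# Classic gasket seed: enclosing circle -1 with inner circles 2, 2, 3.
assert replace_circle(-1, 2, 3, 2) == 6  # removing one "2" yields a "6"
```

Because each replacement only adds and subtracts curvatures, iterating the rule until every circle exceeds a curvature cutoff (i.e. drops below a radius cutoff) is cheap, which is why the procedure can be automated and run to whatever depth the print resolution allows.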

Sometimes I hit upon a concept that seems too fun not to make. That was the case with these bold and unique rotini earrings, so called because of their pasta-like shape. These cylindrical earrings were created by rotating the shape of a plus sign (+) around the central axis based on the graph of mathematical functions, such as a parabola, an exponential function and a sawtooth function.

Another source of aesthetic inspiration is the mathematics of fractals. Their iterative nature makes them a perfect match for programming in Mathematica. Below are my earrings based on the Koch tetrahedron, a three-dimensional fractal. You start off with a tetrahedron, and on each of the four sides, you build a smaller tetrahedron. This new shape has many more smaller triangular sides, and on each of those, you build an even smaller tetrahedron. I’ve stopped here for this design, but if you keep going (placing smaller and smaller tetrahedra), you’ll have the whole fractal, a three-dimensional analog of the Koch snowflake. Surprisingly, the limiting shape of this fractal fits perfectly into a cube.
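Assuming the standard construction, where each triangular face is subdivided at its edge midpoints and a tetrahedron is erected on the middle triangle, the face count grows sixfold per iteration; a quick sketch of that bookkeeping (in Python rather than the Wolfram Language):

```python
def koch_tetrahedron_faces(n):
    """Number of triangular faces after n Koch-tetrahedron iterations.
    Each face is subdivided at its edge midpoints into 4 triangles and the
    middle one is replaced by the 3 exposed faces of a new tetrahedron,
    so every face becomes 3 + 3 = 6 faces."""
    faces = 4  # the starting tetrahedron
    for _ in range(n):
        faces *= 6
    return faces
```

Under that assumption, stopping after two rounds of tetrahedra, as in the design described above, already means 4 · 6² = 144 faces, which is part of why fractal pieces get expensive to model and print quickly.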

These knight’s tour earrings are some of my personal favorites. I love the black-and-white chess theme, which fits because the two earrings trace two different knight’s tours of positions in a 3 × 3 × 3 cube. I created a knight-move graph in Mathematica and used `FindHamiltonianCycle` to find two visibly different tours. I also had to use some trigonometry to stand the cubes on their corners.
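The knight-move graph is straightforward to reconstruct. Here is a sketch of the construction (in Python rather than the Wolfram Language; the step of handing the graph to `FindHamiltonianCycle` is not repeated here). It also exposes a geometric quirk: the center cell has no legal knight moves at all, since every knight move changes some coordinate by 2, which from the center always leaves the cube.

```python
from itertools import product

# Cells of the 3 x 3 x 3 cube, indexed 0..2 in each coordinate.
cells = list(product(range(3), repeat=3))

# A 3D knight move changes one coordinate by +-2, another by +-1,
# and leaves the third unchanged: 24 moves in all.
moves = [(dx, dy, dz)
         for dx, dy, dz in product((-2, -1, 0, 1, 2), repeat=3)
         if sorted(map(abs, (dx, dy, dz))) == [0, 1, 2]]

def neighbors(cell):
    """All cells reachable from `cell` by one knight move, staying in the cube."""
    return [tuple(x + d for x, d in zip(cell, m))
            for m in moves
            if all(0 <= x + d <= 2 for x, d in zip(cell, m))]

graph = {cell: neighbors(cell) for cell in cells}
# graph[(1, 1, 1)] is empty, so any tour runs over the 26 non-center cells.
```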

Some of my earliest pieces are these statement pendants based on the always visually pleasing Voronoi diagrams (available through the `VoronoiMesh` function). The honeycomb pendant required intersecting the Voronoi diagram with other shapes. (`RegionIntersection` was useful here.)

The points defining the Voronoi diagram in the Fibonacci snowflake pendant lie along a Fibonacci spiral.

I’ll end this photo tour with a design that I can stare at for hours: my introspection necklace. This pendant is inspired by the fourth dimension; if you look carefully, you can see that it was constructed by combining two hypercubes. I have found that 3D design requires thinking about the layers that are visible and what parts of a design block other parts of the design. The ability to partially see through an object makes for a more intricate and captivating item.

Thanks for the opportunity to share my mathematical jewelry and my design process with you.

Readers of this blog can take 10% off all purchases at Hanusa Design through Pi Day, March 14, 2021, with promo code **WOLFRAM**.

The following are some additional resources if you want to learn more about how I use Wolfram technologies:

- I created a webpage about 3D design in Mathematica.
- I recorded lectures from my fall 2020 Mathematical Computing class.
- I have used Mathematica to make mathematical art, including digital images and 3D-printed sculptures.

*Guest author Christopher Hanusa is a research mathematician and mathematical artist at Queens College of the City University of New York in Queens, New York. In his spare time, he is an entrepreneur, designing mathematical jewelry for Hanusa Design using the Wolfram Language.*

Head to the Wolfram Store to get a pair of Hanusa Design’s 3D Spikey earrings, and try creating your own 3D Wolfram Language jewelry with a free Wolfram|One trial.