Wind tunnels are devices used by experimental fluid dynamics researchers to study how air (or any other fluid) flows around an object. Since the object itself cannot move inside the tunnel, a controlled stream of air, generally produced by a powerful fan, is generated and the object is placed in the path of this stream. This produces the same effect as the object moving through stationary air. Such experiments are quite useful in understanding the aerodynamics of an object.

There are different kinds of wind tunnels. The simplest wind tunnel is a hollow pipe, or rectangular box. One end of this tunnel is fitted with a fan, while the other end is open. Such a tunnel is called an open-return wind tunnel. Experimentally, they are not the most efficient or reliable, but would work well if one were to build a computational wind tunnel—which is what we aim to do in this blog post. Here is the basic schematic of the wind tunnel that we will develop:

Our wind tunnel will be a 2D wind tunnel. Fluid will enter the tunnel from the left and leave from the right. The top and bottom are solid walls. I should point out that since this is a computational wind tunnel, we are afforded great flexibility in choosing what kinds of boundaries it can possess. For example, we could make it so that the left, right and bottom walls do not move but the top wall does. We would then have the popular case of flow in a box.

When one starts thinking about computational fluid dynamics, one's thoughts invariably jump to the famous Navier–Stokes equations. These are the governing equations that dictate the behavior of fluid flow. At this point, it might seem like we are going to use the Navier–Stokes equations to help us build a wind tunnel. But as it turns out, there are other ways to study the behavior of fluid flow without solving the Navier–Stokes equations. One of those methods is called the lattice Boltzmann method (LBM).

If we were to use the Navier–Stokes equations, we would be dealing with a complicated system of partial differential equations (PDEs). To solve these numerically, we would have to employ various techniques to discretize the derivatives. Once the discretization is done, we are left with a massive system of nonlinear algebraic equations that has to be solved. This is computationally exhausting! The LBM completely bypasses this traditional route. There are no systems of equations to solve in the LBM. Also, many of the operations (which I will describe later) are completely local, which makes the LBM a highly parallelizable method.

In a very simplistic framework, we can think of the LBM as a “bottom-up” approach. In this approach, we perform the simulations in the microscopic, or “lattice,” domain. Imagine you have a physical or macroscopic domain. If we were to zoom in on a single point in this macroscopic domain, there would be a number of particles that are interacting with each other based on some “rule” about their interaction:

For example, if two particles hit each other, how would they react or bounce off each other? These particles follow some discrete rule. Now, if we were to let these particles evolve over time based on these rules (you can see how this is closely related to cellular automata) and take averages, then these averages could be used to describe certain macroscopic quantities. As an example, the HPP model (named after Hardy, Pomeau and de Pazzis) we saw in the previous figure can be used to simulate the diffusion of gases.

Though this discrete approach sounds enticing (and researchers in the mid-1970s did try out its feasibility), it has a number of drawbacks. One of the major issues was the statistical noise in the final result. However, it is from these principles and attempts to overcome the drawbacks that the LBM emerged. A web search on theoretical aspects of this method will reveal many links to derivations and the final equations. In this blog, rather than focus on the theoretical aspect, I would like to focus on the final underlying mechanism through which the lattice Boltzmann simulations are performed. So I will only touch on the final equations we will need to develop our wind tunnel. First, assume that the density and the velocities are described by the following equations:

ρ = ∑_{i=1}^{9} *f _{i}*,  *u* = (1/ρ) ∑_{i=1}^{9} *f _{i}* *e*_{x,i},  *v* = (1/ρ) ∑_{i=1}^{9} *f _{i}* *e*_{y,i}

… where the *f _{i}* are called distribution functions and *e _{i}* = (*e*_{x,i}, *e*_{y,i}) are nine discrete lattice velocities whose components are all −1, 0 or 1.

The computation of the density and velocities can be done as follows:

ex = {1, 1, 0, -1, -1, -1, 0, 1, 0};
ey = {0, 1, 1, 1, 0, -1, -1, -1, 0};

LBMDensityAndVelocity[f_] := Block[{rho, u, v},
  rho = Total[f, {2}];
  u = (f.ex)/rho;
  v = (f.ey)/rho;
  {rho, u, v}
];
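For readers outside Mathematica, the same moment computation can be sketched in Python with NumPy. The function name and array layout here are illustrative, not part of the package:

```python
import numpy as np

# D2Q9 discrete velocity components, in the same ordering as the Mathematica code
ex = np.array([1, 1, 0, -1, -1, -1, 0, 1, 0])
ey = np.array([0, 1, 1, 1, 0, -1, -1, -1, 0])

def lbm_density_and_velocity(f):
    """f has shape (npoints, 9): one distribution vector per grid point."""
    rho = f.sum(axis=1)     # density: zeroth moment of the f_i
    u = (f @ ex) / rho      # x velocity: first moment along ex
    v = (f @ ey) / rho      # y velocity: first moment along ey
    return rho, u, v
```

Since the `ex` and `ey` components each sum to zero, a uniform distribution yields zero velocity, which is a handy sanity check.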

These nine discrete velocities are associated with their respective distribution functions *f _{i}* and can be visualized as follows:

At each (discrete) point in the lattice domain, there will exist nine of these distribution functions. A model that uses these nine discrete velocities and functions *f _{i}* is called the D2Q9 model. The distribution functions evolve according to the lattice Boltzmann equation:

*f _{i}*(*x* + *e _{i}*δt_{LBM}, *t* + δt_{LBM}) = *f _{i}*(*x*, *t*) + Ω_{i}

The term Ω_{i} is a complicated “collision” term that basically dictates how the various *f _{i}* interact with each other. Using a number of simplifications and approximations, this equation is reduced to the following:

*f _{i}*(*x* + *e _{i}*δt_{LBM}, *t* + δt_{LBM}) = *f _{i}*(*x*, *t*) + (*f _{i}*^{eq} − *f _{i}*)/τ

… where *f _{i}*^{eq} are equilibrium distribution functions (described shortly) and τ is a relaxation parameter. This evolution is carried out in two stages: a streaming step and a collision step.

Since δt_{LBM} = 1 and the components of the *e _{i}* are all −1, 0 or 1, the streaming step is given by:

*f _{i}*(*x* + *e _{i}*, *t* + 1) = *f _{i}*(*x*, *t*)

To visualize the streaming step, imagine the discretized domain. On each grid point of this domain are nine of these distribution functions (as shown in the following figure). Note the various colors for each of the grid points. The length of each arrow indicates the magnitude of the respective distribution functions:

Based on the mathematical formulation, each distribution will be streamed in the respective directions as follows:

Note the colors and where they ended up. It would be helpful to just focus on the center point. Before the streaming step, the arrows (which represent the different *f _{i}*) are all green. After the streaming step, notice where the green arrows land on the surrounding grid points. This, in short, is the streaming step. In Mathematica, this step is done very easily using the built-in functions `RotateRight` and `MapThread`:

right = {0, 1}; left = {0, -1}; bottom = {1, 0}; top = {-1, 0}; none = {0, 0};
streamDir = {right, top + right, top, top + left, left, bottom + left, bottom, bottom + right, none};

LBMStream[f_] := Transpose[MapThread[Flatten[RotateRight[#1, #2]] &, {f, streamDir}]];
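The same streaming step can be sketched in NumPy, where `np.roll` plays the role of `RotateRight`. This is an illustrative sketch (the sign convention depends on how array rows are mapped to the y direction), not the package code:

```python
import numpy as np

# Shift offsets per direction; each slice f[i] is one distribution field
ex = [1, 1, 0, -1, -1, -1, 0, 1, 0]
ey = [0, 1, 1, 1, 0, -1, -1, -1, 0]

def lbm_stream(f):
    """f has shape (9, ny, nx); shift each f_i one cell along its direction."""
    return np.stack([np.roll(f[i], shift=(ey[i], ex[i]), axis=(0, 1))
                     for i in range(9)])
```

A single "particle" placed at the center moves exactly one cell in its direction, and no mass is created or destroyed.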

When one does the streaming, special care has to be taken to address the boundaries of the wind tunnel. When the streaming is done, there are certain *f _{i}* that become unknown at the edges and corners of the domain. The schematic (shown in the following figure) shows which *f _{i}* are unknown at each edge and corner:

To understand how the unknown *f _{i}* are computed, let us consider the top wall of the wind tunnel. For this edge, the distributions *f*_{6}, *f*_{7}, *f*_{8} are unknown after the streaming. They are modeled as:

*f _{i}* = *f _{i}*^{eq}(ρ, *U*_{BC}, *V*_{BC}) + *w _{i}*(*e*_{x,i}*Q _{x}* + *e*_{y,i}*Q _{y}*), *i* = 6, 7, 8

… where *Q* = (*Q _{x}*, *Q _{y}*) is a two-dimensional unknown vector, and *f _{i}*^{eq} are the equilibrium distribution functions:

*f _{i}*^{eq} = *w _{i}*ρ(1 + 3(*e _{i}*·*u*) + (9/2)(*e _{i}*·*u*)² − (3/2)(*u*·*u*))

The details of this approach are given in the paper by Ho, Chang, Lin and Lin. This operation (which is highly parallelizable) can be done efficiently in Mathematica as:

wts = {1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 4/9};

LBMEquilibriumDistributions[rho_, u_, v_] := Module[{uv, euv, velMag},
  uv = Transpose[{u, v}];
  euv = uv.{ex, ey};
  velMag = Transpose[ConstantArray[Total[uv^2, {2}], 9]];
  (Transpose[{rho}].{wts})*(1 + 3*euv + (9/2)*euv^2 - (3/2)*velMag)
];
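In Python, the same equilibrium formula, *f _{i}*^{eq} = *w _{i}*ρ(1 + 3*e _{i}*·*u* + (9/2)(*e _{i}*·*u*)² − (3/2)|*u*|²), can be sketched as follows (names and array layout are illustrative):

```python
import numpy as np

ex = np.array([1, 1, 0, -1, -1, -1, 0, 1, 0])
ey = np.array([0, 1, 1, 1, 0, -1, -1, -1, 0])
w  = np.array([1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 4/9])

def equilibrium(rho, u, v):
    """rho, u, v are 1D arrays over grid points; returns shape (npoints, 9)."""
    eu = np.outer(u, ex) + np.outer(v, ey)   # e_i . u at each point
    usq = (u**2 + v**2)[:, None]             # |u|^2
    return rho[:, None] * w * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
```

Because the weights sum to 1, the equilibrium distributions at rest are just the weights scaled by the density.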

Notice that *f _{i}*^{eq} depends on the density and the velocities. Substituting the modeled distributions into the definitions of the density and momentum at the wall gives:

ρ = ∑_{i=1}^{9} *f _{i}*, ρ*U*_{BC} = ∑_{i=1}^{9} *e*_{x,i}*f _{i}*, ρ*V*_{BC} = ∑_{i=1}^{9} *e*_{y,i}*f _{i}*

… where *U*_{BC}, *V*_{BC} are the velocities specified by the user for the boundary. The resulting equations are linear. There are three unknowns {ρ, *Q _{x}*, *Q _{y}*} and three equations, so the system can be solved symbolically:

Clear[f, feq, rho, ubc, vbc];
feq = LBMEquilibriumDistributions[{rho}, {ubc}, {vbc}][[1]];
eqn1 = rho*ubc == Sum[ex[[i]]*f[i], {i, 5}] +
    Sum[ex[[i]]*(feq[[i]] + wts[[i]]*(ex[[i]]*qx + ey[[i]]*qy)), {i, 6, 8}] + ex[[9]]*f[9];
eqn2 = rho*vbc == Sum[ey[[i]]*f[i], {i, 5}] +
    Sum[ey[[i]]*(feq[[i]] + wts[[i]]*(ex[[i]]*qx + ey[[i]]*qy)), {i, 6, 8}] + ey[[9]]*f[9];
eqn3 = rho == Sum[f[i], {i, 5}] +
    Sum[(feq[[i]] + wts[[i]]*(ex[[i]]*qx + ey[[i]]*qy)), {i, 6, 8}] + f[9];
res = FullSimplify[First[Solve[{eqn1, eqn2, eqn3}, {rho, qx, qy}]]]

Once {ρ, *Q _{x}*, *Q _{y}*} are known, the unknown distributions *f*_{6}, *f*_{7}, *f*_{8} at the top wall can be computed:

FullSimplify[Table[feq[[i]] + wts[[i]]*(ex[[i]]*qx + ey[[i]]*qy), {i, 6, 8}] /. res]

When dealing with outflow boundary conditions, we are essentially trying to impose a 0-gradient condition across the boundary, i.e. ∂*u*/∂*n* = 0. The simplest thing one can do is to take the velocity from one grid point behind the boundary and use it to compute the *f _{i}* at the boundary. However, that leads to inaccurate and, in many cases, unstable results.

Consider the right wall as shown in the following figure:

After the streaming step, the distributions *f*_{4}, *f*_{5}, *f*_{6} are unknown. To impose an outflow condition on the distribution functions *f _{i}*, we advect them out of the domain with the convective equation:

∂*f _{i}*/∂*t* + *U* ∂*f _{i}*/∂*x* = 0

… where *U* is the velocity from the previous time step at the grid points (*j*, *k*), *j* = 1, 2, …, *M*, and the unknown *f _{i}* at the boundary are obtained by discretizing this equation in time and space.

A second approach to imposing the outflow condition would be to simply do the following:

*f _{i}*(*M*, *k*) = (4*f _{i}*(*M* − 1, *k*) − *f _{i}*(*M* − 2, *k*))/3

This method is much simpler than the first one, and follows from applying a second-order backward difference formula to the 0-gradient condition for each of the distribution functions.
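As a sketch of this second approach in Python (assuming the last array axis runs along x, with the outflow at the last column), setting the second-order backward difference (3*f _{M}* − 4*f*_{M−1} + *f*_{M−2})/(2δx) to zero gives the update below. The function name is illustrative:

```python
import numpy as np

def outflow_right(f):
    """Zero-gradient outflow at the last column via the second-order
    backward difference: f_M = (4 f_{M-1} - f_{M-2}) / 3."""
    f[..., -1] = (4.0 * f[..., -2] - f[..., -3]) / 3.0
    return f
```

A quick check: a field that is already uniform in x satisfies the 0-gradient condition, so the update leaves it unchanged.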

Once the streaming is done and boundary conditions are imposed, a “collision” is performed (recall the rule-based approach). This step is basically figuring out how the ensemble-averaged particles are going to interact with each other. This step is a bit complicated, but using the BGK approximation, the collision step can be written as:

*f _{i}* ← *f _{i}* + (*f _{i}*^{eq}(ρ, *u*, *v*) − *f _{i}*)/τ, τ = 3ν_{LBM} + 1/2

… where ρ, *u*, *v* are the density and velocities computed after the streaming step and boundary-adjustment step. The term ν_{LBM} is the kinematic viscosity in the lattice Boltzmann domain, and τ is the relaxation parameter. This term is quite important and will be discussed in a bit.

This translates to two very simple lines of code in Mathematica:

LBMCollide[f_, u_, v_, rho_, tau_] := Block[{feq},
  feq = LBMEquilibriumDistributions[rho, u, v];
  f + (feq - f)/tau
];
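The collision step translates just as directly into Python. The sketch below inlines an illustrative equilibrium helper so that it is self-contained; none of these names come from the package:

```python
import numpy as np

ex = np.array([1, 1, 0, -1, -1, -1, 0, 1, 0])
ey = np.array([0, 1, 1, 1, 0, -1, -1, -1, 0])
w  = np.array([1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 4/9])

def equilibrium(rho, u, v):
    eu = np.outer(u, ex) + np.outer(v, ey)
    usq = (u**2 + v**2)[:, None]
    return rho[:, None] * w * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

def collide(f, rho, u, v, tau):
    # BGK relaxation: each f_i moves a fraction 1/tau toward local equilibrium
    return f + (equilibrium(rho, u, v) - f) / tau
```

With τ = 1, the distributions relax fully to equilibrium in a single step, which makes the behavior easy to verify.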

From a visual standpoint, after the collision, the distribution functions are adjusted based on the previous formula, and we get new distributions:

Notice the change in the lengths of the arrows. That’s it! That is all it takes to run a lattice Boltzmann simulation.

So how is something like this supposed to simulate the fabled Navier–Stokes equations? To answer that question, we would have to go into a lot of math and multiscale analysis, but in short, the form of *f*^{eq} dictates the macroscopic equations that the LBM simulates. So it is actually possible to simulate a whole bunch of PDEs by using the appropriate equilibrium functions. Once again, I will leave it to the reader to go out into the internet world and get a sample of the remarkable things people simulate using the LBM.

Having seen the basic mechanism of the LBM, the obvious next question is: how do the simulations that are performed in the lattice-based system translate to the physical world? This is done by matching the non-dimensional parameters of the lattice system and the physical system. In our case, it is a single non-dimensional parameter: the Reynolds number. The Reynolds number (Rey) is defined as:

Rey = *U*_{phy}*L*_{phy}/ν_{phy}

… where *U*_{phy} is a characteristic velocity, *L*_{phy} is the characteristic length and ν_{phy} is the kinematic viscosity in the physical domain. In order to simulate the flow, the user is expected to specify the Reynolds number, the characteristic velocity and the characteristic length. From these three pieces of information, the kinematic viscosity is determined, and subsequently the relaxation parameter τ is determined. Using these pieces of information and the underlying equations associated with the LBM, the internal parameters of the simulation are computed.

The characteristic lattice velocity *U*_{LBM} must never exceed 1/√3 ≈ 0.577 (this number is the speed of sound in the lattice domain). The lattice velocity must remain significantly below this value for the method to properly simulate incompressibility. In general, the lattice velocity is taken to be *U*_{LBM} = 0.1. Similarly, the characteristic lattice length *L*_{LBM} represents the number of points used in the lattice domain to represent the characteristic length in the physical domain. *L*_{LBM} is an integer quantity, and is typically user defined.

Let us look at an example to solidify how to relate the lattice simulation to the physical simulation. Let us assume that Rey = 100, *U*_{phy} = 1, *L*_{phy} = 1 and the length of the wind tunnel is *L _{w}* = 15 length units. The physical kinematic viscosity then follows from the Reynolds number: ν_{phy} = *U*_{phy}*L*_{phy}/Rey = 1/100. On the lattice side, with *U*_{LBM} = 0.1 and a user-chosen *L*_{LBM}, the lattice viscosity is ν_{LBM} = *U*_{LBM}*L*_{LBM}/Rey.

This means that the relaxation parameter τ in the BGK model is:

τ = 3ν_{LBM} + 1/2

We now have all the quantities we need to run the simulation. If we now let each lattice time step in the simulation be δt_{LBM} = 1, then we need to know what δt_{phy} is. This is done by equating the viscosities and is given by:

δt_{phy} = ν_{LBM}δx_{phy}²/ν_{phy}, where δx_{phy} = *L*_{phy}/*L*_{LBM} is the physical spacing between lattice points.

Therefore, if we want to run our simulation for *t* = 100 (time units), then in the lattice domain we would be iterating for *t*/δt_{phy} steps.
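The bookkeeping in this example is worth writing out. The Python sketch below assumes, for concreteness, *U*_{LBM} = 0.1 and a lattice resolution of *L*_{LBM} = 60 points; both are illustrative choices, not values prescribed above:

```python
# Physical side, fixed by the problem statement
Rey, U_phy, L_phy = 100, 1.0, 1.0
nu_phy = U_phy * L_phy / Rey            # physical kinematic viscosity = 0.01

# Lattice side, chosen by the user (illustrative values)
U_lbm, L_lbm = 0.1, 60
nu_lbm = U_lbm * L_lbm / Rey            # lattice viscosity = 0.06
tau = 3 * nu_lbm + 0.5                  # BGK relaxation parameter = 0.68

# Mapping lattice steps back to physical time
dx_phy = L_phy / L_lbm                  # physical size of one lattice cell
dt_phy = nu_lbm * dx_phy**2 / nu_phy    # equate viscosities -> dt = 1/600
steps = 100 / dt_phy                    # lattice iterations needed for t = 100
```

Note that the velocity scaling gives the same answer: δt_{phy} = (*U*_{LBM}/*U*_{phy})δx_{phy} = 1/600, which is exactly why ν_{LBM} is defined through the shared Reynolds number.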

The Reynolds number is a remarkable non-dimensional parameter. Rather than specify what fluid we are simulating at what velocities and at what dimension, the Reynolds number ties them all together. This means that if we have two systems of vastly different length, velocity scale and fluid medium, the two flows will behave the same as long as the Reynolds number remains the same.

Let us now talk about how to introduce objects into the wind tunnel. One approach would be to discretize the objects into a jagged “steps” version of the original object and align it with the grid, then impose a no-slip boundary condition on each one of the step edges and corners:

This is really not a good approach because it distorts the original object; if one needed a good representation of the object, they would have to use an extremely fine grid—making it computationally expensive and wasteful. Furthermore, sharp corners can often induce unwanted behavior in the flow. A second approach would be to *immerse* the object into the grid. The boundary of the object is discretized and is immersed into the domain:

Discretizing the boundary of a specified object can easily be done using the built-in function `BoundaryDiscretizeRegion`. We can specify `Disk` or `Circle` to generate a set of points that represents the discretized version of the circular object:

bmr = BoundaryDiscretizeRegion[Disk[]];
pts = MeshCoordinates[bmr];
Show[bmr, Graphics[Point[pts]], ImageSize -> 250]

This method of discretizing the object and placing it inside the grid is called the immersed boundary method (IBM). Once the discretized object is immersed, the effect of that object on the flow needs to be modeled while making sure that the velocity conditions of the boundary are respected. One method of making the flow “feel the presence” of the immersed object is through a method called the direct forcing method. With this approach, the lattice BGK model is modified by adding a forcing term *F _{i}* to the evolution equation:

*f _{i}*(*x* + *e _{i}*, *t* + 1) = *f _{i}*(*x*, *t*) + (*f _{i}*^{eq} − *f _{i}*)/τ + *F _{i}*

… where *F _{i}* is built from *g*, the corrective force induced on a grid point by the object boundaries. The equation for computing the velocity is now modified as:

*u* = (1/ρ)(∑_{i=1}^{9} *f _{i}* *e _{i}* + *g*/2)

The corrective force is computed as:

*g*(*x*, *y*) = ∑_{s} (*U*_{B}(*X _{s}*, *Y _{s}*) − *u*^{*}(*X _{s}*, *Y _{s}*)) δ(*x* − *X _{s}*, *y* − *Y _{s}*)

… where *U*_{B} are the boundary conditions of the object, *u*^{*} are the velocities at the object boundaries if the object was not present, δ is an approximation to the delta function, and (*X _{s}*, *Y _{s}*) are the positions of the boundary points of the object and are generally called Lagrangian boundary points. There are several choices of δ that one can use. We will make use of the following compactly supported function:

δ(*r*) = (1 + √(1 − 3*r*²))/3 for |*r*| ≤ 1/2; (5 − 3|*r*| − √(1 − 3(1 − |*r*|)²))/6 for 1/2 ≤ |*r*| ≤ 3/2; 0 otherwise

This approximation is also called the mollifier kernel and can be defined using the `Piecewise` function:

Clear[deltaFun];
deltaFun[r_?NumericQ] := Piecewise[{
   {(5 - 3 Abs[r] - Sqrt[1 - 3 (1 - Abs[r])^2])/6, 0.5 <= Abs[r] <= 1.5},
   {(1 + Sqrt[1 - 3 r^2])/3, Abs[r] <= 0.5}}];

Plot[deltaFun[r], {r, -2, 2}, ImageSize -> 300]
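The same kernel is easy to write in Python. A useful sanity check, and the property that makes it behave like a discrete delta function, is that the kernel values sum to 1 over integer shifts (illustrative sketch, names are not from the package):

```python
import math

def delta_fun(r):
    """Compactly supported mollifier kernel (support |r| <= 1.5)."""
    r = abs(r)
    if r <= 0.5:
        return (1 + math.sqrt(1 - 3*r**2)) / 3
    if r <= 1.5:
        return (5 - 3*r - math.sqrt(1 - 3*(1 - r)**2)) / 6
    return 0.0
```

For any offset, only three integer shifts fall inside the support, and their kernel values add up to exactly 1.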

The 2D function δ(*x* – *X*, *y* – *Y*) is then given by:

δ(*x* − *X*, *y* − *Y*) = δ((*x* − *X*)/dx)δ((*y* − *Y*)/dy)

… where dx, dy are scaling parameters. For the Lagrangian point (*X*, *Y*) = (1/2, 1/2), the 2D function looks as follows:

Plot3D[deltaFun[(x - 1/2)]*deltaFun[(y - 1/2)], {x, -2, 2}, {y, -2, 2}]

Let’s look at an example to demonstrate this immersed boundary concept, as well as how the function is constructed and how it is used for approximating a function. Assume that a circle is immersed in a rectangular domain:

Clear[deltaFun];
deltaFun[r_?NumericQ] := Piecewise[{
   {(5 - 3 Abs[r] - Sqrt[1 - 3 (1 - Abs[r])^2])/6, 0.5 <= Abs[r] <= 1.5},
   {(1 + Sqrt[1 - 3 r^2])/3, Abs[r] <= 0.5}}];

ng = N[Range[-2, 2, 4/30]];
dx = dy = ng[[2]] - ng[[1]];
grid = Flatten[Outer[List, ng, ng], 1];
n = 30;
bpts = N[CirclePoints[n]];
Graphics[{{Red, PointSize[0.03], Point[bpts]}, {Blue, Point[grid]}}, ImageSize -> 300]

Each Lagrangian boundary point (in red) influences the grid points (in blue) that lie within a certain radius of it, as shown in the following figure:

To get the grid points that are influenced by each of the Lagrangian points, we make use of the `Nearest` function:

dr = 1.5 Sqrt[dx^2 + dy^2];
nf = Nearest[grid -> Automatic];
influenceGridPtsIndex = nf[bpts, {Infinity, dr}];

The function δ(*x* – *X*(*s*), *y* – *Y*(*s*)) for discrete points essentially becomes a matrix:

gp = grid[[#]] & /@ influenceGridPtsIndex;
dd = MapThread[Transpose[#1] - #2 &, {gp, bpts}];
dval = Table[Map[deltaFun[#/dx] &, di[[1]]]*Map[deltaFun[#/dy] &, di[[2]]], {di, dd}];
t = Flatten[MapThread[Thread[Thread[{#1, #2}] -> #3] &, {Range[n], influenceGridPtsIndex, dval}]];
dMat = SparseArray[t, {n, Length[grid]}]

This matrix can now be used to compute the values at the Lagrangian points. For example, let us assume that the underlying grid has values on it defined by *h*(*x*, *y*) = `Sin`(*x* + *y*), then the values at the Lagrangian points are computed as:

*H* = *D*·*h*

... where *D* is the discretization of δ (the matrix computed previously) and *h* is the vector of function values *h*(*x _{j}*, *y _{k}*) at the grid points:

bptVal = dMat.Sin[Total[grid, {2}]];

We can compare the computed interpolated value to the actual values:

ListLinePlot[{bptVal, Sin[Total[bpts, {2}]]}, ImageSize -> 300,
 PlotStyle -> {Red, Black}, PlotLegends -> {"Computed", "Actual"}]

Similarly, the function values at the grid can be computed using the function values at the Lagrangian points as:

wts = IdentityMatrix[n];
wts[[1, 1]] = wts[[-1, -1]] = 0.5;
wts *= (Norm[bpts[[2]] - bpts[[1]]])/(dx*dy);
gridVal = Sin[Total[bpts, {2}]].wts.dMat;
hfun = Interpolation[Transpose[{grid, gridVal}]];
Plot3D[hfun[x, y], {x, -2, 2}, {y, -2, 2}, PlotRange -> All]

As you can see, since the δ functions have compact support, only grid points that lie in their radius of influence get interpolated values. All grid points that are not in their support radius are 0.
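The interpolation idea can be checked with a tiny one-dimensional Python sketch: the value at a Lagrangian point *X* is recovered as ∑_{j} *h*(*x _{j}*) δ((*x _{j}* − *X*)/dx). Because the kernel weights sum to 1, a constant field is reproduced exactly, and a smooth field is reproduced approximately (illustrative code, not the package):

```python
import math

def delta_fun(r):
    r = abs(r)
    if r <= 0.5:
        return (1 + math.sqrt(1 - 3*r**2)) / 3
    if r <= 1.5:
        return (5 - 3*r - math.sqrt(1 - 3*(1 - r)**2)) / 6
    return 0.0

def interpolate(xs, hs, X, dx):
    """Kernel-weighted interpolation of grid values hs at Lagrangian point X."""
    return sum(h * delta_fun((x - X) / dx) for x, h in zip(xs, hs))

dx = 0.1
xs = [j * dx for j in range(40)]            # uniform grid on [0, 3.9]
hs = [math.sin(x) for x in xs]              # grid field h(x) = sin(x)
val = interpolate(xs, hs, X=1.234, dx=dx)   # should be close to sin(1.234)
```

Only the three grid points inside the kernel's support radius contribute to the sum, mirroring the compact support seen in the 2D plots above.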

So, to remind the readers, a single step in the lattice Boltzmann simulation consists of the following operations:

1. Perform the streaming step.
2. Adjust the distribution functions at the boundaries.
3. Perform the collision step.
4. Compute the velocities at the grid points.
5. Compute the velocities at the Lagrangian boundary points of the objects.
6. For each boundary point of the object, compute the corrective force needed to enforce the boundary conditions at that point.
7. Compute the corrective forces at the lattice grid points using the forces obtained in step 6.
8. Perform the streaming and collision steps, taking the forces into account.
9. Calculate the density and velocities.
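Stripped of the immersed boundary forcing and the tunnel walls (which the package handles), the core of such a step can be condensed into a minimal periodic-domain sketch in Python. The setup and names are illustrative; a basic property worth checking is that streaming plus BGK collision conserve mass:

```python
import numpy as np

ex = np.array([1, 1, 0, -1, -1, -1, 0, 1, 0])
ey = np.array([0, 1, 1, 1, 0, -1, -1, -1, 0])
w  = np.array([1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 4/9])

def equilibrium(rho, u, v):
    # f_i^eq = w_i rho (1 + 3 e.u + 4.5 (e.u)^2 - 1.5 |u|^2), broadcast over the grid
    eu = ex[:, None, None]*u + ey[:, None, None]*v
    usq = u**2 + v**2
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

def step(f, tau):
    # 1. streaming: shift each population one cell along its direction (periodic)
    for i in range(9):
        f[i] = np.roll(f[i], shift=(ey[i], ex[i]), axis=(0, 1))
    # 2. moments: density and velocities
    rho = f.sum(axis=0)
    u = (ex[:, None, None]*f).sum(axis=0) / rho
    v = (ey[:, None, None]*f).sum(axis=0) / rho
    # 3. BGK collision
    return f + (equilibrium(rho, u, v) - f) / tau

# tiny periodic box, started at equilibrium with a small shear perturbation
ny, nx, tau = 16, 16, 0.8
rho0 = np.ones((ny, nx))
u0 = 0.01*np.sin(2*np.pi*np.arange(nx)/nx)*np.ones((ny, 1))
f = equilibrium(rho0, u0, np.zeros((ny, nx)))
mass0 = f.sum()
for _ in range(20):
    f = step(f, tau)
```

With periodic shifts and the BGK update, the total of all distributions stays constant up to floating-point roundoff, which is the discrete analogue of mass conservation.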

These are all the ingredients needed to run a wind tunnel simulation using the LBM in 2D.

To make this wind tunnel easy to use, I have put all these functions into a package called WindTunnel2DLBM. It contains a number of features and allows for easy setup of the problem by a user. I would recommend the interested user go through the package documentation for details. The focus here will be on the various examples and the flexibility our computational wind tunnel setup offers.

The first example is the flow in the wind tunnel. This is perhaps the simplest case. A schematic of the domain and its associated boundary conditions are shown here:

In this case, there is only one length scale in the problem: the height of the wind tunnel. Therefore, that becomes our characteristic length scale. The characteristic velocity is the maximum velocity coming from the inlet, which is set to 1. All that remains is to specify the Reynolds number at which the simulation is to be carried out; this is user defined as well. Let us take the length of the wind tunnel to be 6 units, spanning (0, 6), and the height to be 2 units, spanning (−1, 1). We now set up the simulation:

<< WindTunnel2DLBM`;

Rey = 200; charLen = 1; charVel = 1;
ic = Function[{x, y}, {0, 0}];
state = WindTunnelInitialize[{Rey, charLen, charVel}, ic, {x, 0, 6}, {y, -1, 1}, t]

Notice that we did not provide any boundary condition information here. That is because the wind tunnel defaults to the flow in a channel case, and therefore all the boundary conditions are automatically imposed. All we have to specify are the characteristic information and the dimensions of the wind tunnel.

The simulation is performed using a fixed time step. The time step is internally computed and can be accessed from the following property:

state["TimeStep"]

Let us now run the simulation for a period of 5 time units:

WindTunnelIterate[state, 5];
state

We can query the data at the final step of the simulation:

{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];

The solution can be visualized in a variety of ways. For a 2D simulation, a streamline plot can reveal some useful information. Let us visualize the streamline plot:

StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 6}, {y, -1, 1},
 AspectRatio -> Automatic, ImageSize -> 400, PlotRangePadding -> None]

Notice that the streamlines are not completely parallel (there is a bit of deviation). To see why, let us look at the profiles of the *u* component of the velocity field at various *x* locations:

Plot[Evaluate[{usol[#, y] & /@ Range[0, 6, 2]}], {y, -1, 1}, ImageSize -> 300,
 PlotLegends -> Range[0, 6, 2], PlotStyle -> {Black, Red, Blue, Green}]

This indicates that the velocities have a spatial dependence. For this particular problem, we should expect the flow to reach steady state, i.e. the flow should not vary with time. Let us run the simulation for an additional 20 time units and see the velocity profile:

WindTunnelIterate[state, 20];
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
Plot[Evaluate[{usol[#, y] & /@ Range[0, 6, 2]}], {y, -1, 1}, ImageSize -> 300,
 PlotLegends -> Range[0, 6, 2], PlotStyle -> {Black, Red, Blue, Green}]

We see that the velocity profiles at the various *x* locations are nearly identical, which indicates that the flow is indeed reaching steady state.

Let us now look at the classic flow-in-a-box problem. Here’s the schematic of the domain and the boundary condition information:

The top wall moves with a horizontal velocity of 1 (length units/time units), while all the others are stationary, no-slip walls. The circles inside the box denote the kind of fluid behavior that might be expected. As the top wall moves, the wall drags the fluid below it, causing the fluid to rotate—that is the big circle in the schematic, and it represents a vortex. If there is sufficient strength in the main vortex, then we can expect it to start causing smaller, secondary vortices to form. Our hypothesis is that the strength of the vortex should be related to the Reynolds number. Let us see what happens by running the simulation at Reynolds number 100:

state = WindTunnelInitialize[{100, 1, 1}, Function[{x, y}, {0, 0}],
   {x, 0, 1}, {y, 0, 1}, t, "CharacteristicLatticePoints" -> 60,
   "TunnelBoundaryConditions" -> {"Left" -> "NoSlip", "Right" -> "NoSlip",
     "Bottom" -> "NoSlip", "Top" -> Function[{x, y, t}, {1, 0}]}];

We now iterate for 50 time units:

WindTunnelIterate[state, 50];

Visualizing the result shows us that there is a primary vortex that forms near the middle, while a smaller, secondary vortex forms at the bottom right of the box:

{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 1}, {y, 0, 1}, AspectRatio -> Automatic,
 StreamPoints -> Fine, PlotRangePadding -> None, ImageSize -> 300]

If the Reynolds number is ramped up, then these secondary vortices become stronger and larger, and additional vortices start developing in the corners. Let us look at the case when the Reynolds number is 1,000:

state = WindTunnelInitialize[{1000, 1, 1}, Function[{x, y}, {0, 0}],
   {x, 0, 1}, {y, 0, 1}, t, "CharacteristicLatticePoints" -> 60,
   "TunnelBoundaryConditions" -> {"Left" -> "NoSlip", "Right" -> "NoSlip",
     "Bottom" -> "NoSlip", "Top" -> Function[{x, y, t}, {1, 0}]}];

We will again iterate for 50 time units:

WindTunnelIterate[state, 50];

Let us visualize the result:

{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 1}, {y, 0, 1}, AspectRatio -> Automatic,
 StreamPoints -> Fine, PlotRangePadding -> None, ImageSize -> 300]

Notice that the primary vortex has moved closer to the center; from the looks of it, it’s strong enough to be able to form secondary vortices at the bottom left and bottom right of the domain.

Let us now see what happens if we do the simulation on a “tall” box rather than a square one. The boundary conditions remain the same, but the domain changes in the *y* direction:

state = WindTunnelInitialize[{1000, 1, 1}, Function[{x, y}, {0, 0}],
   {x, 0, 1}, {y, 0, 2}, t, "CharacteristicLatticePoints" -> 60,
   "TunnelBoundaryConditions" -> {"Left" -> "NoSlip", "Right" -> "NoSlip",
     "Bottom" -> "NoSlip", "Top" -> Function[{x, y, t}, {1, 0}]}];

Run the simulation and use `ProgressIndicator` to track the progress. This simulation will take a few minutes:

ProgressIndicator[Dynamic[state["CurrentTime"]], {0, 50}]
AbsoluteTiming[WindTunnelIterate[state, 50]]

Visualize the streamlines:

{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 1}, {y, 0, 2}, AspectRatio -> Automatic,
 StreamPoints -> Fine, PlotRangePadding -> None, ImageSize -> Medium]

In the tall-box scenario, a primary vortex is developed near the top wall, and that vortex in turn creates another vortex below it. If that second vortex is strong enough, it will create vortices at the bottom corners of the box.

We can already see the flexibility our wind tunnel is providing us. Let us now put an object inside the wind tunnel and observe the behavior of the flow. For this example, use a circular object:

This is the same as flow in a channel (our first example), but with an object placed in the channel. Notice that there are now two length scales, *d* and *H*. The choice of the characteristic length, though arbitrary, must tie back to some aspect of the physics of the flow. In this example, if the size of the object were increased or decreased, then the flow pattern behind it would be expected to change. Therefore, the natural choice is to use *d* as the characteristic length.

Let us place the cylinder at (3,0) in the domain. Let the size of the cylinder be 1 length unit. Therefore, the characteristic scale will be 1. Let the domain size be (0, 15) × (–2, 2). The object is specified as a `ParametricRegion`:

Remove[state];
state = WindTunnelInitialize[{200, 1, 1}, Function[{x, y}, {0, 0}],
  {x, 0, 15}, {y, -2, 2}, t, "CharacteristicLatticePoints" -> 15,
  "ObjectsInTunnel" -> {ParametricRegion[{3 + Cos[s]/2, Sin[s]/2}, {{s, 0, 2 Pi}}]}]

It is a good idea to visualize the tunnel before starting the simulation, to make sure the object is in the correct position:

ListLinePlot[state["ObjectsInTunnel"], PlotRange -> {{0, 14}, {-2, 2}},
 AspectRatio -> Automatic, Axes -> False, Frame -> True, ImageSize -> Medium,
 PlotLabel -> StringForm["GridPoints: ``", Reverse@state["GridPoints"]]]

Let us simulate the flow for 10 time units:

WindTunnelIterate[state, 10];
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
Rasterize@Show[
  LineIntegralConvolutionPlot[{{usol[x, y], vsol[x, y]}, {"noise", 300, 400}},
   {x, 0, 15}, {y, -2, 2}, AspectRatio -> Automatic, ImageSize -> Medium,
   PlotRangePadding -> None, LineIntegralConvolutionScale -> 2,
   ColorFunction -> "RoseColors"],
  ListLinePlot[state["ObjectsInTunnel"], PlotStyle -> {{Thickness[0.005], Black}}]]

There are two things to notice here: the symmetric pair of vortices behind the cylinder and the flow *inside* the cylinder. A close-up reveals that there is some flow pattern inside the cylinder as well:

Show[StreamPlot[{usol[x, y], vsol[x, y]}, {x, 2.4, 3.6}, {y, -1/2, 1/2},
  AspectRatio -> Automatic, ImageSize -> Medium, PlotRangePadding -> None],
 ListLinePlot[state["ObjectsInTunnel"], PlotStyle -> {{Thickness[0.01], Blue}}]]

This behavior is because we are making use of the IBM. As mentioned earlier, the IBM computes a set of forces to be applied at the grid points such that the velocity at the boundary points representing the surface is 0. It does not specify what needs to happen inside the cylinder. Therefore, the flow being incompressible, a flow pattern exists inside the cylinder as well. The important thing is that the velocities at the boundaries of the object are 0 (no-slip).

Let us now continue to iterate for 30 time units and see what happens to the pattern behind the cylinder. Sometimes, it can be helpful to look at another variable called vorticity to get a better understanding of what is happening:

WindTunnelIterate[state, 30];
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];

Set up the color scheme for the contours:

cc = N@Range[-3, 3, 4/100];
cc = DeleteCases[cc, x_ /; -0.4 <= x <= 0.4];
cname = "VisibleSpectrum";
cdata = ColorData[cname];
crange = ColorData[cname, "Range"];
cMinMax = {Min[cc], Max[cc]};
colors = cdata[Rescale[#, cMinMax, crange]] & /@ cc;

Visualize the vorticity:

Remove[vort];
vort = D[usol[x, y], y] - D[vsol[x, y], x];
Rasterize@Show[
  ContourPlot[vort, {x, 0, 15}, {y, -2, 2}, AspectRatio -> Automatic,
   ImageSize -> 500, Contours -> cc, ContourShading -> None,
   ContourStyle -> colors, PlotRange -> {{0, 15}, {-2, 2}, All}],
  Graphics[Polygon[state["ObjectsInTunnel"]]]]

We now notice that the symmetric pattern has been destroyed and replaced by a “wavy” behavior, which the vorticity contours show clearly. What we see here is an instability in the wake of the cylinder. This instability continues to amplify, and eventually vortices start forming behind the cylinder. This phenomenon is called “vortex shedding.” There is a shear layer generated at the surface of the cylinder that gets carried downstream.

This vortex shedding is also dependent on the Reynolds number. For small enough Reynolds numbers, we don’t get any shedding. However, at Reynolds numbers of around 100–150, shedding is observed. To properly observe this phenomenon, it would be good to see the time evolution of the flow. As a first step, set up the problem by defining the characteristic terms and the objects in the tunnel:

state = WindTunnelInitialize[{200, 1, 1}, Function[{x, y}, {0, 0}],
   {x, 0, 15}, {y, -2, 2}, t, "CharacteristicLatticePoints" -> 15,
   "ObjectsInTunnel" -> {ParametricRegion[{3 + Cos[s]/2, Sin[s]/2}, {{s, 0, 2 Pi}}]}];

To produce a time evolution of the vorticity, we will extract the solution at each time unit and generate a series of plots:

cc = N@Range[-5, 5, 10/200];
cc = DeleteCases[cc, x_ /; -0.5 <= x <= 0.5];
cname = "VisibleSpectrum";
cdata = ColorData[cname];
crange = ColorData[cname, "Range"];
cMinMax = {Min[cc], Max[cc]};
colors = cdata[Rescale[#, cMinMax, crange]] & /@ cc;
res = Table[
   WindTunnelIterate[state, t];
   {usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
   vort = D[usol[x, y], y] - D[vsol[x, y], x];
   plot = Show[
     ContourPlot[vort, {x, 0, 15}, {y, -2, 2}, AspectRatio -> Automatic,
      ImageSize -> Medium, Contours -> cc, ContourShading -> None,
      ContourStyle -> colors, PlotRange -> {{0, 15}, {-2, 2}, All}],
     Graphics[Point /@ state["ObjectsInTunnel"]]];
   Rasterize[plot], {t, 0, 50, 1}];

Running the simulation clearly shows two vortices forming behind the cylinder, with the shear layer slowly getting perturbed; the perturbation grows in amplitude before finally breaking into vortex shedding:

```wolfram
ListAnimate[res, DefaultDuration -> 10, AnimationRunning -> False]
```

For our next example, we will exploit the immersed boundary treatment and “immerse” a circular tank inside our wind tunnel. The boundary of the tank will have zero velocity. Inside this tank, we will immerse an elliptical object, placed near the tank wall, that follows the tank boundary in a circular path. The lattice Boltzmann method with an immersed boundary gives us great flexibility with moving objects. The objective is to study what kind of disturbances develop when this object moves through a still fluid.

Set up the problem by defining the characteristic terms. In this case, the simulation will be performed at a Reynolds number of 400. The characteristic length and velocity are specified as unity. There are two objects in the tunnel. The first object is the large circular tank that is held stationary; the second is the elliptical object that will be moving inside this tank:

```wolfram
Remove[state];
state = WindTunnelInitialize[{400, 1, 1}, Function[{x, y}, {0, 0}],
  {x, -2.2, 2.2}, {y, -2.2, 2.2}, t,
  "CharacteristicLatticePoints" -> 25,
  "TunnelBoundaryConditions" -> {"Left" -> "NoSlip", "Right" -> "NoSlip",
    "Top" -> "NoSlip", "Bottom" -> "NoSlip"},
  "ObjectsInTunnel" -> {
    {ParametricRegion[{1.3 + 0.2*Sin[s], 0.5*Cos[s]}, {{s, 0, 2 Pi}}],
     Function[{xb, yb, t}, {-yb, xb}]},
    {ParametricRegion[{2*Sin[s], 2*Cos[s]}, {{s, 0, 2 Pi}}]}}]
```

As always, it is a good idea to check the geometry of the underlying problem. We can do that by simply extracting the discretized objects after initialization; we can see that everything is where it is supposed to be:

```wolfram
Graphics[Map[Line, state["ObjectsInTunnel"]], Frame -> True, ImageSize -> Small]
```

As we did before, we will be looking at the vorticity contours of the flow. Let us first define the color scheme and the levels of contours that will be plotted:

```wolfram
cc = N@Range[-7, 7, 14/100];
cc = DeleteCases[cc, x_ /; -0.1 <= x <= 0.1];
cname = "VisibleSpectrum";
cdata = ColorData[cname];
crange = ColorData[cname, "Range"];
cMinMax = {Min[cc], Max[cc]};
colors = cdata[Rescale[#, cMinMax, crange]] & /@ cc;
```

The simulation is now run for 60 time units:

```wolfram
oreg = RegionPlot[x^2 + y^2 >= 2^2, {x, -2.2, 2.2}, {y, -2.2, 2.2},
   PlotStyle -> Black];
AbsoluteTiming[
 res = Table[
    WindTunnelIterate[state, tt];
    {usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
    vort = D[usol[x, y], y] - D[vsol[x, y], x];
    Rasterize@Show[
      ContourPlot[vort, {x, -2, 2}, {y, -2, 2}, AspectRatio -> Automatic,
       ImageSize -> 300, Contours -> cc, ContourShading -> None,
       ContourStyle -> colors, PlotRange -> {{-2, 2}, {-2, 2}, All}],
      Graphics[Polygon[First[state["ObjectsInTunnel"]]]], oreg],
    {tt, 0, 60, 1/2}];]
```

Running the time evolution of the fluid disturbance shows that a very beautiful geometric pattern is formed within the tank initially before settling down to a more uniform circular disturbance:

```wolfram
ListAnimate[res, DefaultDuration -> 10, AnimationRunning -> False,
 ImageSize -> Automatic]
```

For the sake of curiosity (and fun), what kind of flow pattern would we expect for the following geometry?

Fluid enters the pipe from the right end, moves up the pipe and then gets discharged from the left end. What would be the effect of that stopper at the right end? How will it impact the discharge?

This is surprisingly easy to figure out with our current setup. Again, we just immerse the pipe and the obstacle within it into our wind tunnel. The left, right and top boundaries of the wind tunnel are given a zero-velocity condition. The bottom boundary is given an outflow condition for –1 ≤ *x* ≤ –0.7, a zero-velocity condition for –0.7 ≤ *x* ≤ 0.7 and a parabolic velocity profile for 0.7 ≤ *x* ≤ 1:

```wolfram
Remove[state];
inletVel = Fit[{{7/10, 0}, {17/20, 1}, {1, 0}}, {1, x, x^2}, x];
state = WindTunnelInitialize[{500, 0.3, 1}, Function[{x, y}, {0, 0}],
  {x, -1.1, 1.1}, {y, 0, 1.1}, t,
  "CharacteristicLatticePoints" -> 20,
  "TunnelBoundaryConditions" -> {"Left" -> "NoSlip", "Right" -> "NoSlip",
    "Top" -> "NoSlip",
    "Bottom" -> Function @@ List[{x, y, t},
       If @@ List[0.7 <= x <= 1., {0, inletVel},
         If[-1 <= x <= -0.7, "Outflow", {0, 0}]]]},
  "ObjectsInTunnel" -> {
    ImplicitRegion[0.7 <= (x^4 + y^4)^(1/4) <= 1, {{x, -1, 1}, {y, -0.2, 1}}],
    ParametricRegion[{0.22 + t, t - 0.2}, {{t, 0.55, 0.7}}]}]
```

Let us run it for 40 time units:

```wolfram
ProgressIndicator[Dynamic[state["CurrentTime"]], {0, 40}]
AbsoluteTiming[WindTunnelIterate[state, 40]]
```

Let us plot the vorticity:

```wolfram
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
vort = D[usol[x, y], y] - D[vsol[x, y], x];
cc = N@Range[-20, 20, 40/50];
cc = DeleteCases[cc, x_ /; -0.1 <= x <= 0.1];
cname = "VisibleSpectrum";
cdata = ColorData[cname];
crange = ColorData[cname, "Range"];
cMinMax = {Min[cc], Max[cc]};
colors = cdata[Rescale[#, cMinMax, crange]] & /@ cc;
Rasterize@Show[
  ContourPlot[vort, {x, -1, 1}, {y, 0, 1}, AspectRatio -> Automatic,
   ImageSize -> Medium, Contours -> cc, ContourShading -> None,
   ContourStyle -> colors, PlotRange -> {{-1, 1}, {0, 1}, All},
   RegionFunction -> Function[{x, y}, 0.7 <= (x^4 + y^4)^(1/4) <= 1]],
  Graphics[Point /@ state["ObjectsInTunnel"]]]
```

We see that the obstacle/stopper introduces a vortex shedding, which travels down the pipe. Let us look at the velocities at *y* = 0:

```wolfram
Plot[vsol[x, 0], {x, -1, 1}, PlotRange -> {{-1, 1}, {-1, 1}}, ImageSize -> Medium]
```

If we compare the velocity profiles at the outlet (on the left) and the inlet (on the right), we see that the outlet velocity is almost half the inlet velocity. This is compelling evidence that the stopper has reduced the fluid discharge from the left end of the pipe, as we would expect.
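We can also make this comparison quantitative. As a rough sketch (assuming the `vsol` interpolation extracted above is still in scope), we can integrate the vertical velocity along the inlet and outlet segments of the bottom boundary and compare the volume fluxes:

```wolfram
(* volume flux through the inlet (0.7 <= x <= 1) and the
   outlet (-1 <= x <= -0.7) along the bottom boundary y = 0 *)
qin = NIntegrate[vsol[x, 0], {x, 7/10, 1}];
qout = NIntegrate[vsol[x, 0], {x, -1, -7/10}];
Abs[qout/qin]
```

A ratio noticeably below 1 would confirm the reduced discharge seen in the velocity plot.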

As a final example, let us look at the flow around an airfoil. An airfoil is the cross-section of an airplane wing, and it is what actually allows an airplane to lift off the ground. There are many types of airfoils, but we will focus on a simple one described by the parametric equation {*x*(*t*), *y*(*t*)} = {*t*², 0.2(*t* – *t*³ + (*t*² – *t*⁴)/*b*)/*a*} for –1 ≤ *t* ≤ 1. The parameter *a* controls how thick the airfoil is, and the parameter *b* controls its curvature:

```wolfram
Clear[mat, a, b, t, AOA];
Manipulate[
 mat = {{Cos[AOA Degree], -Sin[AOA Degree]}, {Sin[AOA Degree], Cos[AOA Degree]}};
 ParametricPlot[mat.{t^2, 0.2 (t - t^3 + (t^2 - t^4)/b)/a}, {t, -1, 1},
  AspectRatio -> Automatic, ImageSize -> Medium,
  PlotRange -> {{0, 1}, {-0.2, 0.5}}],
 {{a, 1}, 0.1, 10}, {{b, 0.9}, 0.3, 10},
 {{AOA, 0, "Angle of Attack"}, -20, 20}]
```

In order for the aircraft to get “lift,” i.e. be able to get off the ground, the top surface of the airfoil should have a lower pressure distribution than the bottom surface. This pressure difference causes the wing to lift upward (along with anything attached to it), and it is achieved by having wind blow over the surface at significantly high speeds. A second consideration is that the wing generally needs to be tilted, i.e. have an “angle of attack”; this ensures greater lift. We will give our airfoil a –10° angle of attack. The simulation will be run at a Reynolds number of 1,000. Now, I should point out that a Reynolds number of 1,000 is a rather small value; a typical Reynolds number for a small aircraft is around 1 million. A full-scale simulation is just not possible on a laptop because of the large grid size required. However, even at 1,000, we should get a good understanding of the underlying dynamics. For this example, a uniform flow field comes in from the left, the top and bottom tunnel boundaries are set to be periodic, and the right boundary is set to an outflow. The characteristic length here will be the thickness of the airfoil:

```wolfram
state = WindTunnelInitialize[{1000, 0.2, 1}, Function[{x, y}, {0, 0}],
  {x, -2, 6}, {y, -1., 1.}, t,
  "CharacteristicLatticePoints" -> 20,
  "CharacteristicLatticeVelocity" -> 0.05,
  "TunnelBoundaryConditions" -> {"Left" -> Function[{x, y, t}, {1, 0}],
    "Right" -> "Outflow", "Top" -> "Periodic"},
  "ObjectsInTunnel" -> {ParametricRegion[
     {{Cos[-10 Degree], -Sin[-10 Degree]},
       {Sin[-10 Degree], Cos[-10 Degree]}}.
      {t^2, 0.2 (t - t^3 + (t^2 - t^4)/0.9)/1}, {{t, -1, 1}}]}]
```

Before starting the simulation, extract the discretized object and check that it is in the appropriate location within the wind tunnel:

```wolfram
ListLinePlot[state["ObjectsInTunnel"], PlotRange -> Evaluate[state["Ranges"]],
 AspectRatio -> Automatic, Axes -> False, Frame -> True, ImageSize -> 400,
 PlotLabel -> StringForm["GridPoints: ``", Reverse@state["GridPoints"]]]
```

Notice the large number of grid points. This is because we are allowing 20 lattice points to resolve the thin airfoil. We now run the simulation for 10 time units. This takes a while because: (a) the resolution needed for this simulation is quite large (800×200 grid points); and (b) completing the simulation requires 20,000 iterations:

```wolfram
10/state["TimeStep"]
```

Start the iteration process:

```wolfram
AbsoluteTiming[WindTunnelIterate[state, 10]]
```

Let us first look at the vorticity plot:

```wolfram
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
vort = D[usol[x, y], y] - D[vsol[x, y], x];
cc = N@Range[-15, 15, 30/60];
cc = DeleteCases[cc, x_ /; -0.1 <= x <= 0.1];
cname = "VisibleSpectrum";
cdata = ColorData[cname];
crange = ColorData[cname, "Range"];
cMinMax = {Min[cc], Max[cc]};
colors = cdata[Rescale[#, cMinMax, crange]] & /@ cc;
Show[
 ContourPlot[vort, {x, -0.5, 5}, {y, -1, 1}, AspectRatio -> Automatic,
  ImageSize -> 500, Contours -> cc, ContourShading -> None,
  ContourStyle -> colors, PlotRange -> {{-0.5, 5}, {-1, 0.5}, All}],
 Graphics[{FaceForm[White], EdgeForm[Black],
   Polygon[state["ObjectsInTunnel"][[1]]]}]]
```

Just as in the case of the bluff body, we are seeing vortex shedding. For the case of the airfoil, this is not really a desirable property. We ideally want the flow to hug the surface. When the flow separates (as you see on the top surface of the airfoil), the pressure drop is not achieved properly and the airfoil will be unable to generate lift.

Let us now look at the pressure. Rather than plotting the pressure itself, we will plot a non-dimensional parameter called the pressure coefficient, defined by *C _{p}* = 2(*p* – *p*_∞)/(ρ*U*²), where *p*_∞ is the free-stream pressure (sampled here at the tunnel inlet), ρ is the fluid density (unity in lattice units) and *U* is the characteristic velocity:

```wolfram
(* assuming the state also exposes a pressure interpolation under "P",
   analogous to the "U" and "V" extractions above *)
psol = "P" /. state[state["CurrentTime"]];
PressureCoefficient[x_?NumericQ, y_?NumericQ] :=
 (psol[x, y] - psol[-2, 0])/(0.5*state["InternalVelocity"]^2)
```

```wolfram
(* surface points of the airfoil (the first object in the tunnel) *)
objs = state["ObjectsInTunnel"][[1]];
pp = Apply[psol, objs, 1];
pp = (pp - psol[-2, 0])/(0.5*state["InternalVelocity"]^2);
ListPlot[Transpose[{objs[[All, 1]], pp}], PlotRange -> All, Axes -> False,
 Frame -> True,
 FrameLabel -> {"x \[Rule]", "\!\(\*SubscriptBox[\(C\), \(p\)]\)"},
 FrameStyle -> Directive[Black, 14], ImageSize -> Medium]
```

You will notice that there are two curves here. The lower curve represents the pressure on the top surface, while the upper curve represents the pressure on the bottom surface. It is clear that despite some flow separation from the airfoil, we are getting a pressure difference. We can also plot the pressure-coefficient contours and visualize them near the airfoil:

```wolfram
Show[
 Quiet@ContourPlot[PressureCoefficient[x, y], {x, -0.5, 1.5}, {y, -0.4, 0.4},
   AspectRatio -> Automatic, PlotRangePadding -> None,
   ColorFunction -> "TemperatureMap", Contours -> 40,
   PlotLegends -> Automatic, PlotRange -> All, ImageSize -> Medium],
 Graphics[{FaceForm[White], EdgeForm[Black],
   Polygon[state["ObjectsInTunnel"][[1]]]}]]
```

If you look carefully at the color scheme, you will see that the pressure on the top surface is indeed lower than on the bottom surface. So perhaps there is hope for this airfoil. The fluid-dynamic property we have just explored is the Bernoulli principle, which has applications in aviation (as we have seen here) and in fields such as automotive engineering.

This is just the start—there are many more examples you can try out! What we have discussed here is a good place to begin exploring this alternative approach to studying fluid dynamics problems and their implementation in Mathematica. The LBM combined with the IBM is a good tool for anyone interested in studying and analyzing fluid flows. With the help of Mathematica’s built-in functions, putting together the numerical wind tunnel is quite straightforward. The WindTunnel2DLBM package has helped me explore many fascinating concepts in the field of fluid dynamics (and make stunning visualizations). I hope you too will get inspired and dive into the exploration of fluid-flow phenomena.

Get full access to the latest Wolfram Language functionality with a Mathematica 12 or Wolfram|One trial.

Microsoft Excel is among the most popular tools in the world. For non-technical and advanced users aspiring to extend beyond Excel’s built-in feature set, we’re proud to announce the easiest and most productive tool for doing so: Wolfram CloudConnector for Excel, now available to anyone running Excel on a Windows system. You can access the advanced computational power of the Wolfram Language for your data directly from your spreadsheets.

We’ve made it easy for anyone to get started with CloudConnector for Excel via the `Wolfram` Excel function, which lets you call native Wolfram Language code directly in an Excel cell. For example, `CurrentDate` in Excel gets you today’s date:

`=Wolfram("CurrentDate")`

We can also add additional parameters to the `Wolfram` function; these values get slotted into the expression. For example, `RandomWord` can take a number as an additional parameter, which generates that many words:

```wolfram
RandomWord[5]
```

So in Excel, we can write:

`=Wolfram("RandomWord",5)`

Even though we have written this in a single Excel cell, the output fills out into a column.

CloudConnector automatically converts the Wolfram Language output to be displayed in Excel. If you wanted to get all the business days for the next 10 days, you would use the function `DayRange`:

```wolfram
DayRange[Now, DayPlus[Now, 10], "BusinessDay"]
```

Now let’s run this expression in a spreadsheet with the `Wolfram` function, applying quotes and escaped quotes as necessary. We add an “`&`” on the end to make the expression a pure function:

`=Wolfram("DayRange[Now, DayPlus[Now, 10], ""BusinessDay""]&")`

Notice how the dates are converted to an Excel format. This is an example of the automatic conversion from the Wolfram Language.

You can also pass data stored in a spreadsheet cell as an argument to this function:

`=Wolfram("DayRange[Now, DayPlus[Now, #], ""BusinessDay""]&", B2)`

Any update to the cell being used as an argument (in this case, B2) will trigger recalculation of the formula in Excel.

Often you will want to store the code outside the spreadsheet, either because you don’t want users to see or edit it, or because you want to be able to push centralized updates to many users simultaneously. Deploying the code as an API and then calling it from the spreadsheet addresses this need.

Converting the Wolfram Language code we had before into an `APIFunction` only requires some minor changes:

```wolfram
CloudDeploy[
 APIFunction[{"x" -> "Integer"},
  DayRange[Now, DayPlus[Now, #x], "BusinessDay"] &,
  AllowedCloudExtraParameters -> All],
 "MyDayRangeFunction", Permissions -> "Public"]
```

There is one parameter, `"x"`, that takes an integer. The `APIFunction` is deployed as a `CloudObject` named `MyDayRangeFunction`.

An Excel user can access this API with the `WolframAPI` function:

`=WolframAPI("MyDayRangeFunction",Parameter("x",C2))`

This formula evaluates entirely in the cloud. The source code is never seen by the caller of the API.

With additional knowledge of the Wolfram Language, you can develop powerful Wolfram APIs that would normally require a long and tedious development process in other systems.

Here, an `APIFunction` calculates the shortest tour of the 20 largest cities in a country and displays the result:

```wolfram
CloudDeploy[
 APIFunction[{"Location" -> "Country"},
  Module[{cities, tour},
    cities = EntityClass["City", {
       EntityProperty["City", "Country"] -> #Location,
       EntityProperty["City", "Population"] -> TakeLargest[20]}];
    tour = FindShortestTour[GeoPosition[cities]][[2]];
    GeoListPlot[cities[[tour]], GeoLabels -> True, Joined -> True]] &,
  AllowedCloudExtraParameters -> All],
 "ShortestTourFunction", Permissions -> "Public"]
```

This is an `APIFunction` that takes a single parameter, `"Location"`, which must be a `"Country"`. It can be called from Excel just as before (the cell reference `C2` here is illustrative):

`=WolframAPI("ShortestTourFunction",Parameter("Location",C2))`

Notice how the Excel formula creates an image. This is specialized functionality built for CloudConnector that allows you to make changes to spreadsheet values to trigger updates to this image. With this short amount of code, you can connect your spreadsheets to the full computational power of the Wolfram Language.

Wolfram CloudConnector for Excel creates a user-focused, feature-filled link from Excel to the Wolfram Cloud, while cutting developer time for more advanced computation. CloudConnector is a free plugin that runs on Excel for Windows. It does not require any additional Wolfram technology to be installed—just an appropriate Wolfram Cloud Account to create APIs. CloudConnector for Excel is available now for use in both the Wolfram Cloud and Wolfram Enterprise Private Cloud.

We began this week with pre-conference training on topics from machine learning and neural networks to application building and “Computational X,” offering headquarters tours and an opening reception before the “real” conference even began. Monday’s opening keynote by CEO Stephen Wolfram covered a ton of ground, from a Version 12 recap to a roadmap of things to come. True to tradition, Stephen uncovered bugs in pre-release versions of our software, livecoded examples and gave the audience so much to look forward to.

This year marked the 30th keynote address by Stephen Wolfram. In addition to this three-decade milestone, we had our first livestreamed keynote. Stephen touched on the 12.0 release in April as well as the progress toward 12.1 since then, discussed major accomplishments in the Wolfram Cloud and Wolfram Notebooks, and gave a sneak peek of some fascinating projects to come. Here are some highlights:

- The Wolfram Notebook Archive was introduced as a way to preserve live, interactive computational notebooks in perpetuity, especially in conjunction with the major improvements in cloud notebook publishing.
- Brand-new course content is coming to Wolfram U, including the release of an interactive image processing course to debut next week. Also coming to Wolfram U are integrated course-authoring tools and a new math and science initiative.
- The recent release of Wolfram|Alpha Notebook Edition allows any student to build up and work through computations right inside a Wolfram Notebook, just as they would in Wolfram|Alpha.
- The release of 12.1 in the next few months will include dozens of new functions, as well as higher DPIs for Windows and an increase in workflows crosslinked to other documentation. Additionally, 12.1 will feature a new Standard Notebook toolbar.
- Work toward incremental quality development will be prioritized in order to tackle many lower-level bugs that still exist.
- A Wolfram Citings page will be added to Wolfram Community in order to share the unique creations people are making with Wolfram products. Sharing is also easier than ever with new functionality allowing users to embed notebooks in Community posts.
- A new initiative to find a fundamental theory of physics using computational technology has been launched. A team has been assembled to work toward this goal, and Stephen will be livestreaming his search for the fundamental theory of physics.

… and, of course, much more! So be sure to check out the keynote in its entirety for Stephen’s account of the past year, as well as the path ahead.

Each year at the Wolfram Technology Conference, Stephen recognizes a number of outstanding individuals whose work with the Wolfram Language has been exemplary. Congratulations to the 11 winners of the 2019 Wolfram Innovator Award:

- **Thomas Burghardt, PhD**, Mayo Clinic Rochester: for the application of neural networks constructed with machine intelligence tools in the study of inheritable heart disease.
- **Todd Feitelson**, Millbrook School: for innovative educational techniques utilizing computational thinking and 3D printing in high-school classrooms.
- **Chris Hanusa, PhD**, CUNY Queens College: for creating tools to advance the visualization of concepts in the classroom through computational technology.
- **Joo-Haeng Lee, PhD**, Electronics and Telecommunications Research Institute: for developing a unique and powerful pixel-based color transition algorithm (PixelSwap) and his work in synthetic learning sets.
- **Casey Mulligan, PhD**, University of Chicago, Former Chief Economist for the White House Council of Economic Advisers: for his innovative work on automated economic reasoning, which can begin with purely qualitative assumptions.
- **Flip Phillips, PhD**, Skidmore College: for his work on the visual and haptic perception of two- and three-dimensional shapes, psychological aesthetics and cortical plasticity related to blindness and visual restoration.
- **Robert Rasmussen, PhD, and William “Kirk” Reinholtz**, NASA Jet Propulsion Laboratory: for optimizing the integration of mission operation systems and preserving consistent and accountable information throughout the operations processes.
- **Jane Shen-Gunther, MD, Colonel**, US Army, Brooke Army Medical Center: for automating data processing for DNA sequencing in gynecological oncology and HPV detection and integrating interactive visualizations into reporting structures.
- **Yehuda Ben-Shimol, PhD**, Ben-Gurion University of the Negev: for introducing thousands of students and fellow faculty to the use of computational thinking in communications systems engineering as well as contributions to the advancement of earthquake prediction.
- **Mihai Vidrighin, PhD**, PsiQuantum: for building comprehensive models of nonlinear and quantum optics to describe spontaneous parametric photon-pair generation and quantum optics circuits.

On Wednesday night, we held our annual Wolfram Livecoding Championship, a favorite special event now in its fourth year. Conference guests and internal enthusiasts alike competed to most accurately and most quickly answer seven Wolfram Language programming questions. This year, all contestants who earned at least a point on the questions were eligible for prizes, in addition to our first-, second- and third-place winners. Questions included:

- Which 10 people have the most Wolfram Language symbols named after them (eponyms)? Return the most common country of birth among them as a country entity.
- Evolve the rule 30 cellular automaton for 2,000 steps starting from a single 1 centered on a background of 0s.
- Given the string “4 – 2 + 3 + 4”, find all the parenthetical groupings of exactly two subexpressions at a time for which the final expression evaluates to a positive number.

Dubbed the “Chip and Flip Show,” the competition was hosted by our high-energy emcees, Chip Hurst and Flip Phillips, who kept the live audience engaged and laughing. We had an incredible first-place finish by Gerli Jogeva! She brought home the win with a commanding 14 points.

Rounding out the top three place finishers, Carlo Barbieri and Sander Huisman came in second and third. Congratulations to our winners, and thank you to all who participated in making this another impressive and truly fun year of livecoding!

If you didn’t catch the championship live, you can relive the excitement and creativity in our archived livestream video of the event.

Throughout the conference, our senior developers stopped by our Tech Talk booth to chat about what projects they’re excited about and what their teams are working hard on. We had a student poster session, demo booths for hands-on engagement, an expert panel, topical meet-ups and roundtables, another iteration of the infamous One-Liner Competition and so much more.

And that’s the end of the 2019 Wolfram Technology Conference. We want to thank all our presenters and attendees, because you are the ones who make our incredible community what it is. If you couldn’t make it out this year, we hope you’ve enjoyed the livestreamed portions of the conference—keep an eye out for all the archived videos coming soon to the Wolfram YouTube channel. We can’t wait to see what awaits us next year!

Mark your calendars now and save the date for our next Wolfram Technology Conference, October 6–9, 2020!

We’ll kick off the conference Monday evening with Stephen Wolfram’s keynote. Our CEO will talk about the state of Wolfram technology, sharing new directions and future goals for the company. Tune in on Twitch, Facebook Live or YouTube Live Monday, October 28 starting at 6pm CT to see this presentation live.

Throughout the week, we’ll also be streaming several Tech Talks featuring interviews with Wolfram employees about their latest projects. These talks give a taste of the topics discussed at the conference, from medical imaging to blockchain, as well as updates and previews of Wolfram Language and Wolfram SystemModeler features. You can catch these talks on YouTube Live or Facebook Live throughout Tuesday and Wednesday, October 29–30. In the meantime, you can take a look at last year’s talks:

On Wednesday evening, we’ll be holding our fourth annual Livecoding Championship, a chance for talented and creative Wolfram Language users to compete for various prizes by showing off their coding prowess. Contestants will test their abilities with challenges in visualization, audio processing and a range of other computational areas. Anyone at the conference is free to compete—and the turnout gets more diverse and impressive every year! Everyone else can check out Twitch, Facebook Live or YouTube Live at 9pm CT on Wednesday, October 30 to watch the competition (along with running commentary). For a preview of the event, check out last year’s competition:

These streaming events are only a small part of what the Wolfram Technology Conference has to offer. If you like what you see, consider joining us in person next time. Maybe you could be the next Livecoding Champion!

Join us at the Wolfram Technology Conference 2019 on Twitch, Facebook Live or YouTube Live.

We’ve been working towards it for many years, but now it’s finally here: an incredibly smooth workflow for publishing Wolfram Notebooks to the web, one that makes possible a new level of interactive publishing and computation-enabled communication.

You create a Wolfram Notebook—using all the power of the Wolfram Language and the Wolfram Notebook system—on the desktop or in the cloud. Then you just press a button to publish it to the Wolfram Cloud—and immediately anyone anywhere can both read and interact with it on the web.

It’s an unprecedentedly easy way to get rich, interactive, computational content onto the web. And—together with the power of the Wolfram Language as a computational language—it promises to usher in a new era of computational communication, and to be a crucial driver for the development of “computational X” fields.

When a Wolfram Notebook is published to the cloud, it’s immediately something people can read and interact with. But it’s more than that. Because if you press the Make Your Own Copy button, you’ll get your own copy of the notebook, which you can not only read and interact with, but also edit and do computation in, right on the web. And what this means is that the notebook becomes not just something you look at, but something you can immediately use and build on.

And, by the way, we’ve set it up so that anyone can make their own copy of a published notebook, and start using it; all they need is a (free) Cloud Basic account. And people with Cloud Basic accounts can even publish their own notebooks in the cloud, though if they want to store them long term they’ll have to upgrade their account. (Through the Wolfram Foundation, we’re also developing a permanent curated Notebook Archive for public-interest notebooks.)

There are lots of other important workflows too. On a computer, you can immediately download notebooks to your desktop, and run them there natively using the latest version of the Wolfram Player that we’ve made freely available for many years. You can also run notebooks natively on iOS devices using the Wolfram Player app. And the Wolfram Cloud app (on iOS or Android) gives you a streamlined way to make your own copy of a notebook to work with in the cloud.

You can publish a Wolfram Notebook to the cloud, and you can use it as a complete, rich webpage. But you can also embed the notebook inside an existing webpage, providing anything from a single (perhaps dynamically updated) graphic to a full interactive interface or embedded document.

And, by the way, the exact same technology that enables Wolfram Notebooks in the cloud also allows you to immediately set up Wolfram Language APIs or form interfaces, for use either directly on the web, or through client libraries in languages like Python and Java.
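As a minimal illustration of that last point (the deployment names `"SquareAPI"` and `"SquareForm"` here are made up for the example), a one-line `APIFunction` deployed to the cloud becomes a web-accessible endpoint, and swapping in `FormFunction` gives a web form instead:

```wolfram
(* deploy a trivial API; calling its URL with a parameter x returns x^2 *)
CloudDeploy[APIFunction[{"x" -> "Number"}, #x^2 &], "SquareAPI"]

(* the same computation exposed as a web form with an input field *)
CloudDeploy[FormFunction[{"x" -> "Number"}, #x^2 &], "SquareForm"]
```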

We invented notebooks in 1988 as the main interface for Mathematica Version 1.0, and over the past three decades, many millions of Wolfram Notebooks have been made. Some record ongoing work, some are exercises, and some contain discoveries small and large. Some are expositions, presentations or online books and papers. Some are interactive demonstrations. And with the emergence of the Wolfram Language as a full-scale computational language, more and more now serve as rich computational essays, communicating with unprecedented effectiveness in a mixture of human language and computational language.

Over the years, we’ve progressively polished the notebook experience with a long series of user interface innovations, adapted and optimized for successive generations of desktop systems. But what’s allowed us now to do full-scale notebook publishing on the web is that—after many years of work—we’ve managed to get a polished version of Wolfram Notebooks that run in the cloud, much as they do on desktop.

Create a notebook on the desktop or in the cloud, complete with all its code, hierarchical sections, interactive elements, large graphics, etc. When it’s published as a cloud notebook people will be able to visit it just like they would visit any webpage, except that it’ll automatically “come alive” and allow all sorts of interaction.

Some of that interaction will happen locally inside the web browser; some of it will automatically access servers in the cloud. But in the end—reflecting our whole hyperarchitecture approach—Wolfram Notebooks will run seamlessly across desktop, cloud and mobile. Create your content once, and let people not only read it anywhere, but also interact with it, as well as modify and compute with it.

When you first go to a Wolfram Notebook in the cloud it might look like an ordinary webpage. But the fact that it’s an active, computational document means there are lots of things you can immediately do with it. If you see a graphic, you’ll immediately be able to resize it. If it’s 3D, you’ll be able to rotate it too. Notebooks are typically organized in a hierarchy of cells, and you can immediately open and close groups of cells to navigate the hierarchy.

There can also be dynamic interactive content. In the Wolfram Language, functions like `Manipulate` automatically set up interactive user interfaces in notebooks, with sliders and so on—and these are automatically active in a published cloud notebook. Other content can be dynamic too: using functions like `Dynamic` you can for example dynamically pull data in real time from the Wolfram Knowledgebase or the Wolfram Data Drop or—if the user allows it—from their computer’s camera or microphone.
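As a small example of the kind of interactive content that survives publishing (the function plotted is arbitrary), a one-line `Manipulate` produces a slider-driven plot that remains live in the published cloud notebook:

```wolfram
(* a slider controlling the frequency of a sine curve;
   the slider stays interactive in the published notebook *)
Manipulate[
 Plot[Sin[a x], {x, 0, 2 Pi}, PlotRange -> {-1, 1}],
 {a, 1, 5}]
```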

When you write a computational essay you typically want people to read your Wolfram Language code, because it’s part of how you’re communicating your content. But in a Wolfram Notebook you can also use `Iconize` to just show an iconized version of details of your code (like, say, options for how to display graphics):
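For instance (a hypothetical example, relying on the standard behavior of `Iconize`), the display options of a plot can be collapsed into a single icon that still evaluates exactly as if the options were written out in full:

```wolfram
(* The iconized option list displays as a small icon in the notebook,
   but evaluates just like the underlying list of options *)
Plot[Sin[x], {x, 0, 2 Pi},
 Iconize[{PlotStyle -> Thick, Frame -> True, PlotTheme -> "Detailed"}]]
```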

Normally when you do a computation in a Wolfram Notebook, there’ll be a succession of `In[ ]` and `Out[ ]` cells. But you can always double-click the `Out[ ]` cell to close the `In[ ]` cell, so people at first just see the output, and not the computational language code that made it.

One of the great things about the Wolfram Language is how integrated and self-contained it is. And that means that it’s realistic to pick up even fragments of code from anywhere in a notebook, and expect to have them work elsewhere. In a published notebook, just click a piece of code and it’ll get copied so you can paste it into a notebook you’re creating, on the cloud or the desktop.

A great source of “ready-made” interactive content for Wolfram Notebooks is the 12,000+ interactive Demonstrations in the Wolfram Demonstrations Project. Press Copy to Clipboard and you can paste the Demonstration (together with closed cells containing its code) into any notebook.

Once you’ve assembled the notebook you want, you can publish it. On the desktop, go to File > Publish to Cloud. In the cloud, just press Publish. You can either specify the name for the published notebook—or you can let the system automatically pick a UUID name. But you can take any notebook—even a large one—and very quickly have a published version in the cloud.

It didn’t take long after we invented notebooks back in 1988 for me to start thinking about using them to enable a new kind of computational publishing, with things like computational journals and computational books. And, indeed, even very early on, there started to be impressive examples of what could be done.

But with computation tied to the desktop, there was always a limit to what could be done. Even before the web, we invented systems for distributing notebooks as desktop files. Later, when web browsers existed, we built plugins to access desktop computation capabilities from within browsers. And already in the mid-1990s we built mechanisms for generating content through web servers from within webpages. But it was only with modern web technology and with the whole framework of the Wolfram Cloud that the kind of streamlined notebook publishing that we’re releasing today has become possible.

But given what we now have, I think there’s finally an opportunity to transform things like scientific and technical publishing—and to make them truly take advantage of the computational paradigm. Yes, there can be rich interactive diagrams, that anyone can use on the web. And, yes, things can be dynamically updated, for example based on real-time data from the Wolfram Knowledgebase or elsewhere.

But important as these things are, I think they ultimately pale in comparison to what Wolfram Notebooks can do for the usability and reproducibility of knowledge. Because a Wolfram Notebook doesn’t just give you something to read or even interact with; it can also give you everything you need to actually use—or reproduce—what it says.

Either directly within the notebook, or in the Wolfram Data Repository, or elsewhere in the cloud, there can for example be underlying data—say from observations or experiments. Then there can be code in the notebook that computes graphics or other outputs that can be derived from this data. And, yes, that code could be there just to be there—and could be hidden away in some kind of unreadable computational footnote.

But there’s something much more powerful that’s now uniquely possible with the Wolfram Language as it exists today: it’s possible to use the language not just to provide code for a computer to run, but also to express things in computational language in a way that not just computers, but also humans, can readily understand. Technical papers often use mathematical notation to succinctly express mathematical ideas. What we’ve been working toward all these years with the Wolfram Language is to provide a full-scale computational language that can also express computational ideas.

So let’s say you’ve got a technical paper that’s presented as a Wolfram Notebook, with lots of its content in the Wolfram Language. What can you do with it? You can certainly run the computational language code to make sure it produces what the paper says. But more important, you can take pieces of that computational language code and build on it, using it yourself in your own notebook, running it for different cases, modifying it, etc.

Of course, the fact that this can actually work in practice is incredibly nontrivial, and relies on a huge amount of unique development that we’ve done. Because first and foremost, it requires a coherently designed, full-scale symbolic computational language—because that’s the only way it’s realistic to be able to take even small fragments of code and have them work on their own, or in different situations. But there’s more too: it’s also critical that code that works now goes on working in the future, and with the design discipline we’ve had in the Wolfram Language we have an impressive history of compatibility spanning more than 30 years.

Back in the 1970s when I started writing technical papers, they typically began as handwritten documents. Later they were typed on a typewriter. Then when a journal was going to publish them, they would get copyedited and typeset, before being printed. It was a laborious—and inevitably somewhat expensive—process.

By the 1980s, personal computers with word processors and typesetting systems were becoming common—and pretty soon journals could expect “camera-ready” electronic versions of papers. (As it happens, in 1986 I started what may have been the first journal to routinely accept such things.)

And as the technology improved, the quality of what an author could readily make and what a publisher could produce in a fully typeset journal gradually converged, leaving branding and selectivity as the journal’s primary role, and for many people calling its value into question.

But for computational journals it’s a new story. Because if a paper has computational language code in it, there’s the immediate question of whether the code actually runs, and runs correctly. It’s a little like the old process of copyediting a paper so it could be typeset. There’s real human work—and understanding—that’s needed to make sure the code runs correctly. The good news is that one can use methods from software quality assurance, now enhanced by things like modern machine learning. But there’s still real work to be done—and as a result there’s real value to be added by the process of “official publication” in a computational journal, and there’s a real reason to actually have a computational journal as an organized, potentially commercial, thing.

We’ve been doing review and curation of submissions to the Wolfram Demonstrations Project for a dozen years now. And, yes, it takes work. But the result is that we can be confident that the Demonstrations we publish actually run, and will go on doing so. For the Wolfram Data Repository we also have a review process, there to ensure that data is computable at an appropriate level.

One day there’ll surely be “first-run” computational journals, where new results are routinely reported through computational essays. But even before that, we can expect ancillary computational journals that provide genuine “computation-backed” and “data-backed” publication. There just hasn’t been the technology to make this work properly in the past. Now, with the Wolfram Language, and the new streamlined web publishing of Wolfram Notebooks, everything that’s needed finally exists.

It’s always a sign that something is important when it immediately changes the way one works. And that’s certainly something that’s happened for me with notebook publishing.

I might give a talk where I build up a notebook, say doing a live experiment or a demonstration. And then at the end of the talk, I’ll do something new: I’ll publish the notebook to the cloud (either by pressing the button or using `CloudPublish`). Then I’ll make a QR code of the notebook URL (say using `BarcodeImage`), and show it big on the screen. People in the audience can then hold up their phones to read the QR code—and then just click the URL, and immediately be able to start using my notebook in the Wolfram Cloud on their phones.
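The two steps described above can be sketched in a couple of lines (assuming this runs inside a cloud notebook, where `CloudPublish[]` with no arguments publishes the current notebook):

```wolfram
(* Publish the current cloud notebook, then render its URL as a QR code *)
obj = CloudPublish[];
BarcodeImage[First[obj], "QR", 300]
```

`CloudPublish` returns a `CloudObject` whose first argument is the published URL, which `BarcodeImage` turns into a scannable QR code.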

I can tell that notebook publishing is getting me to write more, because now I have a convenient way to distribute what I write. I’ll often do computational explorations of one thing or another. And in the past, I’d just store the notebooks I made in my filesystem (and, yes, over 30+ years I’ve built up a huge number). But now it’s incredibly fast to add some text to turn the notebooks into computational essays—that I can immediately publish to the cloud, so anyone can access them.

Sometimes I’ll put a link to the published notebook in a post like this; sometimes I’ll do something like tweet it. But the point is that I now have a very streamlined way to give people direct access to computational work I do, in a form that they can immediately interact with, and build on.

From a technical development point of view, the path to where we are today has been a long and complex one, involving many significant achievements in software engineering. But the result is something conceptually clear and simple, though extremely powerful—that I think is going to enable a major new level of computation-informed communication: a new world of notebook publishing.

*More about Wolfram Notebooks:*

Wolfram Notebooks Overview »

Wolfram Notebooks Interactive Course »

*To comment, please visit the copy of this post at Stephen Wolfram’s Writings »*

In this roundup of our recent Wolfram Community favorites, our talented users explore different methods of accessing, interpreting and representing data—creating some eye-catching results that offer new ways of looking at the world. We’re also excited to showcase a few projects from alumni of our annual Wolfram High School Summer Camp and Wolfram Summer School. Check out the culmination of their hard work, as well as how Community members find clever solutions using the Wolfram Language.

How can you decipher the scale of an area, or the size of an object within that area, if the area is shown using a satellite image? Though this is typically quite a challenging issue, as Earth’s terrain can look the same at different zoom levels, William Goodall makes it look easy by using the Wolfram Language. For his Summer Camp project, William produced an algorithm that uses feature extraction and a neural network to accurately predict map zoom.

The emblems representing the 2020 Summer Olympics and Paralympics combine checkered patterns used in Japanese culture (named *ichimatsu moyo* during the Edo period), indigo blue (a traditional Japanese color) and varieties of rectangular shapes (representing the diversity promoted by and found within the Olympic and Paralympic Games). Kotaro Okazaki, an inventor at Fujitsu Limited in Japan, explores the mathematics used to create the emblems.

After wondering which US cities have reported the highest crime rates, Diego Zviovich set out to uncover the answer—and found the Wolfram Language crucial in interpreting and analyzing the crime statistics data compiled by the FBI. Diego homed in on specific fields of interest and assembled them into a `Dataset`, finally creating a `BubbleChart` that gives an at-a-glance visualization of national crime rates.
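As a rough sketch of that workflow (the field names and numbers here are invented for illustration, not Diego’s actual data), one can assemble records into a `Dataset` and pass triples of values to `BubbleChart`:

```wolfram
(* Hypothetical crime records; bubble position gives population and
   violent crime, bubble size gives property crime *)
crime = Dataset[{
   <|"City" -> "A", "Population" -> 1000000, "Violent" -> 5000, "Property" -> 20000|>,
   <|"City" -> "B", "Population" -> 500000, "Violent" -> 1500, "Property" -> 9000|>}];
BubbleChart[Values /@ Normal[crime[All, {"Population", "Violent", "Property"}]]]
```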

Predicting urban development is essential for citizens, governments and companies. Ahmed Elbanna, a researcher and lecturer at Tanta University, Egypt, explores several different ways to collect satellite images, including manual collection and using the API from NASA’s website, to predict how a city can grow and change. Then to build the model, Ahmed uses the machine learning superfunction `Predict` to generate a new predicted binary image and add it to the list of binary images of the area under study.

Richard Lapin, a recent BSc economics graduate from University College London, created the `Cartogram` function in order to generate cartograms (geometrically distorted maps where the distortion conveys information derived from input data). With `Cartogram`, Richard made great use of the Wolfram Function Repository—a public resource of user-contributed Wolfram Language functions.

Since we announced the Free Wolfram Engine for Developers, more users than ever before have access to the power of Wolfram’s computational intelligence. One popular way to take advantage of the Wolfram Engine is to call it from a system like Jupyter. Seth Chandler, Wolfram Innovator Award winner and a Foundation Professor of Law at the University of Houston, shows us how by walking us through a comprehensive tutorial on installing a Wolfram Engine kernel for Jupyter.

Dev Chheda created a system capable of detecting three moods in human speech: angry, happy and sad. Dev determined the features most useful for detecting mood, extracting the amplitudes, fundamental frequencies and formant frequencies of each training clip. He then trained the system on labeled samples of emotional speech using `Classify`, allowing him to identify and label new audio recordings with the mood of the speaker.

Chord diagrams are an elegant way to represent interrelationships between variables. After searching for built-in capabilities or other user-provided answers and coming up short, George Varnavides built a chord diagram from scratch using the visualization capabilities of the Wolfram Language. In addition to sharing his process on Wolfram Community, George also made sure other users would be able to easily access his code by submitting `ChordDiagram` to the Wolfram Function Repository. Thanks, George!

In bioinformatics, DNA sequence alignment is an algorithm-based method of arranging sequences to find similarities between them. Jessica Shi, another Summer Camp student, used pairwise alignment (and a bit of inspiration from a Wolfram Demonstration published in 2011) to create a visual output representing sequence alignment.

If you haven’t yet signed up to be a member of Wolfram Community, please do so! You can join in on similar discussions, post your own work in groups that cover your interests and browse the complete list of Staff Picks. If you’re looking for more interesting work produced by our Summer Program alumni, visit our Wolfram Community groups for Summer Camp and Summer School projects.

Wolfram Virtual Labs are open educational resources in the form of interactive courseware that are used to explain different concepts in the classroom. Our ambition is to provide an easy way to study difficult concepts and promote student curiosity.

For this post, I spoke with Dr. Matteo Fasano about his experience with using Virtual Labs as a course complement in the master’s courses in which he acts as a teaching assistant. He also told me why and how he supported the Wolfram MathCore group to develop the CollegeThermal Virtual Labs (now available) and how they can help teachers or instructors make learning more engaging.

I am a postdoctoral researcher at Politecnico di Torino. I am a teaching assistant for five master’s courses on energy and thermal engineering. I was the recipient of the Young Researcher of the Year award from energy company Eni in 2017 for my research work “Heat and Mass Transfer of Water at Nanoscale Solid-Liquid Interfaces.”

It was in May last year when I—along with my colleagues—attended a Wolfram SystemModeler demo. During this demo, we got to know about an internal project at Wolfram called Virtual Labs. The idea was simple: these were a set of interactive computer exercises meant as a complement to teaching materials in which you explore a specific topic using system models, either by creating new models to describe the topic or by interacting with pre-built models. It was planned to be distributed as an open educational resource to teachers, students and anyone else wishing to learn about a subject.

I was intrigued by this concept and started correspondence with your team. I sent some exercises from my course material to check if it was possible to prepare models for these case studies, and if they could be modeled as a Virtual Lab with interactive instructions for students. On first review it looked doable; however, because of the length of the content, we proposed splitting it into two Virtual Labs, and these are now available to everyone as the CollegeThermal library.

We tried to make sure that the content can be followed by anyone with basic thermodynamic knowledge.

Ever wondered what the thermal power of the radiator in your house should be to guarantee thermal comfort, or how the ambient conditions affect your room temperature? To answer these questions, it is important to understand the thermal behavior of different components in your house. The Room Heating lab has the ambition to model the most significant components of a house and combine them to see how your room temperature changes with ambient temperature fluctuations.

We first begin by observing the thermal behavior of a multi-layer wall crossed by a heat flux normal to its surface. The wall consists of three layers in series: insulation, brick and insulation, respectively. As a first approximation, the thermal response of this configuration can be studied by a one-dimensional, lumped-parameter model, in which only heat conduction through the wall is considered (for the moment). In the following figure, we show how the thermal conductivity of the different materials affects the temperature distribution inside the wall, at fixed layer thicknesses:

There is a first interesting behavior that can be noticed here: if we consider typical values for the insulation and brick layers, namely 0.07 W/(m K) and 0.7 W/(m K), respectively, we observe a large temperature drop across the insulation layers, whereas the temperature drop across the brick layer is minimal, even though the brick layer is three times as thick as the insulation layers. In fact, the law of heat conduction (also known as Fourier’s law) states that, at fixed heat flux, the temperature drop across a layer is proportional to its thermal resistance, which here can be determined as the ratio between the layer thickness and its thermal conductivity.
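The figure’s observation can be checked with a back-of-the-envelope computation (the layer thicknesses below are assumed for illustration, with the brick three times as thick as each insulation layer; only the conductivities come from the text):

```wolfram
(* Conductive resistance per unit area: R = thickness/conductivity.
   At fixed heat flux q, the temperature drop across a layer is q*R. *)
layers = {{0.05, 0.07}, {0.15, 0.7}, {0.05, 0.07}};  (* {thickness m, k W/(m K)} *)
resistances = #[[1]]/#[[2]] & /@ layers
(* {0.714, 0.214, 0.714}: each insulation layer has more than three
   times the resistance of the much thicker brick layer *)
```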

We now add another part to our model: the convective heat transfer from the room air to the inner wall surface, and from the outer wall surface to the external environment. The thermal resistances due to convection are added in series with the conductive ones through the wall, thus causing further temperature drops on both sides of the wall. In this case, the typical boundary layer of natural convection can be seen in the nonlinear temperature profiles of the air near the wall surface:

In addition to opaque walls, buildings have transparent walls (windows). The thermal model of windows here considers the solar radiation through the glass, the air leakages through the window and the thermal conduction across the glass and frame of the window. We can analyze the model response by observing the heat-flow values for the different contributions. In this case, the indoor and outdoor temperatures are set at 20 °C and -10 °C, respectively. Results in the following figure show that the heat flux is positive when it flows from the external environment to the internal one, as in the case of solar radiation, whereas heat fluxes are negative (i.e. thermal losses) when they flow in the opposite direction, as in the cases of leakages and conduction through the window:

The thermal model of both opaque and transparent walls can be combined to represent a wall with a window. This assembled model allows you to compare the thermal flux flowing into and out of the room at different ambient and room temperatures. Clearly, a nonzero net thermal flux leads to a dynamic change in room temperature. If the net thermal flux is negative, the room temperature will tend to decrease with time, therefore reducing the thermal comfort:

To observe the actual dynamic behavior of temperature with time, we then refine the thermal model of the room by introducing: (1) other relevant components of the room that significantly affect its temperature, namely the roof, floor (opaque walls) and people (inner heat source); and (2) the thermal capacitance of air in the room. All the components are then combined to create a more comprehensive room model. This model is tested for a given outside temperature:

In this case, we observe from the following figure that the net heat outflow is greater than the inflow; as a result, the temperature in the room decreases and eventually stabilizes at 10 °C, when the inlet and outlet thermal fluxes equalize:

Such equilibrium temperature causes thermal discomfort in the room, which should be avoided by introducing a heating system in the building.

In the Virtual Lab, we have modeled the heating system using radiators as heat exchangers. Specifically, both electric and water radiators, as shown in the following diagrams, have been considered, since the multidomain approach of SystemModeler allows us to combine different domains such as electric, thermal and control in one model. As a first implementation, the radiator operates at a fixed nominal thermal power and is controlled by an on/off strategy:
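The on/off strategy amounts to a thermostat rule: the radiator delivers its nominal power whenever the room temperature is below the setpoint, and nothing otherwise. A minimal sketch (the setpoint and nominal power values here are illustrative, not taken from the lab):

```wolfram
(* On/off control: full nominal power below the setpoint, zero above it *)
radiatorPower[t_, setpoint_, nominal_] := If[t < setpoint, nominal, 0.]

radiatorPower[18., 20., 2000.]  (* heater on at 2000 W *)
radiatorPower[21., 20., 2000.]  (* heater off *)
```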

We will now use Wolfram|Alpha to get temperature data for an example winter day in Rome as a case study, and use it to define the average external temperature required by our complete room + radiator model:

tempsWinter = WeatherData["Rome", "Temperature", {{2016, 01, 01}}]["Path"];
tempsInSecondsWinter = {tempsWinter[[All, 1]] - tempsWinter[[1, 1]],
   QuantityMagnitude[tempsWinter[[All, 2]], "DegreesCelsius"]}\[Transpose];

The following plot shows the temperature variation for a span of about nine hours. This data can then be fed to our model:

ListPlot[tempsInSecondsWinter, AxesLabel -> {"Time[s]", "Temperature[\[Degree]C]"}]

Thanks to an intuitive graphical user interface, the user can now perform fast sensitivity analyses to assess the effect of the model parameters on the room temperature. For example, in this figure, the room reference temperature, the heat capacity of the room and the solar irradiance can be changed to explore their effects on the on/off cycles of the radiator (and thus the heating system):

The overall energy consumption by the heating system is also estimated during the day. Here the students can appreciate the energy- (and cost-) saving potential of decreasing the room reference temperature by just 1 °C during the heating season, as well as the importance of an adequate sizing of the heating system to guarantee stable comfort conditions throughout the whole day.

In this lab, you saw how we started with simple models and built up to a nontrivial example. The students have access to all the system models used in the labs, where they can learn how these models were created and try to create new models or to refine the existing ones.

Existing ways of teaching using only presentations and blackboards have no doubt worked for a long time. I believe that this teaching approach is still required to get a full preliminary understanding of the physical heat- and mass-transfer mechanisms and the thermodynamics underlying thermal systems. Nowadays, this approach can be assisted by the real-time simulation of thermal systems to get a fast, hands-on experience of their actual operating conditions. Virtual Labs can support this approach without the need for prior knowledge of low-level coding, which is a plus for students without a solid programming background.

“It took me less than a day to create a model, and I felt that I could now create Virtual Labs by myself.”

The notebook interface and extensive visualization tools of Mathematica provide a convenient way to create dynamic content. Modeling using SystemModeler is also easy with its wide range of built-in components and equation-based models. It took me less than a day to create a model, and I felt that I could now create Virtual Labs by myself. I am curious to also test the power of the Wolfram Language for my research: in the near future, I would like to explore the machine learning capabilities of Mathematica to predict heat- and mass-transfer processes—for instance, for the injection molding of nanocomposites. And not just me: I can also see my students making use of these tools to improve their understanding of the physical concepts learned during the lectures.

A small tip to teachers: the Wolfram MathCore group is easy to talk to and open to providing help. If you have any ideas or questions on how you could improve your teaching capacities, do contact them! If you want your students to experience innovative learning processes, take an extra step to transform your courses with the help of flexible, intuitive and high-level programming.

Have a good idea for an educational area where Virtual Labs could be helpful? Let us know by sending us an email at virtuallabs@wolfram.com!

Get full access to the latest Wolfram Language functionality for SystemModeler models and Virtual Labs with Mathematica 12.

His latest work is cutting edge—but it’s only part of the story. Throughout his career, Prince-Wright has used Wolfram technologies for “modeling systems as varied as downhole wellbore trajectory, radionuclide dispersion and PID control of automation systems.” Read on to learn more about Prince-Wright’s accomplishments and discover why Wolfram technology is his go-to for developing unique computational solutions.

When Mathematica Version 1.0 was released, Prince-Wright was in a PhD program in probabilistic modeling at the University of Glasgow. At the time, Mathematica was sold as a specialized tool for symbolic algebra. Its unique treatment of differential equations and linear algebra made it indispensable to Prince-Wright in his research on reliability analysis.

Over the next few years, while teaching engineering mathematics, he found the symbolic approach of the Wolfram Language helpful in demonstrating math concepts intuitively. In particular, he used it to show students how the use of vectors and complex numbers made location and rotation algebra much simpler than trigonometry. He also notes that even this early interface produced higher-quality visualizations faster than any other software at the time.

Eventually, Prince-Wright moved to the private sector—analyzing engineering systems, presenting findings and making policy proposals for a range of clients. Though he tried other software, he always gravitated toward the Wolfram Language for its highly automated interface construction (after all, he was a civil engineer, not a programmer). In the early 2000s, he began taking advantage of the publishing capabilities of Wolfram Notebooks. He could readily demonstrate custom models to clients anywhere, either by sending them full-featured standalone notebooks or by generating interactive notebook-based interfaces that connected to pre-deployed models on the back end.

Ongoing improvements in Mathematica’s modeling and analysis functionality also allowed Prince-Wright to tackle new challenges. In 2005, he helped develop a new well-control algorithm for a major oil company that combined concepts from several engineering domains. He was pleased by how easy it was to repurpose the language to create this sophisticated multiphysics solution: “What was remarkable was thinking back 14 years and using the skills I developed using Mathematica as a teacher to solve real-world problems.”

In 2012, while working in safety and systems engineering at Berkeley & Imperial, Prince-Wright discovered Wolfram SystemModeler and its Wolfram Language integration features. He began leveraging this new Modelica interface to test and improve models he’d built in the Wolfram Language. Combining the drag-and-drop simulation workflow of SystemModeler with the real-time analysis of the Wolfram Language allowed him to achieve unprecedented speed and accuracy in his models.

Around that time, Prince-Wright heard about the Wolfram Connected Devices Project. In addition to improved integration with low-level devices, he soon realized the project gave him a framework for comparing system models to the actual systems they represent—in his words, “integrating the language with the real world around us.” This workflow turned out to be ideal for simulating and testing the kinds of embedded systems used in deepwater drilling. Since then, he and his team have continued to push this concept further, exploring the use of digital twins and hardware-in-the-loop simulations to continue improving his models.

In many ways, the modern Wolfram technology stack has advanced in parallel with Prince-Wright’s career path, growing from a symbolic math tool to a cross-platform computational system to the full-featured modern engineering and analytics suite available today. Like his work, Wolfram technology can be applied across a diverse range of engineering fields. And the Wolfram Language maintains consistent syntax and structure throughout, making it easy to try different techniques—and complementing Prince-Wright’s approach of constantly pushing the boundaries. These qualities combine to provide lasting value for innovators like Prince-Wright: “The evolution of the Wolfram System has been very carefully designed to ensure that the core language was honored through the process. What that means now is that using the Wolfram Language gives you an opportunity to solve even bigger and more complex problems.”

Find out more about Robert Prince-Wright, as well as other users’ exciting Wolfram Language applications, on our Customer Stories pages.

Build and test your own engineering models with Wolfram SystemModeler, the easy-to-use, next-generation modeling and simulation environment.

How can something that simple produce something that complex? It’s been nearly 40 years since I first saw rule 30—but it still amazes me. Long ago it became my personal all-time favorite science discovery, and over the years it’s changed my whole worldview and led me to all sorts of science, technology, philosophy and more.

But even after all these years, there are still many basic things we don’t know about rule 30. And I’ve decided that it’s now time to do what I can to stimulate the process of finding more of them out. So as of today, I am offering $30,000 in prizes for the answers to three basic questions about rule 30.

The setup for rule 30 is extremely simple. One’s dealing with a sequence of lines of black and white cells. And given a particular line of black and white cells, the colors of the cells on the line below are determined by looking at each cell and its immediate neighbors and then applying the following simple rule:

RulePlot[CellularAutomaton[30]]

If you start with a single black cell, what will happen? One might assume—as I at first did—that the rule is simple enough that the pattern it produces must somehow be correspondingly simple. But if you actually do the experiment, here’s what you find happens over the first 50 steps:

RulePlot[CellularAutomaton[30], {{1}, 0}, 50, Mesh -> All, ImageSize -> Full]

But surely, one might think, this must eventually resolve into something much simpler. Yet here’s what happens over the first 300 steps:

And, yes, there’s some regularity over on the left. But many aspects of this pattern look for all practical purposes random. It’s amazing that a rule so simple can produce behavior that’s so complex. But I’ve discovered that in the computational universe of possible programs this kind of thing is common, even ubiquitous. And I’ve built a whole new kind of science—with all sorts of principles—based on this.

And gradually there’s been more and more evidence for these principles. But what specifically can rule 30 tell us? What concretely can we say about how it behaves? Even the most obvious questions turn out to be difficult. And after decades without answers, I’ve decided it’s time to define some specific questions about rule 30, and offer substantial prizes for their solutions.

I did something similar in 2007, putting a prize on a core question about a particular Turing machine. And at least in that case the outcome was excellent. In just a few months, the prize was won—establishing forever what the simplest possible universal Turing machine is, as well as providing strong further evidence for my general Principle of Computational Equivalence.

The Rule 30 Prize Problems again get at a core issue: just how complex really is the behavior of rule 30? Each of the problems asks this in a different, concrete way. Like rule 30 itself, they’re all deceptively simple to state. Yet to solve any of them will be a major achievement—that will help illuminate fundamental principles about the computational universe that go far beyond the specifics of rule 30.

I’ve wondered about every one of the problems for more than 35 years. And all that time I’ve been waiting for the right idea, or the right kind of mathematical or computational thinking, to finally be able to crack even one of them. But now I want to open this process up to the world. And I’m keen to see just what can be achieved, and what methods it will take.

For the Rule 30 Prize Problems, I’m concentrating on a particularly dramatic feature of rule 30: the apparent randomness of its center column of cells. Start from a single black cell, then just look down the sequence of values of this cell—and it seems random:

ArrayPlot[ MapIndexed[If[#2[[2]] != 21, # /. {0 -> 0.2, 1 -> .6}, #] &, CellularAutomaton[30, {{1}, 0}, 20], {2}], Mesh -> All]

But in what sense is it really random? And can one prove it? Each of the Prize Problems in effect uses a different criterion for randomness, then asks whether the sequence is random according to that criterion.

Here’s the beginning of the center column of rule 30:

ArrayPlot[List@CellularAutomaton[30, {{1}, 0}, {80, {{0}}}], Mesh -> True, ImageSize -> Full]

It’s easy to see that this doesn’t repeat—it doesn’t become periodic. But the first Prize Problem is about whether the center column ever becomes periodic, even after an arbitrarily large number of steps. Just by running rule 30, we know the sequence doesn’t become periodic in the first billion steps. But what about ever? To establish that, we need a proof. (Here are the first million and first billion bits in the sequence, by the way, as entries in the Wolfram Data Repository.)
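Checks like this are easy to reproduce on a small scale. Here's a self-contained Python sketch (illustrative only; the actual billion-step computations used far more optimized code) that generates the center column and confirms that no short period appears in a prefix:

```python
def rule30_center(n):
    # First n values of the center column, starting from a single black cell.
    row, bits = [1], []
    for _ in range(n):
        bits.append(row[len(row) // 2])
        row = [0, 0] + row + [0, 0]
        row = [row[i - 1] ^ (row[i] | row[i + 1]) for i in range(1, len(row) - 1)]
    return bits

def is_periodic(bits, period):
    # True if the sequence repeats with this period from the very start.
    return all(bits[i] == bits[i + period] for i in range(len(bits) - period))

# No period up to 250 shows up in the first 500 values:
bits = rule30_center(500)
assert not any(is_periodic(bits, p) for p in range(1, 250))
```

Of course, a check like this can only rule out particular periods over particular prefixes; settling the question for arbitrarily long transients and periods requires a proof, not a computation.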

Here’s what one gets if one tallies the number of black and of white cells in successively more steps in the center column of rule 30:

Dataset[{{1, 1, 0, ""}, {10, 7, 3, 2.3333333333333335}, {100, 52, 48, 1.0833333333333333}, {1000, 481, 519, 0.9267822736030829}, {10000, 5032, 4968, 1.0128824476650564}, {100000, 50098, 49902, 1.0039276982886458}, {1000000, 500768, 499232, 1.003076725850907}, {10000000, 5002220, 4997780, 1.0008883944471345}, {100000000, 50009976, 49990024, 1.000399119632349}, {1000000000, 500025038, 499974962, 1.0001001570154626}}]

The results are certainly close to equal for black vs. white. But what the second Prize Problem asks is whether the limit of the ratio after an arbitrarily large number of steps is exactly 1.
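These tallies are straightforward to reproduce. A small self-contained Python sketch (again illustrative, not the Wolfram Language used above) that counts 1s and 0s in the first 1000 values of the center column:

```python
# Tally 1s and 0s in the first 1000 center-column values (single black cell).
row, ones, steps = [1], 0, 1000
for _ in range(steps):
    ones += row[len(row) // 2]
    row = [0, 0] + row + [0, 0]
    row = [row[i - 1] ^ (row[i] | row[i + 1]) for i in range(1, len(row) - 1)]
zeros = steps - ones
# Per the table above, this gives 481 ones and 519 zeros.
```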

To find the *n*^{th} cell in the center column, one can always just run rule 30 for *n* steps, computing the values of all the cells in this diamond:

With[{n = 100}, ArrayPlot[ MapIndexed[If[Total[Abs[#2 - n/2 - 1]] <= n/2, #, #/4] &, CellularAutomaton[30, CenterArray[{1}, n + 1], n], {2}]]]

But if one does this directly, one’s doing *n*^{2} individual cell updates, so the computational effort required goes up like O(*n*^{2}). The third Prize Problem asks if there’s a shortcut way to compute the value of the *n*^{th} cell, without all this intermediate computation—or, in particular, in less than O(*n*) computational effort.

Rule 30 is a creature of the computational universe: a system found by exploring possible simple programs with the new intellectual framework that the paradigm of computation provides. But the problems I’ve defined about rule 30 have analogs in mathematics that are centuries old.

Consider the digits of π. They’re a little like the center column of rule 30. There’s a definite algorithm for generating them. Yet once generated they seem for all practical purposes random:

N[Pi, 85]

Just to make the analog a little closer, here are the first few digits of π in base 2:

BaseForm[N[Pi, 25], 2]

And here are the first few bits in the center column of rule 30:

Row[CellularAutomaton[30, {{1}, 0}, {90, {{0}}}]]

Just for fun, one can convert these to base 10:

N[FromDigits[{Flatten[CellularAutomaton[30, {{1}, 0}, {500, {0}}]], 0}, 2], 85]

Of course, the known algorithms for generating the digits of π are considerably more complicated than the simple rule for generating the center column of rule 30. But, OK, so what’s known about the digits of π?

Well, we know they don’t repeat. That was proved in the 1760s when it was shown that π is an irrational number—because the only numbers whose digits repeat are rational numbers. (It was also shown in 1882 that π is transcendental, i.e. that it cannot be expressed in terms of roots of polynomials.)

How about the analog of problem 2? Do we know if in the digit sequence of π different digits occur with equal frequency? By now more than 100 trillion binary digits have been computed—and the measured frequencies of digits are very close (in the first 40 trillion binary digits the ratio of 1s to 0s is about 0.9999998064). But in the limit, are the frequencies exactly the same? People have been wondering about this for several centuries. But so far mathematics hasn’t succeeded in delivering any results.

For rational numbers, digit sequences are periodic, and it’s easy to work out relative frequencies of digits. But for the digit sequences of all other “naturally constructed” numbers, basically there’s nothing known about limiting frequencies of digits. It’s a reasonable guess that actually the digits of π (as well as the center column of rule 30) are “normal”, in the sense that not only every individual digit, but also every block of digits of any given length in the limit occur with equal frequency. And as was noted in the 1930s, it’s perfectly possible to “digit-construct” normal numbers. Champernowne’s number, formed by concatenating the digits of successive integers, is an example (and, yes, this works in any base, and one can also get normal numbers by concatenating values of functions of successive integers):

N[ChampernowneNumber[10], 85]

But the point is that for “naturally constructed” numbers formed by combinations of standard mathematical functions, there’s simply no example known where any regularity of digits has been found. Of course, it ultimately depends what one means by “regularity”—and at some level the problem devolves into a kind of number-digit analog of the search for extraterrestrial intelligence. But there’s absolutely no proof that one couldn’t, for example, find even some strange combination of square roots that would have a digit sequence with some very obvious regularity.

OK, so what about the analog of problem 3 for the digits of π? Unlike rule 30, where the obvious way to compute elements in the sequence is one step at a time, traditional ways of computing digits of π involve getting better approximations to π as a complete number. With the standard (bizarre-looking) series invented by Ramanujan in 1910 and improved by the Chudnovsky brothers in 1989, the first few terms in the series give the following approximations:

Style[Table[N[(12*\!\( \*UnderoverscriptBox[\(\[Sum]\), \(k = 0\), \(n\)] \*FractionBox[\( \*SuperscriptBox[\((\(-1\))\), \(k\)]*\(\((6*k)\)!\)*\((13591409 + 545140134*k)\)\), \(\(\((3*k)\)!\) \*SuperscriptBox[\((\(k!\))\), \(3\)]* \*SuperscriptBox[\(640320\), \(3*k + 3/2\)]\)]\))^-1, 100], {n, 10}] // Column, 9]

So how much computational effort is it to find the *n*^{th} digit? The number of terms required in the series is O(*n*). But each term needs to be computed to *n*-digit precision, which requires at least O(*n*) individual digit operations—implying that altogether the computational effort required is more than O(*n*).

Until the 1990s it was assumed that there wasn’t any way to compute the *n*^{th} digit of π without computing all previous ones. But in 1995 Simon Plouffe discovered that actually it’s possible to compute—albeit slightly probabilistically—the *n*^{th} digit without computing earlier ones. And while one might have thought that this would allow the *n*^{th} digit to be obtained with less than O(*n*) computational effort, the fact that one has to do computations at *n*-digit precision means that at least O(*n*) computational effort is still required.
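The closely related BBP (Bailey–Borwein–Plouffe) formula from the same period gives a fully deterministic version of this in base 16. Here's a Python sketch of hexadecimal digit extraction; note that the n head terms each involve a modular exponentiation, so, as the text says, the total effort is still at least O(n):

```python
def pi_hex_digit(n):
    """The n-th hexadecimal digit of pi after the point (n >= 1), via BBP."""
    def series(j):
        # Fractional part of sum over k of 16^(n-1-k) / (8k + j).
        total = 0.0
        for k in range(n):  # head terms, kept small via modular exponentiation
            total = (total + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = n, 1.0
        while term > 1e-17:  # rapidly vanishing tail terms
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            total += term
            k += 1
        return total % 1.0
    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)
```

For n = 1, 2, 3, … this reproduces the hex digits 2, 4, 3, F, … of π = 3.243F6A88…₁₆ without ever materializing the earlier digits.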

Of the three Rule 30 Prize Problems, the first (about whether the center column ever becomes periodic) is the one on which the most progress has already been made. Because while it’s not known if the center column in the rule 30 pattern ever becomes periodic, Erica Jen showed in 1986 that no two columns can both become periodic. And in fact, one can also give arguments that a single column plus scattered cells in another column can’t both be periodic.

The proof about a pair of columns uses a special feature of rule 30. Consider the structure of the rule:

RulePlot[CellularAutomaton[30]]

Normally one would just say that given each triple of cells, the rule determines the color of the center cell below. But for rule 30, one can effectively also run the rule sideways: given the cell to the right and above, one can also uniquely determine the color of the cell to the left. And what this means is that if one is given two adjacent columns, it’s possible to reconstruct the whole pattern to the left:
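This sideways determinism can be verified exhaustively. Writing the forward rule as s = p XOR (q OR r), with p, q, r the left, center and right cells and s the new cell below q, one can solve for p as p = s XOR (q OR r), which is what the Mod[-q - r - q r + s, 2] expression in the reconstruction code below computes. A Python check over all eight neighborhoods:

```python
from itertools import product

def rule30(p, q, r):
    # Forward rule 30: the new center cell below q.
    return p ^ (q | r)

def left_cell(q, r, s):
    # Sideways inversion: recover the left neighbor p from q, r and s.
    return s ^ (q | r)

# Every one of the 8 neighborhoods inverts uniquely:
for p, q, r in product([0, 1], repeat=3):
    assert left_cell(q, r, rule30(p, q, r)) == p
```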

GraphicsRow[ ArrayPlot[#, PlotRange -> 1, Mesh -> All, PlotRange -> 1, Background -> LightGray, ImageSize -> {Automatic, 80}] & /@ (PadLeft[#, {Length[#], 10}, 10] & /@ Module[{data = {{0, 1}, {1, 1}, {0, 0}, {0, 1}, {1, 1}, {1, 0}, {0, 1}, {1, 10}}}, Flatten[{{data}, Table[Join[ Table[Module[{p, q = data[[n, 1]], r = data[[n, 2]], s = data[[n + 1, 1]] }, p = Mod[-q - r - q r + s, 2]; PrependTo[data[[n]], p]], {n, 1, Length[data] - i}], PrependTo[data[[-#]], 10] & /@ Reverse[Range[i]]], {i, 7}]}, 1]])]

But if the columns were periodic, it immediately follows that the reconstructed pattern would also have to be periodic. Yet by construction at least the initial condition is definitely not periodic, and hence the columns cannot both be periodic. The same argument works if the columns are not adjacent, and if one doesn’t know every cell in both columns. But there’s no known way to extend the argument to a single column—such as the center column—and thus it doesn’t resolve the first Rule 30 Prize Problem.

OK, so what would be involved in resolving it? Well, if it turns out that the center column is eventually periodic, one could just compute it, and show that. We know it’s not periodic for the first billion steps, but one could at least imagine that there could be a trillion-step transient, after which it’s periodic.

Is that plausible? Well, transients do happen—and theoretically (just like in the classic Turing machine halting problem) they can even be arbitrarily long. Here’s a somewhat funky example—found by a search—of a rule with 4 possible colors (totalistic code 150898). Run it for 200 steps, and the center column looks quite random:

ArrayPlot[ CellularAutomaton[{150898, {4, 1}, 1}, {{1}, 0}, {200, 150 {-1, 1}}], ColorRules -> {0 -> Hue[0.12, 1, 1], 1 -> Hue[0, 0.73, 0.92], 2 -> Hue[0.13, 0.5, 1], 3 -> Hue[0.17, 0, 1]}, PixelConstrained -> 2, Frame -> False]

After 500 steps, the whole pattern still looks quite random:

ArrayPlot[ CellularAutomaton[{150898, {4, 1}, 1}, {{1}, 0}, {500, 300 {-1, 1}}], ColorRules -> {0 -> Hue[0.12, 1, 1], 1 -> Hue[0, 0.73, 0.92], 2 -> Hue[0.13, 0.5, 1], 3 -> Hue[0.17, 0, 1]}, Frame -> False, ImagePadding -> 0, PlotRangePadding -> 0, PixelConstrained -> 1]

But if one zooms in around the center column, there’s something surprising: after 251 steps, the center column seems to evolve to a fixed value (or at least it’s fixed for more than a million steps):

Grid[{ArrayPlot[#, Mesh -> True, ColorRules -> {0 -> Hue[0.12, 1, 1], 1 -> Hue[0, 0.73, 0.92], 2 -> Hue[0.13, 0.5, 1], 3 -> Hue[0.17, 0, 1]}, ImageSize -> 38, MeshStyle -> Lighter[GrayLevel[.5, .65], .45]] & /@ Partition[ CellularAutomaton[{150898, {4, 1}, 1}, {{1}, 0}, {1400, {-4, 4}}], 100]}, Spacings -> .35]

Could some transient like this happen in rule 30? Well, take a look at the rule 30 pattern, now highlighting where the diagonals on the left are periodic:

steps = 500; diagonalsofrule30 = Reverse /@ Transpose[ MapIndexed[RotateLeft[#1, (steps + 1) - #2[[1]]] &, CellularAutomaton[30, {{1}, 0}, steps]]]; diagonaldataofrule30 = Table[With[{split = Split[Partition[Drop[diagonalsofrule30[[k]], 1], 8]], ones = Flatten[ Position[Reverse[Drop[diagonalsofrule30[[k]], 1]], 1]]}, {Length[split[[1]]], split[[1, 1]], If[Length[split] > 1, split[[2, 1]], Length[diagonalsofrule30[[k]]] - Floor[k/2]]}], {k, 1, 2 steps + 1}]; transientdiagonalrule30 = %; transitionpointofrule30 = If[IntegerQ[#[[3]]], #[[3]], If[#[[1]] > 1, 8 #[[1]] + Count[Split[#[[2]] - #[[3]]][[1]], 0] + 1, 0] ] & /@ diagonaldataofrule30; decreasingtransitionpointofrule30 = Append[Min /@ Partition[transitionpointofrule30, 2, 1], 0]; transitioneddiagonalsofrule30 = Table[Join[ Take[diagonalsofrule30[[n]], decreasingtransitionpointofrule30[[n]]] + 2, Drop[diagonalsofrule30[[n]], decreasingtransitionpointofrule30[[n]]]], {n, 1, 2 steps + 1}]; transientdiagonalrule30 = MapIndexed[RotateRight[#1, (steps + 1) - #2[[1]]] &, Transpose[Reverse /@ transitioneddiagonalsofrule30]]; smallertransientdiagonalrule30 = Take[#, {225, 775}] & /@ Take[transientdiagonalrule30, 275]; Framed[ArrayPlot[smallertransientdiagonalrule30, ColorRules -> {0 -> White, 1 -> Gray, 2 -> Hue[0.14, 0.55, 1], 3 -> Hue[0.07, 1, 1]}, PixelConstrained -> 1, Frame -> None, ImagePadding -> 0, ImageMargins -> 0, PlotRangePadding -> 0, PlotRangePadding -> Full ], FrameMargins -> 0, FrameStyle -> GrayLevel[.75]]

There seems to be a boundary that separates order on the left from disorder on the right. And at least over the first 100,000 or so steps, the boundary seems to move on average about 0.252 steps to the left at each step—with roughly random fluctuations:

data = CloudGet[ CloudObject[ "https://www.wolframcloud.com/obj/bc470188-f629-4497-965d-\ a10fe057e2fd"]]; ListLinePlot[ MapIndexed[{First[#2], -# - .252 First[#2]} &, Module[{m = -1, w}, w = If[First[#] > m, m = First[#], m] & /@ data[[1]]; m = 1; Table[While[w[[m]] < i, m++]; m - i, {i, 100000}]]], Filling -> Axis, AspectRatio -> 1/4, MaxPlotPoints -> 10000, Frame -> True, PlotRangePadding -> 0, AxesOrigin -> {Automatic, 0}, PlotStyle -> Hue[0.07`, 1, 1], FillingStyle -> Directive[Opacity[0.35`], Hue[0.12`, 1, 1]]]

But how do we know that there won’t at some point be a huge fluctuation, that makes the order on the left cross the center column, and perhaps even make the whole pattern periodic? From the data we have so far, it looks unlikely, but I don’t know any way to know for sure.

And it’s certainly the case that there are systems with exceptionally long “transients”. Consider the distribution of primes, and compute `LogIntegral[n] - PrimePi[n]`:

DiscretePlot[LogIntegral[n] - PrimePi[n], {n, 10000}, Filling -> Axis, Frame -> True, PlotRangePadding -> 0, AspectRatio -> 1/4, Joined -> True, PlotStyle -> Hue[0.07`, 1, 1], FillingStyle -> Directive[Opacity[0.35`], Hue[0.12`, 1, 1]]]

Yes, there are fluctuations. But from this picture it certainly looks as if this difference is always going to be positive. And that’s, for example, what Ramanujan thought. But it turns out it isn’t true. At first the bound for where it would fail was astronomically large (Skewes’s number 10^{10^{10^{964}}}). And although still nobody has found an explicit value of *n* for which the difference is negative, it’s known that before *n* = 10^{317} there must be one (and eventually the difference will be negative at least nearly a millionth of the time).

I strongly suspect that nothing like this happens with the center column of rule 30. But until we have a proof that it can’t, who knows?

One might think, by the way, that while one might be able to prove periodicity by exposing regularity in the center column of rule 30, nothing like that would be possible for non-periodicity. But actually, there are patterns whose center columns one can readily see are non-periodic, even though they’re very regular. The main class of examples are nested patterns. Here’s a very simple example, from rule 161—in which the center column has white cells when *n* = 2^{k}:

GraphicsRow[ ArrayPlot[CellularAutomaton[161, {{1}, 0}, #]] & /@ {40, 200}]

Here’s a slightly more elaborate example (from the 2-neighbor 2-color rule 69540422), in which the center column is a Thue–Morse sequence `ThueMorse[n]`:

GraphicsRow[ ArrayPlot[ CellularAutomaton[{69540422, 2, 2}, {{1}, 0}, {#, {-#, #}}]] & /@ {40, 400}]

One can think of the Thue–Morse sequence as being generated by successively applying the substitutions:

RulePlot[SubstitutionSystem[{0 -> {0, 1}, 1 -> {1, 0}}], Appearance -> "Arrow"]

And it turns out that the *n*^{th} term in this sequence is given by `Mod[DigitCount[n, 2, 1], 2]`—which is never periodic.
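In Python terms that formula is a one-liner (`ThueMorse[n]` is the built-in Wolfram Language form; here it's the parity of the count of 1s among the binary digits of n):

```python
def thue_morse(n):
    # n-th Thue-Morse term: Mod[DigitCount[n, 2, 1], 2] in Wolfram Language terms.
    return bin(n).count("1") % 2
```

The first few terms are 0, 1, 1, 0, 1, 0, 0, 1, …—regular, yet provably non-periodic.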

Will it turn out that the center column of rule 30 can be generated by a substitution system? Again, I’d be amazed (although there are seemingly natural examples where very complex substitution systems do appear). But once again, until one has a proof, who knows?

Here’s something else, that may be confusing, or may be helpful. The Rule 30 Prize Problems all concern rule 30 running in an infinite array of cells. But what if one considers just *n* cells, say with periodic boundary conditions (i.e. taking the right neighbor of the rightmost cell to be the leftmost cell, and vice versa)? There are 2^{n} possible total states of the system—and one can draw a state transition diagram that shows which state evolves to which other. Here’s the diagram for *n* = 4:

Graph[# -> CellularAutomaton[30][#] & /@ Tuples[{1, 0}, 4], VertexLabels -> ((# -> ArrayPlot[{#}, ImageSize -> 30, Mesh -> True]) & /@ Tuples[{1, 0}, 4])]

And here it is for *n* = 4 through *n* = 11:

Row[Table[ Framed[Graph[# -> CellularAutomaton[30][#] & /@ Tuples[{1, 0}, n]]], {n, 4, 11}]]

The structure is that there are a bunch of states that appear only as transients, together with other states that are on cycles. Inevitably, no cycle can be longer than 2^{n} (actually, symmetry considerations show that it always has to be somewhat less than this).

OK, so on a size-*n* array, rule 30 always has to show behavior that becomes periodic with a period that’s less than 2^{n}. Here are the actual periods starting from a single black cell initial condition, plotted on a log scale:

ListLogPlot[ Normal[Values[ ResourceData[ "Repetition Periods for Elementary Cellular Automata"][ Select[#Rule == 30 &]][All, "RepetitionPeriods"]]], Joined -> True, Filling -> Bottom, Mesh -> All, MeshStyle -> PointSize[.008], AspectRatio -> 1/3, Frame -> True, PlotRange -> {{47, 2}, {0, 10^10}}, PlotRangePadding -> .1, PlotStyle -> Hue[0.07`, 1, 1], FillingStyle -> Directive[Opacity[0.35`], Hue[0.12`, 1, 1]]]

And at least for these values of *n*, a decent fit is that the period is about 2^{0.63 n}. And, yes, at least in all these cases, the period of the center column is equal to the period of the whole evolution. But what do these finite-size results imply about the infinite-size case? I, at least, don’t immediately see.
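For small n these repetition periods can be measured by brute force, hashing every state seen until one repeats. A Python sketch (cyclic boundaries, single black cell; illustrative, and only practical up to n around 30 or so):

```python
def cyclic_step(row):
    # One rule 30 step on a cyclic array of cells.
    n = len(row)
    return [row[i - 1] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

def repetition_period(n):
    """Length of the cycle eventually entered from a single black cell on n cells."""
    row = [1] + [0] * (n - 1)
    seen, t = {}, 0
    while tuple(row) not in seen:
        seen[tuple(row)] = t
        row = cyclic_step(row)
        t += 1
    return t - seen[tuple(row)]
```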

Here’s a plot of the running excess of 1s over 0s in 10,000 steps of the center column of rule 30:

ListLinePlot[ Accumulate[2 CellularAutomaton[30, {{1}, 0}, {10^4 - 1, {{0}}}] - 1], AspectRatio -> 1/4, Frame -> True, PlotRangePadding -> 0, AxesOrigin -> {Automatic, 0}, Filling -> Axis, PlotStyle -> Hue[0.07`, 1, 1], FillingStyle -> Directive[Opacity[0.35`], Hue[0.12`, 1, 1]]]

Here it is for a million steps:

ListLinePlot[ Accumulate[ 2 ResourceData[ "A Million Bits of the Center Column of the Rule 30 Cellular Automaton"] - 1], Filling -> Axis, Frame -> True, PlotRangePadding -> 0, AspectRatio -> 1/4, MaxPlotPoints -> 1000, PlotStyle -> Hue[0.07`, 1, 1], FillingStyle -> Directive[Opacity[0.35`], Hue[0.12`, 1, 1]]]

And a billion steps:

data=Flatten[IntegerDigits[#,2,8]&/@Normal[ResourceData["A Billion Bits of the Center Column of the Rule 30 Cellular Automaton"]]]; data=Accumulate[2 data-1]; sdata=Downsample[data,10^5]; ListLinePlot[Transpose[{Range[10000] 10^5,sdata}],Filling->Axis,Frame->True,PlotRangePadding->0,AspectRatio->1/4,MaxPlotPoints->1000,PlotStyle->Hue[0.07`,1,1],FillingStyle->Directive[Opacity[0.35`],Hue[0.12`,1,1]]]

We can see that there are times when there’s an excess of 1s over 0s, and vice versa, though, yes, as we approach a billion steps 1 seems to be winning over 0, at least for now.

But let’s compute the ratio of the total number of 1s to the total number of 0s. Here’s what we get after 10,000 steps:

Quiet[ListLinePlot[ MapIndexed[#/(First[#2] - #) &, Accumulate[CellularAutomaton[30, {{1}, 0}, {10^4 - 1, {{0}}}]]], AspectRatio -> 1/4, Filling -> Axis, AxesOrigin -> {Automatic, 1}, Frame -> True, PlotRangePadding -> 0, PlotStyle -> Hue[0.07`, 1, 1], FillingStyle -> Directive[Opacity[0.35`], Hue[0.12`, 1, 1]], PlotRange -> {Automatic, {.88, 1.04}}]]

Is this approaching the value 1? It’s hard to tell. Go on a little longer, and this is what we see:

Quiet[ListLinePlot[ MapIndexed[#/(First[#2] - #) &, Accumulate[CellularAutomaton[30, {{1}, 0}, {10^5 - 1, {{0}}}]]], AspectRatio -> 1/4, Filling -> Axis, AxesOrigin -> {Automatic, 1}, Frame -> True, PlotRangePadding -> 0, PlotStyle -> Hue[0.07`, 1, 1], FillingStyle -> Directive[Opacity[0.35`], Hue[0.12`, 1, 1]], PlotRange -> {Automatic, {.985, 1.038}}]]

The scale is getting smaller, but it’s still hard to tell what will happen. Plotting the difference from 1 on a log-log plot up to a billion steps suggests it’s fairly systematically getting smaller:

accdata=Accumulate[Flatten[IntegerDigits[#,2,8]&/@Normal[ResourceData["A Billion Bits of the Center Column of the Rule 30 Cellular Automaton"]]]]; diffratio=FunctionCompile[Function[Typed[arg,TypeSpecifier["PackedArray"]["MachineInteger",1]],MapIndexed[Abs[N[#]/(First[#2]-N[#])-1.]&,arg]]]; data=diffratio[accdata]; ListLogLogPlot[Join[Transpose[{Range[3,10^5],data[[3;;10^5]]}],Transpose[{Range[10^5+1000,10^9,1000],data[[10^5+1000;;10^9;;1000]]}]],Joined->True,AspectRatio->1/4,Frame->True,Filling->Axis,PlotRangePadding->0,PlotStyle->Hue[0.07`,1,1],FillingStyle->Directive[Opacity[0.35`],Hue[0.12`,1,1]]]

But how do we know this trend will continue? Right now, we don’t. And, actually, things could get quite pathological. Maybe the fluctuations in 1s vs. 0s grow, so even though we’re averaging over longer and longer sequences, the overall ratio will never converge to a definite value.

Again, I doubt this is going to happen in the center column of rule 30. But without a proof, we don’t know for sure.

We’re asking here about the frequencies of black and white cells. But an obvious—and potentially illuminating—generalization is to ask instead about the frequencies for blocks of cells of length *k*. We can ask if all 2^{k} such blocks have equal limiting frequency. Or we can ask the more basic question of whether all the blocks even ever occur—or, in other words, whether if one goes far enough, the center column of rule 30 will contain any given sequence of length *k* (say a bitwise representation of some work of literature).

Again, we can get empirical evidence. For example, at least up to *k* = 22, all 2^{k} sequences do occur—and here’s how many steps it takes:
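For small k these completion times can be reproduced directly: count, for each length-k block, the step at which it first starts, and take the latest. A self-contained Python sketch (illustrative only; the larger k values require far longer runs than this):

```python
from itertools import product

def rule30_center(n):
    # First n center-column values from a single black cell.
    row, bits = [1], []
    for _ in range(n):
        bits.append(row[len(row) // 2])
        row = [0, 0] + row + [0, 0]
        row = [row[i - 1] ^ (row[i] | row[i + 1]) for i in range(1, len(row) - 1)]
    return bits

def steps_to_see_all_blocks(k, limit=200):
    # Step (1-indexed) at which the last of the 2^k length-k blocks first starts.
    bits = rule30_center(limit)
    unseen = set(product([0, 1], repeat=k))
    for start in range(limit - k + 1):
        unseen.discard(tuple(bits[start:start + k]))
        if not unseen:
            return start + 1
    return None  # not all blocks seen within the limit
```

For k = 1, 2, 3, 4 this gives 3, 7, 13, 63—the first entries of the list plotted below.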

ListLogPlot[{3, 7, 13, 63, 116, 417, 1223, 1584, 2864, 5640, 23653, 42749, 78553, 143591, 377556, 720327, 1569318, 3367130, 7309616, 14383312, 32139368, 58671803}, Joined -> True, AspectRatio -> 1/4, Frame -> True, Mesh -> True, MeshStyle -> Directive[{Hue[0.07, 0.9500000000000001, 0.99], PointSize[.01]}], PlotTheme -> "Detailed", PlotStyle -> Directive[{Thickness[.004], Hue[0.1, 1, 0.99]}]]

It’s worth noticing that one can succeed perfectly for blocks of one length, but then fail for larger blocks. For example, the Thue–Morse sequence mentioned above has exactly equal frequencies of 0 and 1, but pairs don’t occur with equal frequencies, and triples of identical elements simply never occur.

In traditional mathematics—and particularly dynamical systems theory—one approach to take is to consider not just evolution from a single-cell initial condition, but evolution from all possible initial conditions. And in this case it’s straightforward to show that, yes, if one evolves with equal probability from all possible initial conditions, then columns of cells generated by rule 30 will indeed contain every block with equal frequency. But if one asks the same thing for different distributions of initial conditions, one gets different results, and it’s not clear what the implication of this kind of analysis is for the specific case of a single-cell initial condition.

If different blocks occurred with different frequencies in the center column of rule 30, then that would immediately show that the center column is “not random”, or in other words that it has statistical regularities that could be used to at least statistically predict it. Of course, at some level the center column is completely “predictable”: you just have to run rule 30 to find it. But the question is whether, given just the values in the center column on their own, there’s a way to predict or compress them, say with much less computational effort than generating an arbitrary number of steps in the whole rule 30 pattern.

One could imagine running various data compression or statistical analysis algorithms, and asking whether they would succeed in finding regularities in the sequence. And particularly when one starts thinking about the overall computational capabilities of rule 30, it’s conceivable that one could prove something about how across a spectrum of possible analysis algorithms, there’s a limit to how much they could “reduce” the computation associated with the evolution of rule 30. But even given this, it’d likely still be a major challenge to say anything about the specific case of relative frequencies of black and white cells.

It’s perhaps worth mentioning one additional mathematical analog. Consider treating the values in a row of the rule 30 pattern as digits in a real number, say with the first digit of the fractional part being on the center column. Now, so far as we know, the evolution of rule 30 has no relation to any standard operations (like multiplication or taking powers) that one does on real numbers. But we can still ask about the sequence of numbers formed by looking at the right-hand side of the rule 30 pattern. Here’s a plot for the first 200 steps:

ListLinePlot[ FromDigits[{#, 0}, 2] & /@ CellularAutomaton[30, {{1}, 0}, {200, {0, 200}}], Mesh -> All, AspectRatio -> 1/4, Frame -> True, MeshStyle -> Directive[{Hue[0.07, 0.9500000000000001, 0.99], PointSize[.0085]}], PlotTheme -> "Detailed", PlotStyle -> Directive[{ Hue[0.1, 1, 0.99]}], ImageSize -> 575]

And here’s a histogram of the values reached at successively more steps:

Grid[{Table[ Histogram[ FromDigits[{#, 0}, 2] & /@ CellularAutomaton[30, {{1}, 0}, {10^n, {0, 20}}], {.01}, Frame -> True, FrameTicks -> {{None, None}, {{{0, "0"}, .2, .4, .6, .8, {1, "1"}}, None}}, PlotLabel -> (StringTemplate["`` steps"][10^n]), ChartStyle -> Directive[Opacity[.5], Hue[0.09, 1, 1]], ImageSize -> 208, PlotRangePadding -> {{0, 0}, {0, Scaled[.06]}}], {n, 4, 6}]}, Spacings -> .2]

And, yes, it’s consistent with the limiting histogram being flat, or in other words, with these numbers being uniformly distributed in the interval 0 to 1.

Well, it turns out that in the early 1900s there were a bunch of mathematical results established about this kind of equidistribution. In particular, it’s known that `FractionalPart[h n]` for successive integers *n* is equidistributed in the interval 0 to 1 whenever *h* is irrational (this is Weyl’s equidistribution theorem).

Consider the pattern made by rule 150:

Row[{ArrayPlot[CellularAutomaton[150, {{1}, 0}, 30], Mesh -> All, ImageSize -> 315], ArrayPlot[CellularAutomaton[150, {{1}, 0}, 200], ImageSize -> 300]}]

It’s a very regular, nested pattern. Its center column happens to be trivial (all cells are black). But if we look one column to the left or right, we find:

ArrayPlot[{Table[Mod[IntegerExponent[t, 2], 2], {t, 80}]}, Mesh -> All, ImageSize -> Full]

How do we work out the value of the *n*^{th} cell? Well, in this particular case, it turns out there’s essentially just a simple formula: the value is given by `Mod[IntegerExponent[n, 2], 2]`. In other words, just look at the number of zeros at the end of the base-2 digit sequence of *n*, and ask whether that number is even or odd.

How much computational effort does it take to “evaluate this formula”? Well, even if we have to check every bit in *n*, there are only about `Log[2, n]` of those. So we can expect that the computational effort is O(log *n*).
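As a sanity check, the formula is a one-liner in Python, with `(n & -n).bit_length() - 1` playing the role of `IntegerExponent[n, 2]`, the number of trailing zeros in the binary digits of n:

```python
def rule150_off_center(n):
    # Parity of the number of trailing zeros of n in base 2:
    # Mod[IntegerExponent[n, 2], 2] in Wolfram Language terms (n >= 1).
    trailing_zeros = (n & -n).bit_length() - 1
    return trailing_zeros % 2
```

Each evaluation touches only the O(log n) bits of n, rather than the O(n^2) cells of the explicit evolution.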

But what about the rule 30 case? We know we can work out the value of the *n*^{th} cell in the center column just by explicitly applying the rule 30 update rule *n*^{2} times. But the question is whether there’s a way to reduce the computational work that’s needed. In the past, there’s tended to be an implicit assumption throughout the mathematical sciences that if one has the right model for something, then by just being clever enough one will always find a way to make predictions—or in other words, to work out what a system will do, using a lot less computational effort than the actual evolution of the system requires.

And, yes, there are plenty of examples of “exact solutions” (think 2-body problem, 2D Ising model, etc.) where we essentially just get a formula for what a system will do. But there are also other cases (think 3-body problem, 3D Ising model, etc.) where this has never successfully been done.

And as I first discussed in the early 1980s, I suspect that there are actually many systems (including these) that are computationally irreducible, in the sense that there’s no way to significantly reduce the amount of computational work needed to determine their behavior.

So in effect Problem 3 is asking about the computational irreducibility of rule 30—or at least a specific aspect of it. (The choice of O(*n*) computational effort is somewhat arbitrary; another version of this problem could ask for O(*n*^{α}) for any α<2, or, for that matter, O(log^{ β}(*n*))—or some criterion based on both time and memory resources.)

If the answer to Problem 3 is negative, then the obvious way to show this would just be to give an explicit program that successfully computes the *n*^{th} value in the center column with less than O(*n*) computational effort, as we did for rule 150 above.

We can ask what O(*n*) computational effort means. What kind of system are we supposed to use to do the computation? And how do we measure “computational effort”? The phenomenon of computational universality implies that—within some basic constraints—it ultimately doesn’t matter.

For definiteness we could say that we always want to do the computation on a Turing machine. And for example we can say that we’ll feed the digits of the number *n* in as the initial state of the Turing machine tape, then expect the Turing machine to grind for much less than *n* steps before generating the answer (and, if it’s really to be “formula like”, more like O(log *n*) steps).

We don’t need to base things on a Turing machine, of course. We could use any kind of system capable of universal computation, including a cellular automaton, and, for that matter, the whole Wolfram Language. It gets a little harder to measure “computational effort” in these systems. Presumably in a cellular automaton we’d want to count the total number of cell updates done. And in the Wolfram Language we might end up just actually measuring CPU time for executing whatever program we’ve set up.

I strongly suspect that rule 30 is computationally irreducible, and that Problem 3 has an affirmative answer. But if it isn’t, my guess is that eventually there’ll turn out to be a program that rather obviously computes the *n*^{th} value in less than O(*n*) computational effort, and there won’t be a lot of argument about the details of whether the computational resources are counted correctly.

But proving that no such program exists is a much more difficult proposition. And even though I suspect computational irreducibility is quite ubiquitous, it’s always very hard to prove explicit lower bounds on the difficulty of doing particular computations. And in fact almost all explicit lower bounds currently known are quite weak, and essentially boil down just to arguments about information content—like that you need O(log *n*) steps to even read all the digits in the value of *n*.

Undoubtedly the most famous lower-bound problem is the P vs. NP question. I don’t think there’s a direct relation to our rule 30 problem (which is more like a P vs. LOGTIME question), but it’s perhaps worth understanding how things are connected. The basic point is that the forward evolution of a cellular automaton, say for *n* steps from an initial condition with *n* cells specified, is at most an O(*n*^{2}) computation, and is therefore in P (“polynomial time”). But the question of whether there exists an initial condition that evolves to produce some particular final result is in NP. If you happen (“non-deterministically”) to pick the correct initial condition, then it’s polynomial time to check that it’s correct. But there are potentially 2^{n} possible initial conditions to check.

Of course there are plenty of cellular automata where you don’t have to check all these 2^{n} initial conditions, and a polynomial-time computation clearly suffices. But it’s possible to construct a cellular automaton where finding the initial condition is an NP-complete problem, or in other words, where it’s possible to encode any problem in NP in this particular cellular automaton inversion problem. Is the rule 30 inversion problem NP-complete? We don’t know, though it seems conceivable that it could be proved to be (and if one did prove it then rule 30 could finally be a provably NP-complete cryptosystem).
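To make the asymmetry concrete, here is a small illustrative Python sketch (my own code, not from the analysis above) of the rule 30 inversion problem: checking one candidate initial condition takes linear time, but the naive search over initial conditions for a length-*n* target examines up to 2^{*n*+2} candidates.

```python
from itertools import product

def rule30_step(cells):
    """One rule 30 step: each new cell is left XOR (center OR right).
    With no wraparound, the row shrinks by two cells per step."""
    return tuple(cells[i - 1] ^ (cells[i] | cells[i + 1])
                 for i in range(1, len(cells) - 1))

def preimages(target):
    """Brute-force inversion: every row of length n+2 that steps to `target`.
    Verifying one candidate is cheap; the search space has 2^(n+2) members."""
    n = len(target) + 2
    return [c for c in product((0, 1), repeat=n)
            if rule30_step(c) == tuple(target)]
```

A nondeterministic machine could simply guess the right preimage and verify it quickly; a deterministic search has no obvious way to avoid the exponential enumeration.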

But there doesn’t seem to be a direct connection between the inversion problem for rule 30, and the problem of predicting the center column. Still, there’s at least a more direct connection to another global question: whether rule 30 is computation universal, or, in other words, whether there exist initial conditions for rule 30 that allow it to be “programmed” to perform any computation that, for example, any Turing machine can perform.

We know that among the 256 simplest cellular automata, rule 110 is universal (as are three other rules that are simple transformations of it). But looking at a typical example of rule 110 evolution, it’s already clear that there are definite, modular structures one can identify. And indeed the proof proceeds by showing how one can “engineer” a known universal system out of rule 110 by appropriately assembling these structures.

```wolfram
SeedRandom[23542345];
ArrayPlot[CellularAutomaton[110, RandomInteger[1, 600], 400], PixelConstrained -> 1]
```

Rule 30, however, shows no such obvious modularity—so it doesn’t seem plausible that one can establish universality in the “engineering” way it’s been established for all other known-to-be-universal systems. Still, my Principle of Computational Equivalence strongly suggests that rule 30 is indeed universal; we just don’t yet have an obvious direction to take in trying to prove it.

If one can show that a system is universal, however, then this does have implications that are closer to our rule 30 problem. In particular, if a system is universal, then there’ll be questions (like the halting problem) about its infinite-time behavior that will be undecidable, and which no guaranteed-finite-time computation can answer. But as such, universality is a statement about the existence of initial conditions that reproduce a given computation. It doesn’t say anything about the specifics of a particular initial condition—or about how long it will take to compute a particular result.

OK, but what about a different direction: what about getting empirical evidence about our Problem 3? Is there a way to use statistics, or cryptanalysis, or mathematics, or machine learning to even slightly reduce the computational effort needed to compute the *n*^{th} value in the center column?

Well, we know that the whole 2D pattern of rule 30 is far from random. In fact, of all 2^{*m*²} conceivable *m* × *m* patches, only a vastly smaller number can actually occur—and in practice the number weighted by probability is smaller still. And I don’t doubt that facts like this can be used to reduce the effort to compute the center column to less than O(*n*^{2}) effort (and that would be a nice partial result). But can it be less than O(*n*) effort? That’s a much more difficult question.

Clearly if Problem 1 were answered in the negative then it could be. But in a sense asking for less than O(*n*) computation of the center column is precisely like asking whether there are “predictable regularities” in it. Of course, even if one could find small-scale statistical regularities in the sequence (as answering Problem 2 in the negative would imply), these wouldn’t on their own give one a way to do more than perhaps slightly improve a constant multiplier in the speed of computing the sequence.

Could there be some systematically reduced way to compute the sequence using a neural net—which is essentially a collection of nested real-number functions? I’ve tried to find such a neural net using our current deep-learning technology—and haven’t been able to get anywhere at all.

What about statistical methods? If we could find statistical non-randomness in the sequence, then that would imply an ability to compress the sequence, and thus some redundancy or predictability in the sequence. But I’ve tried all sorts of statistical randomness tests on the center column of rule 30—and never found any significant deviation from randomness. (And for many years—until we found a slightly more efficient rule—we used sequences from finite-size rule 30 systems as our source of random numbers in the Wolfram Language, and no legitimate “it’s not random!” bugs ever showed up.)

Statistical tests of randomness typically work by saying, “Take the supposedly random sequence and process it in some way, then see if the result is obviously non-random”. But what kind of processing should be done? One might see if blocks occur with equal frequency, or if correlations exist, or if some compression algorithm succeeds in doing compression. But typically batteries of tests end up seeming a bit haphazard and arbitrary. In principle one can imagine enumerating all possible tests—by enumerating all possible programs that can be applied to the sequence. But I’ve tried doing this, for example for classes of cellular automaton rules—and have never managed to detect any non-randomness in the rule 30 sequence.

So how about using ideas from mathematics to predict the rule 30 sequence? Well, as such, rule 30 doesn’t seem connected to any well-developed area of math. But of course it’s conceivable that some mapping could be found between rule 30 and ideas, say, in an area like number theory—and that these could either help in finding a shortcut for computing rule 30, or could show that computing it is equivalent to some problem like integer factoring that’s thought to be fundamentally difficult.

I know a few examples of interesting interplays between traditional mathematical structures and cellular automata. For example, consider the digits of successive powers of 3 in base 2 and in base 6:

```wolfram
Row[Riffle[
  ArrayPlot[#, ImageSize -> {Automatic, 275}] & /@ {
    Table[IntegerDigits[3^t, 2, 159], {t, 100}],
    Table[IntegerDigits[3^t, 6, 62], {t, 100}]}, Spacer[10]]]
```

It turns out that in the base 6 case, the rule for generating the pattern is exactly a cellular automaton. (For base 2, there are additional long-range carries.) But although both these patterns look complex, it turns out that their mathematical structure lets us speed up making certain predictions about them.

Consider the *s*^{th} digit from the right-hand edge of line *n* in each pattern. It’s just the *s*^{th} digit in 3^{*n*}, which is given by the “formula” (where *b* is the base, here 2 or 6) `Mod[Quotient[3^n, b^s], b]`. But how easy is it to evaluate this formula? One might think that to compute 3^{*n*} one would have to do *n* multiplications. But in fact one can use repeated squaring (working modulo *b*^{*s*+1} throughout, as `PowerMod` does), so that the digit can be found with only O(log *n*) arithmetic operations.
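The shortcut is easy to demonstrate. Python’s built-in three-argument `pow` does modular exponentiation by repeated squaring, so one can extract the *s*^{th} base-*b* digit of 3^{*n*} without ever forming 3^{*n*} itself (an illustrative sketch; the function name is mine):

```python
def digit_of_power(n, b, s):
    """The s-th base-b digit (from the right, 0-indexed) of 3^n,
    computed via repeated squaring modulo b^(s+1) -- O(log n) multiplications."""
    return pow(3, n, b ** (s + 1)) // b ** s
```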

Talking of mathematical structure, it’s worth mentioning that there are more formula-like ways to state the basic rule for rule 30. For example, taking the values of three adjacent cells to be *p*, *q*, *r*, the basic rule is just `p ⊻ (q ∨ r)`, or, in Wolfram Language terms, `Xor[p, Or[q, r]]`.

To work out *n* steps in the evolution of rule 30, one’s effectively got to repeatedly compose the basic rule. And so far as one can tell, the symbolic expressions that arise just get more and more complicated—and don’t show any sign of simplifying in such a way as to save computational work.

In Problem 3, we’re talking about the computational effort to compute the *n*^{th} value in the center column of rule 30—and asking if it can be less than O(*n*). But imagine that we have a definite algorithm for doing the computation. For any given *n*, we can see what computational resources it uses. Say the result is *r*[*n*]. Then what we’re asking is whether *r*[*n*] is less than “big O” of *n*, or whether `MaxLimit[ r[n]/n, n → ∞]<∞`.

But imagine that we have a particular Turing machine (or some other computational system) that’s implementing our algorithm. It could be that *r*[*n*] will at least asymptotically just be a smooth or otherwise regular function of *n* for which it’s easy to see what the limit is. But if one just starts enumerating Turing machines, one encounters examples where *r*[*n*] appears to have peaks of random heights in random places. It might even be that somewhere there’d be a value of *n* for which the Turing machine doesn’t halt (or whatever) at all, so that *r*[*n*] is infinite. And in general, as we’ll discuss in more detail later, it could even be undecidable just how *r*[*n*] grows relative to O(*n*).

So far, I’ve mostly described the Prize Problems in words. But we can also describe them in computational language (or effectively also in math).

In the Wolfram Language, the first *t* values in the center column of rule 30 are given by:

```wolfram
c[t_] := CellularAutomaton[30, {{1}, 0}, {t, {{0}}}]
```

And with this definition, the three problems can be stated as predicates about *c*[*t*].

```wolfram
NotExists[{p, i}, ForAll[t, t > i, c[t + p] == c[t]]]
```

or “there does not exist a period *p* and an initial length *i* such that for all *t* with *t*>*i*, *c*[*t* + *p*] equals *c*[*t*]”.
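In the same empirical spirit, one can at least test finite prefixes of a sequence against this predicate. Here is a hypothetical Python helper (my own sketch, not from the formal statement) that searches for a period *p* and transient length *i* consistent with eventual periodicity:

```python
def find_eventual_period(seq):
    """Look for a period p and transient length i such that
    seq[j + p] == seq[j] for every j >= i, demanding at least two full
    extra periods of evidence beyond the transient.
    Returns (p, i) for the smallest such p, or None."""
    n = len(seq)
    for p in range(1, n // 3 + 1):
        j = n - p - 1                     # largest j we can check
        while j >= 0 and seq[j + p] == seq[j]:
            j -= 1
        i = j + 1                         # start of the matching tail
        if i <= n - 3 * p:                # enough evidence of periodicity
            return p, i
    return None
```

On the first billion known values of the rule 30 center column, a search like this finds nothing—which is of course evidence, not proof.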

```wolfram
DiscreteLimit[Total[c[t]]/t, t -> Infinity] == 1/2
```

or “the discrete limit of `Total[c[t]]/t` as *t* → ∞ is 1/2”.
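One can check this empirically on a finite prefix. Here is a small illustrative Python sketch (the names are mine, not from the post) that simulates rule 30 directly from the neighborhood rule and estimates the fraction of black cells in the center column:

```python
def center_column_naive(n):
    """First n center-column values of rule 30 from a single black cell,
    by direct simulation of the rule: new cell = left XOR (center OR right)."""
    row = {0: 1}                          # sparse row: position -> cell value
    out = []
    for t in range(n):
        out.append(row.get(0, 0))
        new = {}
        for i in range(-t - 1, t + 2):
            if row.get(i - 1, 0) ^ (row.get(i, 0) | row.get(i + 1, 0)):
                new[i] = 1
        row = new
    return out

col = center_column_naive(1000)
fraction = sum(col) / len(col)            # empirically close to the conjectured 1/2
```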

Define `machine[m]` to be a machine parametrized by *m* (say, the *m*^{th} program of some type), with `machine[m][n]` giving a pair consisting of the *n*^{th} computed value and the computational effort used to get it. Then Problem 3 can be stated as:

```wolfram
NotExists[m, ForAll[n, machine[m][n][[1]] == Last[c[n]]] &&
  MaxLimit[machine[m][n][[2]]/n, n -> Infinity] < Infinity]
```

or “there does not exist a machine *m* which for all *n* gives *c*[*n*], and for which the lim sup of the amount of computational effort spent, divided by *n*, is finite”. (Yes, one should also require that *m* be finite, so the machine’s rule can’t just store the answer.)

Before we discuss the individual problems, an obvious question to ask is how the problems might be interdependent. If the answer to Problem 3 is negative (which I very strongly doubt), then there could be simple algorithms or formulas from which the answers to Problems 1 and 2 would become straightforward. If the answer to Problem 3 is affirmative (as I strongly suspect), then it implies that the answer to Problem 1 must also be affirmative. The contrapositive is also true: if the answer to Problem 1 is negative, then the answer to Problem 3 must also be negative.

If the answer to Problem 1 is negative, so that there is some periodic sequence that eventually appears in the center column, then if one explicitly knows that sequence, one can immediately answer Problem 2. One might think that answering Problem 2 in the negative would imply something about Problem 3. And, yes, unequal probabilities for black and white would imply compression by a constant factor in a Shannon-information way. But to compute the *n*^{th} value with less than O(*n*) resources—and therefore to answer Problem 3 in the negative—requires that one be able to identify in a sense infinitely more compression.

So what does it take to establish the answers to the problems?

If Problem 1 is answered in the negative, then one can imagine explicitly exhibiting the pattern generated by rule 30 at some known step—and being able to see the periodic sequence in the center. Of course, Problem 1 could still be answered in the negative, but less constructively. One might be able to show that eventually the sequence has to be periodic, but not know even any bound on where this might happen. If Problem 3 is answered in the negative, a way to do this is to explicitly give an algorithm (or, say, a Turing machine) that does the computation with less than O(*n*) computational resources.

But let’s say one has such an algorithm. One still has to prove that for all *n*, the algorithm will correctly reproduce the *n*^{th} value. This might be easy. Perhaps there would just be a proof by induction or some such. But it might be arbitrarily hard. For example, it could be that for most *n*, the running time of the algorithm is clearly less than *n*. But it might not be obvious that the running time will always even be finite. Indeed, the “halting problem” for the algorithm might simply be undecidable. But just showing that a particular algorithm doesn’t halt for a given *n* doesn’t really tell one anything about the answer to the problem. For that one would have to show that no algorithm exists that will successfully halt in less than O(*n*) time.

The mention of undecidability brings up an issue, however: just what axiom system is one supposed to use to answer the problems? For the purposes of the Prize, I’ll just say “the traditional axioms of standard mathematics”, which one can assume are Peano arithmetic and/or the axioms of set theory (with or without the continuum hypothesis).

Could it be that the answers to the problems depend on the choice of axioms—or even that they’re independent of the traditional axioms (in the sense of Gödel’s incompleteness theorem)? Historical experience in mathematics makes this seem extremely unlikely, because, to date, essentially all “natural” problems in mathematics seem to have turned out to be decidable in the (sometimes rather implicit) axiom system that’s used in doing the mathematics.

In the computational universe, though—freed from the bounds of historical math tradition—it’s vastly more common to run into undecidability. And, actually, my guess is that a fair fraction of long-unsolved problems even in traditional mathematics will also turn out to be undecidable. So that definitely raises the possibility that the problems here could be independent of at least some standard axiom systems.

OK, but assume there’s no undecidability around, and one’s not dealing with the few cases in which one can just answer a problem by saying “look at this explicitly constructed thing”. Well, then to answer the problem, we’re going to have to give a proof.

In essence what drives the need for proof is the presence of something infinite. We want to know something for any *n*, even infinitely large, etc. And the only way to handle this is then to represent things symbolically (“the symbol `Infinity` means infinity”, etc.), and apply formal rules to everything, defined by the axioms in the underlying axiom system one’s assuming.

In the best case, one might be able to just explicitly exhibit that series of rule applications—in such a way that a computer can immediately verify that they’re correct. Perhaps the series of rule applications could be found by automated theorem proving (as in `FindEquationalProof`). More likely, it might be constructed using a proof assistant system.

It would certainly be exciting to have a fully formalized proof of the answer to any of the problems. But my guess is that it’ll be vastly easier to construct a standard proof of the kind human mathematicians traditionally do. What is such a proof? Well, it’s basically an argument that will convince other humans that a result is correct.

There isn’t really a precise definition of that. In our step-by-step solutions in Wolfram|Alpha, we’re effectively proving results (say in calculus) in such a way that students can follow them. In an academic math journal, one’s giving proofs that successfully get past the peer review process for the journal.

My own guess would be that if one were to try to formalize essentially any nontrivial proof in the math literature, one would find little corners that require new results, though usually ones that wouldn’t be too hard to get.

How can we handle this in practice for our prizes? In essence, we have to define a computational contract for what constitutes success, and when prize money should be paid out. For a constructive proof, we can get Wolfram Language code that can explicitly be run on any sufficiently large computer to establish the result. For formalized proofs, we can get Wolfram Language code that can run through the proof, validating each step.

But what about for a “human proof”? Ultimately we have no choice but to rely on some kind of human review process. We can ask multiple people to verify the proof. We could have some blockchain-inspired scheme where people “stake” the correctness of the proof, then if one eventually gets consensus (whatever this means) one pays out to people some of the prize money, in proportion to their stake. But whatever is done, it’s going to be an imperfect, “societal” result—like almost all of the pure mathematics that’s so far been done in the world.

OK, so for people interested in working on the Problems, what skills are relevant? I don’t really know. It could be discrete and combinatorial mathematics. It could be number theory, if there’s a correspondence with number-based systems found. It could be some branch of algebraic mathematics, if there’s a correspondence with algebraic systems found. It could be dynamical systems theory. It could be something closer to mathematical logic or theoretical computer science, like the theory of term rewriting systems.

Of course, it could be that no existing towers of knowledge—say in branches of mathematics—will be relevant to the problems, and that to solve them will require building “from the ground up”. And indeed that’s effectively what ended up happening in the solution for my 2,3 Turing Machine Prize in 2007.

I’m a great believer in the power of computer experiments—and of course it’s on the basis of computer experiments that I’ve formulated the Rule 30 Prize Problems. But there are definitely more computer experiments that could be done. So far we know a billion elements in the center column sequence. And so far the sequence doesn’t seem to show any deviation from randomness (at least based on tests I’ve tried). But maybe at a trillion elements (which should be well within range of current computer systems) or a quadrillion elements, or more, it eventually will—and it’s definitely worth doing the computations to check.

The direct way to compute *n* elements in the center column is to run rule 30 for *n* steps, using at an intermediate stage up to *n* cells of memory. The actual computation is quite well optimized in the Wolfram Language. Running on my desktop computer, it takes less than 0.4 seconds to compute 100,000 elements:

```wolfram
CellularAutomaton[30, {{1}, 0}, {100000, {{0}}}]; // Timing
```

Internally, this is using the fact that rule 30 can be expressed as `Xor[p, Or[q, r]]`, and implemented using bitwise operations on whole words of data at a time. Using explicit bitwise operations on long integers takes about twice as long as the built-in `CellularAutomaton`:

```wolfram
Module[{a = 1},
  Table[BitGet[a, a = BitXor[a, BitOr[2 a, 4 a]]; i - 1], {i, 100000}]]; // Timing
```
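Outside the Wolfram Language, the same bitwise trick ports directly to, say, Python’s arbitrary-precision integers (an illustrative sketch of my own): the whole row lives in one integer, each step is two shifts, an OR and an XOR, and the center cell of row *i* sits at bit *i*, because the pattern drifts one bit per step under this encoding.

```python
def rule30_center(n):
    """First n values of the rule 30 center column.
    Row 0 is the integer 1; one step is a = a XOR ((a << 1) OR (a << 2)),
    which applies left XOR (center OR right) to every cell at once."""
    a, out = 1, []
    for i in range(n):
        out.append((a >> i) & 1)      # center cell of row i sits at bit i
        a ^= (a << 1) | (a << 2)
    return out
```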

But these results are from a single CPU. It’s perfectly possible to imagine parallelizing across many CPUs, or using GPUs. One might imagine that one could speed up the computation by effectively caching the results of many steps in rule 30 evolution, but the fact that across the rows of the rule 30 pattern all blocks appear to occur with at least roughly equal frequency makes it seem as though this would not lead to significant speedup.

Solving some types of math-like problems seems pretty certain to require deep knowledge of high-level existing mathematics. For example, it seems quite unlikely that there can be an “elementary” proof of Fermat’s last theorem, or even of the four-color theorem. But for the Rule 30 Prize Problems it’s not clear to me. Each of them might need sophisticated existing mathematics, or they might not. They might be accessible only to people professionally trained in mathematics, or they might be solvable by clever “programming-style” or “puzzle-style” work, without sophisticated mathematics.

Sometimes the best way to solve a specific problem is first to solve a related problem—often a more general one—and then come back to the specific problem. And there are certainly many problems related to the Rule 30 Prize Problems that one can consider.

For example, instead of looking at the vertical column of cells at the center of the rule 30 pattern, one could look at a column of cells in a different direction. At 45°, it’s easy to see that any sequence must be periodic. On the left the periods increase very slowly; on the right they increase rapidly. But what about other angles?

Or what about looking at rows of cells in the pattern? Do all possible blocks occur? How many steps is it before any given block appears? The empirical evidence doesn’t show any deviation from blocks occurring as if at random, but obviously, for example, successive rows are highly correlated.

What about different initial conditions? There are many dynamical systems–style results about the behavior of rule 30 starting with equal probability from all possible infinite initial conditions. In this case, for example, it’s easy to show that all possible blocks occur with equal frequency, both at a given row, and in a given vertical column. Things get more complicated if one asks for initial conditions that correspond, for example, to all possible sequences generated by a given finite state machine, and one could imagine that from a sequence of results about different sets of possible initial conditions, one would eventually be able to say something about the case of the single black cell initial condition.

Another straightforward generalization is just to look not at a single black cell initial condition, but at other “special” initial conditions. An infinite periodic initial condition will always give periodic behavior (that’s the same as one gets in a finite-size region with periodic boundary conditions). But one can, for example, study what happens if one puts a “single defect” in the periodic pattern:

```wolfram
GraphicsRow[(ArrayPlot[
    CellularAutomaton[30,
      MapAt[1 - #1 &, Flatten[Table[#1, Round[150/Length[#1]]]], 50],
      100]] &) /@ {{1, 0}, {1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0},
    {1, 0, 0, 0, 0, 0, 0}, {1, 1, 1, 0, 0}}]
```

One can also ask what happens when one has not just a single black cell, but some longer sequence in the initial conditions. How does the center column change with different initial sequences? Are there finite initial sequences that lead to “simpler” center columns?

Or are there infinite initial conditions generated by other computational systems (say substitution systems) that aren’t periodic, but still give somehow simple rule 30 patterns?

Then one can imagine going “beyond” rule 30. What happens if one adds longer-range “exceptions” to the rules? When do extensions to rule 30 show behavior that can be analyzed in one way or another? And can one then see the effect of removing the “exceptions” in the rule?

Of course, one can consider rules quite different from rule 30 as well—and perhaps hope to develop intuition or methods relevant to rule 30 by looking at other rules. Even among the 256 two-color nearest-neighbor rules, there are others that show complex behavior starting from a simple initial condition:

```wolfram
Row[Riffle[
  Labeled[ArrayPlot[CellularAutomaton[#, {{1}, 0}, {150, All}],
      PixelConstrained -> 1, Frame -> False],
    Style[Text[StringTemplate["rule ``"][#]], 12],
    LabelStyle -> Opacity[.5]] & /@ {45, 73}, Spacer[8]]]
```

And if one looks at larger numbers of colors and larger neighborhoods one can find an infinite number of examples. There’s all sorts of behavior that one sees. And, for example, given any particular sequence, one can search for rules that will generate it as their center column. One can also try to classify the center-column sequences that one sees, perhaps identifying a general class “like rule 30” about which global statements can be made.

But let’s discuss the specific Rule 30 Prize Problems. To investigate the possibility of periodicity in rule 30 (as in Problem 1), one could study lots of different rules, looking for examples with very long periods, or very long transients—and try to use these to develop an intuition for how and when these can occur.
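For experiments along these lines, it helps to have a general-purpose routine. Here is a hypothetical Python helper (my own sketch, not from the post) that computes the center column of any of the 256 elementary rules from a single black cell, ready to be fed into period- or transient-finding searches:

```python
def eca_center(rule, n):
    """Center column of elementary cellular automaton `rule` (0..255),
    started from a single black cell on a white background."""
    table = [(rule >> i) & 1 for i in range(8)]   # outputs for neighborhoods 0..7
    row = {0: 1}                                  # sparse row: position -> value
    out = []
    for t in range(n):
        out.append(row.get(0, 0))
        new = {}
        for i in range(-t - 1, t + 2):
            idx = 4 * row.get(i - 1, 0) + 2 * row.get(i, 0) + row.get(i + 1, 0)
            if table[idx]:
                new[i] = 1
        row = new
    return out
```

For example, `eca_center(90, n)` is 1 followed by all 0s (the nested, additive case), while `eca_center(30, n)` reproduces the sequence discussed throughout this post.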

To investigate the equal-frequency phenomenon of Problem 2, one can look at different statistical features, and see both in rule 30 and across different rules when it’s possible to detect regularity.

For Problem 3, one can start looking at different levels of computational effort. Can one find the *n*^{th} value with computational effort O(*n*^{γ}) for any γ<2 (I don’t know any method to achieve this)? Can one show that one can’t find the *n*^{th} value with less than O(log(*n*)) computational effort? What about with less than O(log(*n*)) available memory? What about for different rules? Periodic and nested patterns are easy to compute quickly. But what other examples can one find?

As I’ve mentioned, a big achievement would be to show computation universality for rule 30. But even if one can’t do it for rule 30, finding additional examples (beyond, for example, rule 110) will help build intuition about what might be going on in rule 30.

Then there’s NP-completeness. Is there a way of setting up some question about the behavior of rule 30 for some family of initial conditions where it’s possible to prove that the question is NP-complete? If this worked, it would be an exciting result for cryptography. And perhaps, again, one can build up intuition by looking at other rules, even ones that are more “purposefully constructed” than rule 30.

When I set up my 2,3 Turing Machine Prize in 2007 I didn’t know if it’d be solved in a month, a year, a decade, a century, or more. As it turned out, it was actually solved in about four months. So what will happen with the Rule 30 Prize Problems? I don’t know. After nearly 40 years, I’d be surprised if any of them could now be solved in a month (but it’d be really exciting if that happened!). And of course some superficially similar problems (like features of the digits of π) have been out there for well over a century.

It’s not clear whether there’s any sophisticated math (or computer science) that exists today that will be helpful in solving the problems. But I’m confident that whatever is built to solve them will provide structure that will be important for solving other problems about the computational universe. And the longer it takes (think Fermat’s last theorem), the larger the amount of useful structure is likely to be built on the way to a solution.

I don’t know if solutions to the problems will be “obviously correct” (it’ll help if they’re constructive, or presented in computable form), or whether there’ll be a long period of verification to go through. I don’t know if proofs will be comparatively short, or outrageously long. I don’t know if the solutions will depend on details of axiom systems (“assuming the continuum hypothesis”, etc.), or if they’ll be robust for any reasonable choices of axioms. I don’t know if the three problems are somehow “comparably difficult”—or if one or two might be solved, with the others holding out for a very long time.

But what I am sure about is that solving any of the problems will be a significant achievement. I’ve picked the problems to be specific, definite and concrete. But the issues of randomness and computational irreducibility that they address are deep and general. And to know the solutions to these problems will provide important evidence and raw material for thinking about these issues wherever they occur.

Of course, having lived now with rule 30 and its implications for nearly 40 years, I will personally be thrilled to know for certain even a little more about its remarkable behavior.

*To comment, please visit the copy of this post at Stephen Wolfram’s Writings »*

Real-time filters work like magic. Usually out of sight, they clean data to make it useful for the larger system they are part of, and sometimes even for human consumption. A fascinating thing about these filters is that they don’t have a big-picture perspective. They work wonders with only a small window into the data that is streaming in. On the other hand, if I had a stream of numbers flying across my screen, I would at the very least need to plot it to make sense of the data. These types of filters are very simple as well.

Take a basic lowpass filter based on two weights, 49/50 and 1/50. The filter computes its next output as a weighted sum of the current output and input. Specifically, `output_{k+1} = 49/50 output_{k} + 1/50 input_{k}`. So with a few elementary operations, it essentially strips away the high-frequency component of a properly sampled input signal:

```wolfram
With[{u = Table[Sin[t/10] + Sin[10 t], {t, 0, 150, 1/10.}]},
  ListLinePlot[{u, RecurrenceFilter[{{1, -(49/50)}, {1/50}}, u]},
    PlotLegends -> {"original signal", "filtered signal"}]]
```

In hindsight, this output makes sense. The filter gives only a small weight (1/50) to the current input, and hence largely misses the high-frequency portion of the signal, effectively getting rid of it. Any slowly changing component, on the other hand, has already been accumulated in the output.
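Before moving on to the Wolfram Language design functions, the recurrence itself is easy to sanity-check in a few lines of Python (an illustrative sketch of my own, not the post’s code): a constant input is passed through essentially unchanged, while a rapidly alternating one is almost entirely suppressed.

```python
def lowpass(xs, a=49/50, b=1/50, y0=0.0):
    """First-order lowpass: y[k+1] = a*y[k] + b*x[k]."""
    ys, y = [], y0
    for x in xs:
        y = a * y + b * x
        ys.append(y)
    return ys

# A constant input settles at b/(1 - a) = 1 times the input level,
# while the fastest-alternating input is scaled by about b/(1 + a), roughly 0.01.
dc = lowpass([1.0] * 600)
hf = lowpass([(-1.0) ** k for k in range(600)])
```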

The filter weights need not be conjured up or arrived at by trial and error. They can be obtained in a systematic fashion using the signal processing functions in the Wolfram Language. (And that’s what I will be doing in the next section for a couple of nontrivial filters.) I can also compute the frequency response of the filter, which will tell me how it will shift and smother or amplify a signal. This gives a ballpark idea of what to expect from the real-time responses of the filter. I can simulate its real-time response to various signals as well. But how does the filter perform in real time? Can I see the shifting and smothering? And how does this compare to what is predicted by the frequency response?
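For the simple lowpass above, the frequency response can be read off directly from its transfer function H(z) = b/(1 − a z^{−1}); here is a quick Python sketch under that assumption (the function name is mine):

```python
import cmath
import math

def gain(w, a=49/50, b=1/50):
    """Magnitude response of y[k] = a*y[k-1] + b*x[k] at normalized angular
    frequency w, in radians per sample (w = pi is the Nyquist frequency)."""
    return abs(b / (1 - a * cmath.exp(-1j * w)))
```

`gain(0)` is exactly b/(1 − a) = 1, so constant signals pass untouched, while `gain(math.pi)` is b/(1 + a), roughly 0.01, matching the suppression of the fast component seen earlier.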

To investigate all this, I will pick some slightly more involved filters—namely a bandpass Butterworth and a Chebyshev 1 filter—and deploy them to an Arduino Nano. To go about this, I need several things. Luckily, they are all now available in the Wolfram Language.

The functionality to analyze and design filters has existed in the Wolfram Language for quite some time now. The next link in the workflow is the C or Arduino filter code that needs to be deployed to the microcontroller. Thanks to the Microcontroller Kit, which was released with Version 12, I can generate and deploy the code directly from the Wolfram Language. It will automatically generate the needed microcontroller code, sparing me the wearisome task of having to write the code and make sure that I have the coefficients correct. And finally, I use the Wolfram Device Framework to transmit and receive the data from the Nano.

In the first part of this blog post, I will compute, analyze and deploy the filters. In the second part, I will acquire and analyze the filtered data to visualize the responses and evaluate the performance of the filters.

I start off by creating a function to compute a bandpass filter that will pass signals with frequencies between 2 Hz and 5 Hz, and attenuate signals at 1 Hz and 10 Hz by at least 30 dB:

tfm[filter_, s_: s] := filter[{"Bandpass", {1, 2, 5, 10}, {30, 1}}, s] // N // TransferFunctionExpand // Chop

With this, I obtain the corresponding Butterworth and Chebyshev 1 bandpass filters:

{brw, cbs} = tfm /@ {ButterworthFilterModel, Chebyshev1FilterModel};

Grid[{{"Butterworth filter", brw}, {"Chebyshev 1 filter", cbs}},
 Background -> {None, {Lighter[#, 0.6] & /@ {cB, cC}}},
 Frame -> True, Spacings -> {1, 1}, Alignment -> Left]

The Bode magnitude plot attests that frequencies between 2 Hz and 5 Hz will be passed through, and frequencies outside this range will be attenuated, with an attenuation of around -35 dB at 1 Hz and 10 Hz:

{bMagPlot, bPhasePlot} = BodePlot[{brw, cbs}, {0.5, 18},
  PlotLayout -> "List",
  GridLines -> Table[{{1, 2, 5, 10}, Automatic}, {2}],
  FrameLabel -> {{"frequency (Hz)", "magnitude (dB)"}, {"frequency (Hz)", "phase (deg)"}},
  FrameTicks -> Table[{{Automatic, Automatic}, {{1, 2, 5, 10}, Automatic}}, {2}],
  ImageSize -> Medium, PlotStyle -> {cB, cC, cI}, PlotTheme -> "Detailed",
  PlotLegends -> {"Butterworth", "Chebyshev 1"}];

bMagPlot

The phase plot shows that at around 3 Hz for the Butterworth filter and 2 Hz for the Chebyshev 1 filter, there will be no phase shift. At all other frequencies, there will be varying amounts of phase shift:

bPhasePlot

Later, I will put these frequency responses side by side with the filtered response from the microcontroller to check if they add up.

For now, I will simulate the response of the filters to a signal with three frequency components. The second component lies in the bandpass range, while the other two lie outside it:

inpC = Sin[0.75 t] + Sin[2.5 t] + Sin[10 t];

The responses have a single frequency component; the two frequencies outside the bandpass range have in fact been stripped away:

outC = Table[OutputResponse[sys, inpC, {t, 0, 15}], {sys, {brw, cbs}}];
Plot[{outC, inpC}, {t, 0, 15},
 PlotLegends -> {"Butterworth", "Chebyshev 1", "Input"},
 PlotStyle -> {cB, cC, cI}, PlotTheme -> "Detailed"]

I then discretize the filters and put them together into one model:

sp = 0.1;

StateSpaceModel[ToDiscreteTimeModel[#, sp]] & /@ {brw, cbs};
sysD = NonlinearStateSpaceModel[
    SystemsModelParallelConnect[Sequence @@ %, {{1, 1}}, {}]] /. Times[1.`, v_] :> v // Chop
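ToDiscreteTimeModel performs the discretization here. To show what such a step amounts to, here is a hand-rolled bilinear (Tustin) discretization of a first-order lowpass `H(s) = a/(s + a)` in Python — an illustration of the general idea, not necessarily the method the Wolfram Language uses internally:

```python
def tustin_first_order(a, T):
    """Discretize H(s) = a/(s + a) with s -> (2/T)(z - 1)/(z + 1).

    Returns (b0, b1, a1) for the recurrence
    y_k = a1*y_{k-1} + b0*u_k + b1*u_{k-1}.
    """
    k = 2.0 / T
    den = k + a
    b0 = a / den
    b1 = a / den
    a1 = (k - a) / den
    return b0, b1, a1

b0, b1, a1 = tustin_first_order(a=2.0, T=0.1)
# The DC gain of the discrete filter, (b0 + b1)/(1 - a1), should equal
# the continuous filter's gain at s = 0, which is 1.
print(round((b0 + b1) / (1 - a1), 6))
```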

For good measure, I simulate and verify the response of the discretized model as well:

With[{tmax = 15},
 With[{inpD = Table[inpC, {t, 0, tmax, sp}]},
  ListLinePlot[Join[OutputResponse[sysD, inpD], {inpD}],
   PlotLegends -> {"Butterworth", "Chebyshev 1", "Input"},
   PlotStyle -> {cB, cC, cI}, PlotTheme -> "Detailed",
   PlotMarkers -> {Automatic, Scaled[0.025]}]]]

And I wrap up this first stage by deploying the filter and setting up the Arduino to send and receive the signals over the serial pins:

\[ScriptCapitalM] = MicrocontrollerEmbedCode[sysD,
  <|"Target" -> "ArduinoNano", "Inputs" -> "Serial", "Outputs" -> {"Serial", "Serial"}|>,
  "/dev/cu.usbserial-A106PX6Q"]

The port name /dev/cu.usbserial-A106PX6Q will not work for you. If you are following along, you will have to change it to the correct value. You can figure it out using Device Manager on Windows, or by searching for file names of the form /dev/cu.usb* and /dev/ttyUSB* on Mac and Linux systems, respectively.

At this point, I can connect any other serial device to send and receive the data. I will use the Device Framework in Mathematica to do that, as its notebook interface provides a great way to visualize the data in real time.

To set up the data transfer, I begin by identifying the start, delimiter and end bytes:

{sB, dB, eB} = Lookup[\[ScriptCapitalM]["Serial"], {"StartByte", "DelimiterByte", "EndByte"}]
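To make the wire format concrete: a framed packet is the start byte, the payload values separated by the delimiter byte, and the end byte. Here is a minimal Python sketch of framing and parsing such packets (the byte values 0x01, 0x2C and 0x0A are placeholders of my own, not the values the Microcontroller Kit actually assigns):

```python
START, DELIM, END = b"\x01", b"\x2c", b"\x0a"  # placeholder byte values

def frame(values):
    """Encode a list of numbers into one framed packet."""
    payload = DELIM.join(str(v).encode() for v in values)
    return START + payload + END

def parse(packet):
    """Decode a framed packet back into floats."""
    assert packet[:1] == START and packet[-1:] == END
    return [float(p) for p in packet[1:-1].split(DELIM)]

pkt = frame([1.25, -0.5])
print(parse(pkt))  # [1.25, -0.5]
```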

Then I create a scheduled task that reads the filtered output signals and sends the input signal over to the Arduino. It runs at exactly the same sampling period as the discretized filters:

i1 = i2 = 1;
yRaw = y1 = y2 = u1 = u2 = {};
task1 = ScheduledTask[
   If[DeviceExecute[dev, "SerialReadyQ"],
    If[i1 > len1, i1 = 1]; If[i2 > len2, i2 = 1];
    AppendTo[yRaw, DeviceReadBuffer[dev, "ReadTerminator" -> eB]];
    AppendTo[u1, in1[[i1++]]]; AppendTo[u2, in2[[i2++]]];
    DeviceWrite[dev, sB];
    DeviceWrite[dev, ToString[Last[u1] + Last[u2]]];
    DeviceWrite[dev, eB]],
   sp];

I also create a second and less frequent scheduled task that parses the data and discards the old values:

task2 = ScheduledTask[
   yRaw1 = yRaw;
   yRaw = Drop[yRaw, Length@yRaw1];
   {y1P, y2P} = Transpose[parseData /@ yRaw1];
   y1 = Join[y1, y1P]; y2 = Join[y2, y2P];
   {u1, u2, y1, y2} = drop /@ {u1, u2, y1, y2};,
   3 sp];

Now I am ready to actually send and receive the data, so I open a connection to the Arduino:

dev = DeviceOpen["Serial", "/dev/cu.usbserial-A106PX6Q"]

I then generate the input signals and submit the scheduled tasks to the kernel:

signals[w1, g1, w2, g2];
{taskObj1, taskObj2} = SessionSubmit /@ {task1, task2};

The input signals are of the form `g1 Sin[w1 t]` and `g2 Sin[w2 t]`.

At this point, the data is going back and forth between my Mac and the Arduino. To visualize the data and control the input signals, I create a panel. From the panel, I control the frequency and magnitude of the input signals. I plot the input and filtered signals, and also the frequency response of the filters. The frequency response plots have lines showing the expected magnitude and phase of the filtered signals, which I can verify on the signal plots:

DynamicModule[{iPart, ioPart,
  plotOpts = {ImageSize -> Medium, GridLines -> Automatic, Frame -> True, PlotTheme -> "Business"}},
 Panel[
  Column[{
    Grid[{{
       Panel[Column[{
          Style["Input signal 1", Bold, cI1],
          Grid[{{"freq.", Slider[Dynamic[w1, {None, (w1 = #) &, signals[#, g1, w2, g2] &}], {0, 7, 0.25}, Appearance -> "Labeled"]}}],
          Grid[{{"mag.", Slider[Dynamic[g1, {None, (g1 = #) &, signals[w1, #, w2, g2] &}], {0, 5, 0.25}, Appearance -> "Labeled"]}}]}]],
       Panel[Column[{
          Style["Input signal 2", Bold, cI2],
          Grid[{{"freq.", Slider[Dynamic[w2, {None, (w2 = #) &, signals[w1, g1, #, g2] &}], {0, 7, 0.25}, Appearance -> "Labeled"]}}],
          Grid[{{"mag.", Slider[Dynamic[g2, {None, (g2 = #) &, signals[w1, g1, w2, #] &}], {0, 5, 0.25}, Appearance -> "Labeled"]}}]}]]}}],
    Grid[{{
       Dynamic[ListLinePlot[Part[{u1, u2, u1 + u2}, iPart], PlotStyle -> Part[{cI1, cI2, cI}, iPart], PlotRange -> {All, (g1 + g2) {1, -1}}, plotOpts],
        TrackedSymbols :> {u1, u2}, Initialization :> (iPart = {3}; g1 = (g2 = 0); Null)],
       Dynamic[ListLinePlot[Part[Join[{y1, y2}, {u1 + u2}], ioPart], PlotStyle -> Part[{cB, cC, cI}, ioPart], PlotRange -> {All, (g1 + g2) {1, -1}}, FrameTicks -> {{None, All}, {Automatic, Automatic}}, plotOpts],
        TrackedSymbols :> {y1, y2, u1, u2}, Initialization :> (ioPart = {1, 2})]},
      {Grid[{{CheckboxBar[Dynamic[iPart], Thread[Range[3] -> ""], Appearance -> "Vertical"], lI}}],
       Grid[{{CheckboxBar[Dynamic[ioPart], Thread[Range[3] -> ""], Appearance -> "Vertical"], lIO}}]},
      {Labeled[Dynamic[BodePlot[{brw, cbs}, {0.5, 18}, PlotLayout -> "Magnitude", PlotLegends -> None, GridLines -> Automatic, ImageSize -> Medium, PlotStyle -> {cB, cC}, PlotTheme -> "Detailed", Epilog -> Join[mLine[w1, cI1], mLine[w2, cI2]]],
         TrackedSymbols :> {w1, w2}, SaveDefinitions -> True], "Bode magnitude"],
       Labeled[Dynamic[BodePlot[{brw, cbs}, {0.5, 18}, PlotLayout -> "Phase", PlotLegends -> None, GridLines -> Automatic, ImageSize -> Medium, PlotStyle -> {cB, cC}, PlotTheme -> "Detailed", Epilog -> Join[pLine[w1, cI1], pLine[w2, cI2]]],
         TrackedSymbols :> {w1, w2}, SaveDefinitions -> True], "Bode phase"]}},
     Frame -> {All, {True, None, True}}, FrameStyle -> Lighter[Gray, 0.6]]},
   Center]]]

Since the Arduino needs to be up and running to see the panel update dynamically, I’ll show you some examples of my results here:

Finally, before disconnecting the Arduino, I remove the tasks and close the connection to the device:

TaskRemove /@ {taskObj1, taskObj2};
DeviceClose[dev]

It’s interesting to see in real time how the response of the deployed filter closely matches that of the analog version.

I find it extremely convenient to have the design and the microcontroller code so closely coupled. Looking at just the microcontroller code, the coefficients of the filters are abstruse, especially compared to those of the simple first-order lowpass filter. The design specifications from which they were obtained, however, are very concrete; having both in one unified place makes the whole setup better suited for further analysis and experimentation.

The entire notebook with the filter design, analysis, code generation and visualization is available for download. Of course, you need an Arduino Nano or a similar microcontroller. And with just a few small tweaks—in some cases just changing the target and port name—you can replicate it on a whole host of other microcontrollers. Happy tinkering!

Get full access to the latest Wolfram Language functionality for the Microcontroller Kit with Mathematica 12.