How Laplace Would Hide a Goat: The New Science of Magic Windows

Michael Trott, August 25, 2017

Last week, I read Michael Berry’s paper, “Laplacian Magic Windows.” Over the years, I have read many interesting papers by this longtime Mathematica user, but this one stood out for maximizing the product of simplicity and unexpectedness. Michael discusses what he calls magic windows. For 70+ years, we have known about holograms, and now we also know about magic windows. So what exactly is a magic window? Here is a sketch of the optics of one:

Magic window optics sketch


Parallel light falls onto a glass sheet that is planar on one side and has some gentle surface variation on the other (the bumps in the above image are vastly overemphasized; the bumps of a real magic window would be minuscule). The light gets refracted by the magic window (the deviation angles of the refracted light rays are also overemphasized in the graphic) and falls onto a wall. Although the window bumpiness shows no recognizable shape or pattern, the light density variations on the wall show a clearly recognizable image. Starting with the image that one wants to see on the wall, one can always construct a window that shows the image one has selected. The variations in the thickness of the glass are assumed to be quite small, and the imaging plane is assumed to be not too far away, so that the refracted light does not form caustics—as one sees, for instance, at the bottom of a swimming pool in sunny conditions.

Now, how should the window surface look to generate any pre-selected image on the wall? It turns out that the image visible on the wall is the Laplacian of the window surface. Magic windows sound like magic, but they are just calculus (differentiation, to be precise) in action. Isn’t this a neat application of multivariate calculus? Schematically, these are the mathematical steps involved in a magic window.

Implementation-wise, the core steps are the following:

Magic window implementation
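As a compact, hedged preview of the whole computation (the test image, sizes and variable names are illustrative choices of mine; the discrete inverse Laplacian is the same FourierDST construction used in detail below):

(* a minimal sketch: build a magic-window height map for a target image with
   a discrete inverse Laplacian, then check that the Laplacian recovers it *)
img = ImageResize[ColorConvert[ExampleData[{"TestImage", "Mandrill"}], "Grayscale"], 128];
{hh, ww} = Reverse[ImageDimensions[img]];
surface = FourierDST[Table[-1./(4 - 2 Cos[x Pi/hh] - 2 Cos[y Pi/ww]),
     {y, hh}, {x, ww}] FourierDST[ImageData[img], 1], 1]; (* window height map *)
LaplacianFilter[Image[surface], 1] // ImageAdjust         (* ≈ the original image *)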

And while magic windows are a 2017 invention, their roots go back hundreds of years to so-called magic mirrors. Magic mirrors are the mirror equivalent of magic windows: they too can act as optical Laplace operators (see the following).

Expressed more mathematically: let the height of the bumpy side of the glass surface be f(x,y). Then the light intensity on the wall is approximately Δf(x,y), where Δ is the Laplacian ∂²/∂x² + ∂²/∂y². Michael calls such a window a “magic window.” It is magic because the glass surface height f(x,y) does not in any way resemble Δf(x,y).

It sounds miraculous that a window can operate as a Laplace operator. So let’s do some numerical experiments to convince ourselves that this does really work. Let’s start with a goat that we want to use as the image to be modeled. We just import a cute-looking Oberhasli dairy goat from the internet.

goat = ImageTake[RemoveAlphaChannel[ColorConvert[Import[           "https://s-media-cache-ak0.pinimg.com/originals/fa/60/ce/\      fa60ce323b5642a78abb1b1814fcd582.jpg"], "Grayscale"]], {1, -30}]

{w, h} = ImageDimensions[goat];

The gray values of the pixels can be viewed as a function f: ℝ×ℝ → [0,1]. Interpolation allows us to use this function constructively.

ifGoat = Interpolation[   Flatten[MapIndexed[{Reverse@#2, #1} &, ImageData[goat], {2}], 1]]

Here is a 3D plot of the goat function ifGoat.

Plot3D[ifGoat[x, y], {x, 1, w}, {y, 1, h}, ColorFunction -> GrayLevel,   MeshFunctions -> {}, PlotPoints -> 200,  BoxRatios -> {w, h, w/8}, ViewPoint -> {0, 2, 4},   ViewVertical -> {0, -1, 0}]


And we can solve the Poisson equation Δf = image, with the image as the right-hand side, using NDSolveValue.

We will use Dirichlet boundary conditions for now. (But the boundary conditions will not matter for the main argument.)

ndsGoat = NDSolveValue[{Laplacian[U[x, y], {x, y}] == ifGoat[x, y],     DirichletCondition[U[x, y] == 1/2, True]}, {U}, {x, y} \[Element]      Rectangle[{1, 1}, {w, h}],    Method -> {"PDEDiscretization" -> {"FiniteElement", {"MeshOptions" \ -> {MaxCellMeasure -> 1}}}}][[1]]

The Poisson equation solution is quite a smooth function; the inverse of the Laplace operator is a smoothing operation. No visual trace of the goat seems to be left.

Plot3D[Evaluate[ndsGoat[x, y]], {x, 1, w}, {y, 1, h},               MeshFunctions -> {#3 &}, PlotPoints -> 80,   AxesLabel -> {x, y}]


Overall it is smooth, and it is also still smooth when zoomed in.

Plot3D[Evaluate[ndsGoat[x, y]], {x, 100, 200}, {y, 100, 200},                MeshFunctions -> {#3 &}, PlotPoints -> 80,   AxesLabel -> {x, y}]


Even when repeatedly zoomed in.

Plot3D[Evaluate[ndsGoat[x, y]], {x, 190, 200}, {y, 190, 200},                MeshFunctions -> {#3 &}, PlotPoints -> 80,   AxesLabel -> {x, y}]


The overall shape of the Poisson equation solution can be easily understood through the Green’s function of the Laplace operator.

GreenFunction[{Laplacian[u[x, y],{x, y}],    DirichletCondition[u[x, y] == 0, True]}, u[x, y],                                 {x, y} \[Element]    Rectangle[{0, 0}, {Lx, Ly}], {m, n}]

We calculate and visualize the first few terms (individually) of the double sum from the Green’s function.

integral[{jx_, jy_}, {kx_, ky_}] =   Integrate[   Sin[Pi x kx/Lx] Sin[Pi y ky/Ly], {x, jx, jx + 1}, {y, jy, jy + 1}]

cfF = With[{lx = w, ly = h, id = ImageData[goat]},    Compile[{kx, ky},      Module[{sum = 0. },      Do[sum =         sum + (-1) id[[jy, jx]]  1/(          kx ky \[Pi]^2) (Cos[(jx kx \[Pi])/lx] -             Cos[((1 + jx) kx \[Pi])/lx]) (Cos[(jy ky \[Pi])/ly] -             Cos[((1 + jy) ky \[Pi])/ly]),       {jy, ly - 1}, {jx, lx - 1}];      sum]] ];

With[{lx = w, ly = h},  Table[Plot3D[    Evaluate[     cfF[kx, ky] 1/((\[Pi]^2 kx^2)/lx^2 + (\[Pi]^2 ky^2)/       ly^2) (Sin[(\[Pi] x kx)/lx] Sin[(\[Pi] y ky)/ly])], {x, 1,      lx}, {y, 1, ly},    Ticks -> {None, None, Automatic},     PlotLabel -> "{kx,ky}" == {kx, ky}, MeshFunctions -> {#3 &}],   {kx, 3}, {ky, 3}]]


Taking 25² = 625 terms into account, we have the following approximations for the Poisson equation solution and its Laplacian. The overall shape is the same as the previous numerical solution of the Poisson equation.

goatPoissonApprox[x_, y_] = With[{lx = w, ly = h},    Monitor[     Sum[cfF[kx, ky] 1/((\[Pi]^2 kx^2)/lx^2 + (\[Pi]^2 ky^2)/        ly^2) (Sin[(\[Pi] x kx)/lx] Sin[(\[Pi] y ky)/ly]), {kx,        25}, {ky, 25}], {kx, ky}]];

With[{lx = w, ly = h},   Plot3D[Evaluate[goatPoissonApprox[x, y]], {x, 1, lx}, {y, 1, ly},    MeshFunctions -> {#3 &}]]


For this small number of Fourier modes, the outline of the goat is recognizable, but its details aren’t.

With[{cfL =     Compile[{x, y},      Evaluate[Laplacian[goatPoissonApprox[x, y], {x, y}]]]},  ReliefPlot[Table[cfL[x, y], {y, h, 1, -1}, {x, 1, w}],    ColorFunction -> GrayLevel, Frame -> False]]

Applying the Laplace operator to the PDE solutions recovers (by construction) a version of the goat. Due to finite element discretization and numerical differentiation, the resulting goat is not quite the original one.

ReliefPlot[Table[Evaluate[Laplacian[ndsGoat[x, y], {x, y}]], {y, h, 1, -1}, {x, w}],   ColorFunction -> GrayLevel, Frame -> False]

A faster and less discretization-dependent way to solve the Poisson equation uses the fast Fourier transform (FFT).

FDSTGoat =    FourierDST[    Table[(* \[CapitalDelta]^-1 *) -1./(4 - 2 Cos[x  Pi/h] -          2 Cos[y Pi/w]), {y, h}, {x, w}] FourierDST[ImageData[goat],       1], 1];

ListPlot3D[FDSTGoat, MeshFunctions -> {#3 &}]

This solution recovers the goat more faithfully. Here is the recovered goat after interpolating the function values.

if\[CapitalDelta]Goat =   Interpolation[   Flatten[MapIndexed[{Reverse@#2, #1} &, FDSTGoat, {2}], 1],    InterpolationOrder -> 3]

ReliefPlot[  Table[Evaluate[Laplacian[if\[CapitalDelta]Goat[x, y], {x, y}]], {y,     h, 1, -1}, {x, w}],                         ColorFunction -> GrayLevel, Frame -> False]

Taking into account that any physical realization of a magic window made from glass will unavoidably have imperfections, a natural question to ask is: What happens if one adds small perturbations to the solution of the Poisson equation?

The next input modifies each grid point randomly by a perturbation of relative size 10^-p. We see that for this goat, the relative precision of the surface has to be on the order of 10^-6 or better—a challenging but realizable mechanical accuracy.

Table[if\[CapitalDelta]GoatRandomized = Interpolation[Flatten[     MapIndexed[{Reverse@#2, (1 + 10^-p RandomReal[{-1, 1}]) #1} &,       FDSTGoat, {2}], 1], InterpolationOrder -> 3];  ReliefPlot[   Table[Evaluate[     Laplacian[if\[CapitalDelta]GoatRandomized[x, y], {x, y}]], {y, h,      1, -1}, {x, w}],                          ColorFunction -> GrayLevel, Frame -> False,    PlotLabel -> HoldForm[10^-#] &[p]],  {p, 3, 6}]

To see how the goat emerges after differentiation (Δ = ∂²/∂x² + ∂²/∂y²), here are the individual partial derivatives.

Function[\[Delta],    ReliefPlot[    Table[Evaluate[\[Delta] @ if\[CapitalDelta]Goat[x, y]], {y, h,       1, -1}, {x, w}],                             ColorFunction -> GrayLevel,     Frame -> False, PlotLabel -> \[Delta]]] /@                        {Function[f, D[f, x]], Function[f, D[f, y]],                         Function[f, D[f, {x, 2}]],    Function[f, D[f, {y, 2}]]}

And because we have ∂²/∂x² + ∂²/∂y² = (∂/∂x + ⅈ ∂/∂y)(∂/∂x − ⅈ ∂/∂y), we also look at the Wirtinger derivatives.

Function[\[Delta],    ReliefPlot[    Table[Evaluate[\[Delta] @ if\[CapitalDelta]Goat[x, y]], {y, h,       1, -1}, {x, w}],                             ColorFunction -> GrayLevel,     Frame -> False, PlotLabel -> \[Delta]]] /@                        {Function[f, Arg[D[f, x] + I D[f, y]]],    Function[f, Abs[D[f, x] + I D[f, y]]],                        Function[f, Arg[D[f, x] - I D[f, y]]],    Function[f, Abs[D[f, x] - I D[f, y]]]}


We could also just use a simple finite difference formula to get the goat back. This avoids any interpolation artifacts and absolutely faithfully reproduces the original goat.

{ImageConvolve[   Image[FDSTGoat], -{{0, -1, 0}, {-1, 4, -1}, {0, -1, 0}},    Padding -> None],   LaplacianFilter[Image[FDSTGoat], 1],   LaplacianFilter[Image[FDSTGoat], 2]}

The differentiation can even be carried out as an image processing operation.

ImageConvolve[Image[FDSTGoat], -{{0, -1, 0}, {-1, 4, -1}, {0, -1, 0}},    Padding -> None] // ImageAdjust

So far, nothing really interesting. We integrated and differentiated a function. Let’s switch gears and consider the refraction of a set of parallel light rays on a glass sheet.

We consider the lower part of the glass sheet planar and the upper part slightly wavy, with an explicit description height = f(x,y). The index of refraction is n, and we follow the light rays (coming from below) up to the imaging plane at height = Z. Here is a small Manipulate that visualizes this situation for the example surface f(x,y) = 1 + ε (cos(α x) + cos(β y)).

We want the upper surface of the glass to be nearly planar, so we include the small factor ε in the previous equation.

g[x_, y_] := Cos[\[Alpha] x] + Cos[\[Beta] y]; f[x_, y_] := 1 +(* small height variations *) \[CurlyEpsilon]  g[x, y];  normalize = #/Sqrt[#.#] &; lightRay[{\[Alpha]_, \[Beta]_, \[CurlyEpsilon]_}, {x_, y_}, n_, Z_] =   Module[{dir0 = normalize[{0, 0, 1}], normal, \[Phi], \[Theta], P0,      direction2, dir, \[Sigma]},    (* surface normal *)    normal = normalize[Grad[z - f[x, y], {x, y, z}]];    (* refract the light ray using Snell's law *)    \[Phi] = ArcCos[normal.dir0];    \[Theta] = ArcSin[n Sin[\[Phi]]];    (* surface point of refraction *)    P0 = {x, y, f[x, y]};     (* refracted ray *)    direction2 = normalize[dir0 - normal.dir0 normal];     dir = Cos[\[Theta]] normal + Sin[\[Theta]] direction2 ;    (* ray up to height Z *)    \[Sigma] = (Z - P0[[3]])/dir[[3]];    (* return pair: {surface point, image plane point} *)    {P0, P0 + \[Sigma] dir}    ];

Manipulate[  Module[{surface,  p0, p1, rays, \[CapitalDelta] = 2 Pi/pp},   surface =     Plot3D[Evaluate[      1 + \[CurlyEpsilon] (Cos[\[Alpha] x] + Cos[\[Beta] y])], {x, -Pi,       Pi}, {y, -Pi, Pi},     Filling -> -2, FillingStyle -> Directive[White, Opacity[0.4]],      MeshFunctions -> {#3 &},     MeshStyle -> GrayLevel[0.5], Lighting -> "Neutral",      ImageSize -> 400,     BoundaryStyle -> Gray,      PlotStyle -> Directive[White, Opacity[0.4]]];     rays = Table[{p0, p1} =        N@lightRay[{\[Alpha], \[Beta], \[CurlyEpsilon]}, {\[Xi], \ \[Eta]}, n, Z];                           (* ignore rays with total reflection *)                          If[MatchQ[p1, {_Real, _Real, _Real}],                                  {Tube[         Line[{MapAt[-4 &, p0, 3], p0, p1}], 0.02],         Sphere[p1, 0.04]}, {}],          {\[Eta], -Pi + \[CapitalDelta]/2,        Pi - \[CapitalDelta]/         2, \[CapitalDelta]},   {\[Xi], -Pi + \[CapitalDelta]/2,        Pi - \[CapitalDelta]/2, \[CapitalDelta]}] // Quiet;   Show[{surface, Graphics3D[{Yellow, rays}]},            PlotRange -> All, BoxRatios -> Automatic,     Background -> Black]],   {{pp, 18, "rays"}, 1, 32, 1, Appearance -> "Labeled"}, Delimiter,  {{n, 3}, 1, 5, Appearance -> "Labeled"}, Delimiter,  {{Z, 5}, 1, 20, Appearance -> "Labeled"}, Delimiter,  {{\[CurlyEpsilon], 0.08}, -1, 1, Appearance -> "Labeled"}, Delimiter,  {{\[Alpha], 1}, 0, 5, Appearance -> "Labeled"},  {{\[Beta], 1}, 0, 5, Appearance -> "Labeled"},  TrackedSymbols :> True]


The reason we want the upper surface mostly planar is that we want to avoid rays that “cross” near the surface and form caustics. We want to be in a situation where the density of the rays is position dependent, but the rays do not yet cross. This restricts the values of n, Z and the height of the surface modulation.
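One can estimate where the caustic-free regime ends; the following is a hedged back-of-the-envelope sketch, assuming the first-order ray map {X, Y} ≈ {x, y} + (n − 1) Z ε ∇g(x, y), which is consistent with the series expansion derived later in this post.

(* caustics first form where the Jacobian determinant of the ray map
   vanishes; to first order in \[CurlyEpsilon] that determinant is
   1 + (n - 1) Z \[CurlyEpsilon] Laplacian[g] *)
g[x_, y_] := Cos[\[Alpha] x] + Cos[\[Beta] y];
jacobianDet = 1 + (n - 1) Z \[CurlyEpsilon] Laplacian[g[x, y], {x, y}];
(* the Laplacian is most negative, -(\[Alpha]^2 + \[Beta]^2), at x = y = 0 *)
zCaustic = Z /. First[Solve[(jacobianDet /. {x -> 0, y -> 0}) == 0, Z]]

For distances beyond zCaustic = 1/((n − 1) ε (α² + β²)), neighboring rays start to cross and caustics appear; increasing n, ε or the surface curvature shrinks the caustic-free range accordingly.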

Now let’s do the refraction experiment with the previous solution of the Poisson equation as the height of the upper glass surface. To make the surface variations small, we multiply that solution by 0.0001.

WolframAlpha["refractive index of glass", {{"Result", 1},    "ComputableData"},   PodStates -> {"Result__Show details", "Result__Hide details"}]

We use the median refractive index of glass, n = 1.53.

ifGoatSmall[x_, y_] = 0.0001  if\[CapitalDelta]Goat[x, y];

gradGoatSmall[x_, y_] = Grad[z - ifGoatSmall[x, y], {x, y, z}];

Instead of using lightRay, we will use a compiled version for faster numerical evaluation.

refractCompiled[{x_, y_}, n_, Z_] :=    cf[x, y, Z, n, gradGoatSmall[x, y], ifGoatSmall[x, y]];

cf = Compile[{x, y, Z, n, {g2, _Real, 1}, s2},    Module[{dir0 = Normalize[{0, 0, 1}], normal, \[Phi], \[Theta], P0,       direction2 = {1., 1, 1}, dir, \[Sigma]},     normal = Normalize[g2];     \[Phi] = ArcCos[normal.dir0];     \[Theta] = ArcSin[n Sin[\[Phi]]];     P0 = {x, y, s2};     direction2 = Normalize[dir0 - normal.dir0 normal];      dir = Cos[\[Theta]] normal + Sin[\[Theta]] direction2 ;     \[Sigma] = (Z - P0[[3]])/dir[[3]];     {P0, P0 + \[Sigma] dir}]];

In absolute units, say the variations in glass height are at most 1 mm, and we look at the refracted rays a few meters behind the glass window. We will use about 3.2 million light rays (4² = 16 per pixel).

With[{\[CapitalDelta] = 0.25, Z = 5000},   Monitor[   data = Line[      Flatten[Table[        refractCompiled[{x, y}, 1.53, Z], {y, 1,          h, \[CapitalDelta]}, {x, 1, w, \[CapitalDelta]}], 1]];,   {y, x}]]

points = Most[#] & /@ data[[1, All, 2]]; Length[points]

Displaying all endpoints of the rays gives a rather strong Moiré effect. But the goat is visible—a true refraction goat!

Graphics[{PointSize[0.001], Opacity[0.02],    Point[Developer`ToPackedArray[{1, -1} # & /@ points]]}]


If we accumulate the number of points that arrive in a small neighborhood of the given points {X,Y} in the plane height = Z, the goat becomes much more visible. (This is what we would observe as the brightness of the light passing through the glass sheet, assuming the intensities are additive.) To do the accumulation efficiently, we use the function Nearest.

nf = Nearest[points]; Monitor[tNF =     Table[Length@nf[{x, y}, {Infinity, 2}], {x, -20, w + 20}, {y, -20,       h + 20}];, {x, y}] ReliefPlot[Reverse@Transpose[tNF],   ColorFunction -> (GrayLevel[1 - #^2] &), Frame -> False]


Note that looking into the light that comes through the window would not show the goat because the light that would fall into the eye would mostly come from a small spatial region due to the mostly parallel light rays.

The appearance of the Laplacian of the glass sheet surface is not restricted to parallel light. In the following, we use a point light source instead of parallel light. This means that the effect would also be visible with artificial light sources, rather than only with sunlight shining through a magic window.

cfP = With[{w = w, h = h},   Compile[{x, y, Z, n, {g2, _Real, 1}, s2},    Module[{dir0 = Normalize[{x, y, s2} - {w/2, h/2, 5000}],       normal, \[Phi], \[Theta], P0, direction2 = {1., 1, 1},       dir, \[Sigma]},     normal = Normalize[g2];     \[Phi] = ArcCos[normal.dir0];     \[Theta] = ArcSin[n Sin[\[Phi]]];     P0 = {x, y, s2};     direction2 = Normalize[dir0 - normal.dir0 normal];      dir = Cos[\[Theta]] normal + Sin[\[Theta]] direction2 ;     \[Sigma] = (Z - P0[[3]])/dir[[3]];     {P0, P0 + \[Sigma] dir}]]]

refractCompiledP[{x_, y_}, n_, Z_] :=   cfP[x, y, Z, n, gradGoatSmall[x, y], ifGoatSmall[x, y]]

With[{Z = 5000, \[CapitalDelta] = .25},  Monitor[   dataP =      Line[Flatten[       Table[refractCompiledP[{x, y}, 1.5, Z], {x, 1,          w, \[CapitalDelta]}, {y, 1, h, \[CapitalDelta]}], 1]];,   {x, y}] ]

ptsP = Reverse[Most[#]] & /@ dataP[[1, All, 2]];  nfP = Nearest[ptsP]; With[{padding = 400},  Monitor[tNFP = Table[Length@nfP[{x, y}, {Infinity, 2}],                   {x, -padding, w + padding},   {y, -padding,        h + padding}];, {x, y}]]

ReliefPlot[Reverse@tNFP, ColorFunction -> (GrayLevel[1 - #^2] &),     Frame -> False] //                                                                      Rasterize[#, "Image", ImageSize -> 400] & // ImageCrop


So why is the goat visible in the density of rays after refraction? At first, it seems quite surprising, whether a parallel beam or a point source shines on the window.

On second thought, one remembers Maxwell’s geometric meaning of the Laplace operator:

(Δ f)(r⃗₀) = lim_{ρ→0} (2d/ρ²) (⟨f⟩_{S(r⃗₀, ρ)} − f(r⃗₀))

… where ⟨f⟩_{S(r⃗₀, ρ)} denotes the average of f over the sphere S(r⃗₀, ρ) of radius ρ centered at r⃗₀, and d is the space dimension. Here is a quick check of the last identity for two and three dimensions.

 Limit[2*2/\[Rho]^2 (Normal[       Series[Integrate[\[ScriptF][x, y], {x, y} \[Element]           Sphere[{x0, y0}, \[Rho]],          Assumptions -> \[Rho] > 0 \[And] (x0 | y0) \[Element]             Reals], {x, x0, 3}, {y, y0, 3}]]/      ArcLength[Sphere[{x0, y0}, \[Rho]]] - \[ScriptF][x0,       y0]), \[Rho] -> 0]

 Limit[2*3/\[Rho]^2 (Normal[       Series[Integrate[\[ScriptF][x, y, z], {x, y, z} \[Element]           Sphere[{x0, y0, z0}, \[Rho]],          Assumptions -> \[Rho] > 0 \[And] (x0 | y0 | z0) \[Element]             Reals], {x, x0, 3}, {y, y0, 3}, {z, z0, 3}]]/      Area[Sphere[{x0, y0, z0}, \[Rho]]] - \[ScriptF][x0, y0,       z0]), \[Rho] -> 0]

At a given point in the imaging plane, we add up the light rays from different points of the glass surface. This means we carry out some kind of averaging operation.

So let’s go back to the general refraction formula and have a closer look. Again we assume that the upper surface is mostly flat and that the parameter ε is small. The position {X,Y} of the light ray in the imaging plane can be calculated in closed form as a function of the surface g(x,y), the starting coordinates of the light ray {x,y}, the index of refraction n and the distance of the imaging plane Z.

Clear[f, g]; f[x_, y_] := f0 + \[CurlyEpsilon] g[x, y]; normalize = #/Sqrt[#.#] &; \[ScriptCapitalR] = Module[{dir0 = normalize[{0, 0, 1}]},    normal = normalize[Grad[z - f[x, y], {x, y, z}]];    \[Phi] = ArcCos[normal.dir0];    \[Theta] = ArcSin[n Sin[\[Phi]]];    P0 = {x, y, f[x, y]};     direction2 = normalize[dir0 - normal.dir0 normal];     dir = Cos[\[Theta]] normal + Sin[\[Theta]] direction2 ;    \[Sigma] = (Z - P0[[3]])/dir[[3]];    P0 + \[Sigma] dir ] // Simplify

That is a relatively complicated-looking formula. For a nearly planar upper glass surface (small ε), we get the following approximation for the coordinates {X,Y} in the imaging plane where we observe the light rays, in terms of the coordinates {x,y} on the glass surface.

Series[Most[\[ScriptCapitalR]], {\[CurlyEpsilon], 0, 1}]

This means that in zeroth order we have {X,Y} ≈ {x,y}, and the deviation of the light ray position in the imaging plane is proportional to (n−1)Z. (Higher-order corrections to {X,Y} ≈ {x,y} could be obtained from Newton iterations, but we do not need them here.)

The density of rays is the inverse of the Jacobian determinant of the map from {x,y} to {X,Y}. (Think of the change-of-variables formula for one-to-one transforms in multivariate integration.)

1/Det[Grad[Most[\[ScriptCapitalR]], {x, y}]] // Short[#, 6] &

LeafCount[1/Det[Grad[Most[\[ScriptCapitalR]], {x, y}]]]

Quantifying the size of the resulting expression confirms that it is indeed quite complex. For a function quadratic in x and y, we can get some feeling for the density as a function of the physical parameters ε, n and Z, as well as of the parameters that describe the surface, by varying them in an interactive demonstration. For large values of n, Z and ε, we see how caustics arise.

refractionDensity[{x_, y_}, {n_, Z_, \[CurlyEpsilon]_}, {c00_, c10_,      c01_, c11_, c20_, c02_}] =    1/Det[Grad[Most[\[ScriptCapitalR]], {x, y}]] /.      g -> Function[{x, y},        c00 + c10 x + c01 y + c11 x y + c20 x^2 + c02 y^2] /. f0 -> 1;

Manipulate[  {Plot3D[Evaluate[     c00 + c10 x + c01 y + c11 x y + c20 x^2 + c02 y^2], {x, -1,      1}, {y, -1, 1},    MeshFunctions -> {#3 &}],   Plot3D[Evaluate[     refractionDensity[{x, y}, {n, Z, \[CurlyEpsilon]}, {c00, c10, c01,        c11, c20, c02}]], {x, -1, 1}, {y, -1, 1},    MeshFunctions -> {#3 &}]},   {{n, 3}, 1, 5, Appearance -> "Labeled"},   {{Z, 5}, 1, 20, Appearance -> "Labeled"},   {{\[CurlyEpsilon], 0.08}, -1, 1, Appearance -> "Labeled"}, Delimiter,  {{c00, 0}, -2, 2, Appearance -> "Labeled"},  {{c10, 0}, -2, 2, Appearance -> "Labeled"},  {{c01, 0}, -2, 2, Appearance -> "Labeled"},  {{c11, 0.7}, -2, 2, Appearance -> "Labeled"},  {{c20, 1.2}, -2, 2, Appearance -> "Labeled"},  {{c02, 1}, -2, 2, Appearance -> "Labeled"},  ControlPlacement -> Top, TrackedSymbols :> True]

For nearly planar surfaces (first order in ε), the density is equal to the Laplacian of the surface heights (in x,y coordinates). This is the main “trick” in the construction of magic windows.

intensity =   Series[1/Det[      Grad[Most[\[ScriptCapitalR]], {x, y}]], {\[CurlyEpsilon], 0,      1}] // Simplify

This explains why the goat appears as the intensity pattern of the light rays after refraction: nearly planar glass sheets effectively act as Laplace operators.

Using Newton’s root-finding method, we could calculate the intensity in X,Y coordinates, but the expression explains heuristically why refraction on a nearly planar surface behaves like an optical Laplace operator. For more details, see this article.
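Purely as an illustration of such Newton iterations, here is a hedged sketch that numerically inverts the first-order ray map {X, Y} ≈ {x, y} + (n − 1) Z ε ∇g for a given surface function g (all names here are illustrative assumptions, not the post’s code).

(* find the surface point {x, y} whose refracted ray lands at a given
   image-plane point {X, Y}, to first order in \[CurlyEpsilon] *)
invertRayMap[g_, {X_, Y_}, {n_, Z_, \[CurlyEpsilon]_}, iterations_: 5] :=
 Module[{F, J},
  F[{u_, v_}] = {u, v} + (n - 1) Z \[CurlyEpsilon] Grad[g[u, v], {u, v}];
  J[{u_, v_}] = Grad[F[{u, v}], {u, v}];
  (* Newton iteration for F[{x, y}] == {X, Y}, starting at {X, Y} *)
  Nest[# - Inverse[J[#]].(F[#] - {X, Y}) &, {X, Y}, iterations]]

(* example: surface point mapping to the image-plane point {0.3, 0.4} *)
invertRayMap[Function[{x, y}, Cos[x] + Cos[y]], {0.3, 0.4}, {1.53, 5., 0.001}]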

Now we could model a better picture of the light ray density by pre-generating a matrix of counts in the imaging plane using, say, 10 million rays, and recording where each ray falls within the imaging plane. This time we model the solution of the Poisson equation using ListDeconvolve.

ifGoat2 = Interpolation[   Flatten[    MapIndexed[{Reverse@#2, #1} &,      ListDeconvolve[-{{0, -1, 0}, {-1, 4, -1}, {0, -1, 0}},       ImageData[goat]], {2}], 1], InterpolationOrder -> 5]

The approximate solution of the Poisson equation is not quite as smooth as the global solutions, but the goat is nevertheless invisible.

Plot3D[Evaluate[ifGoat2[x, y]], {x, 1, w}, {y, 1, h},   MeshFunctions -> {#3 &}, PlotPoints -> 80]

ReliefPlot[  Table[Evaluate[Laplacian[ifGoat2[x, y], {x, y}]], {y, h, 1, -1}, {x,     w}], ColorFunction -> GrayLevel, Frame -> False]

ifGoatSmall2[x_, y_] = 0.002  ifGoat2[x, y];

gradGoatSmall2[x_, y_] = Grad[z - ifGoatSmall2[x, y], {x, y, z}];

refractCompiled2[{x_, y_}, n_, Z_] :=    cf[x, y, Z, n, gradGoatSmall2[x, y], ifGoatSmall2[x, y]];

(* this will take a few minutes *) With[{\[Mu] = 4, \[CapitalDelta] = 0.1, Z = 1000, \[Delta] = 25},  Monitor[   mat = Table[0, {\[Mu] (h + 2 \[Delta])}, {\[Mu] (w + 2 \[Delta])}];    Do[\[Upsilon] = \[Mu] Round[(refractCompiled2[{x, y}, 1.53, Z][[          2]] + \[Delta]), 1./\[Mu]];    If[1 <= \[Upsilon][[2]] <= \[Mu] (h + 2 \[Delta]) &&       1 <= \[Upsilon][[1]] <= \[Mu] (w + 2 \[Delta]),      mat[[\[Upsilon][[2]], \[Upsilon][[1]]]] =       mat[[\[Upsilon][[2]], \[Upsilon][[1]]]] + 1],     {x, 1, w, \[CapitalDelta]}, {y, 1, h, \[CapitalDelta]}];, {x, y}]]

We adjust the brightness/darkness through a power law (a crude approximation of Weber–Fechner perception).

ImageResize[Blur[Image[1 - (mat/Max[mat])^0.3], 6], 600] // ImageCrop

If the imaging plane is too far away, we do get caustics (that remind me of the famous cave paintings from Lascaux).

With[{\[Mu] = 4, \[CapitalDelta] = 0.25, Z = 4000, \[Delta] = 25},  Monitor[   matC = Table[0, {\[Mu] (h + 2 \[Delta])}, {\[Mu] (w + 2 \[Delta])}];    Do[\[Upsilon] = \[Mu] Round[(refractCompiled2[{x, y}, 1.53, Z][[          2]] + \[Delta]), 1./\[Mu]];    If[1 <= \[Upsilon][[2]] <= \[Mu] (h + 2 \[Delta]) &&       1 <= \[Upsilon][[1]] <= \[Mu] (w + 2 \[Delta]),                matC[[\[Upsilon][[2]], \[Upsilon][[1]]]] =       matC[[\[Upsilon][[2]], \[Upsilon][[1]]]] + 1],     {x, 1, w, \[CapitalDelta]}, {y, 1, h, \[CapitalDelta]}];, {x, y}]]

ImageResize[Blur[Image[1 - (matC/Max[matC])^.2], 5], 600] // ImageCrop

If the image plane is even further away, the goat slowly becomes unrecognizable.

With[{\[Mu] = 4, \[CapitalDelta] = 0.25, Z = 8000, \[Delta] = 60},  Monitor[   matC2 =     Table[0, {\[Mu] (h + 2 \[Delta])}, {\[Mu] (w + 2 \[Delta])}];    Do[\[Upsilon] = \[Mu] Round[(refractCompiled2[{x, y}, 1.53, Z][[          2]] + \[Delta]), 1./\[Mu]];    If[1 <= \[Upsilon][[2]] <= \[Mu] (h + 2 \[Delta]) &&       1 <= \[Upsilon][[1]] <= \[Mu] (w + 2 \[Delta]),                 matC2[[\[Upsilon][[2]], \[Upsilon][[1]]]] =       matC2[[\[Upsilon][[2]], \[Upsilon][[1]]]] + 1],     {x, 1, w, \[CapitalDelta]}, {y, 1, h, \[CapitalDelta]}];, {x, y}]]

ImageResize[Blur[Image[1 - (matC2/Max[matC2])^.1], 5],    600] // ImageCrop

Although not practically realizable, we also show what the goat would look like for negative Z; now it seems much more sheep-like.

With[{\[Mu] = 4, \[CapitalDelta] = 0.25, Z = -3000, \[Delta] = 60},  Monitor[   matC3 =     Table[0, {\[Mu] (h + 2 \[Delta])}, {\[Mu] (w + 2 \[Delta])}];    Do[\[Upsilon] = \[Mu] Round[(refractCompiled2[{x, y}, 1.53, Z][[          2]] + \[Delta]), 1./\[Mu]];    If[1 <= \[Upsilon][[2]] <= \[Mu] (h + 2 \[Delta]) &&       1 <= \[Upsilon][[1]] <= \[Mu] (w + 2 \[Delta]),                 matC3[[\[Upsilon][[2]], \[Upsilon][[1]]]] =       matC3[[\[Upsilon][[2]], \[Upsilon][[1]]]] + 1],     {x, 1, w, \[CapitalDelta]}, {y, 1, h, \[CapitalDelta]}];, {x, y}]]

ImageResize[Blur[Image[1 - (matC3/Max[matC3])^.2], 5],    600] // ImageCrop

Here is a small animation showing the shape of the goat as a function of the distance Z of the imaging plane from the upper surface.
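As a hedged sketch of how such animation frames could be generated with the machinery from above (the resolution, the Z range and the frame spacing are illustrative choices, not the post’s original animation code):

(* bin the refracted rays for a sequence of imaging-plane distances Z and
   animate the resulting intensity images *)
goatFrame[Z_] :=
 Module[{\[Mu] = 2, \[CapitalDelta] = 0.5, \[Delta] = 60, mat, p},
  mat = ConstantArray[0, {\[Mu] (h + 2 \[Delta]), \[Mu] (w + 2 \[Delta])}];
  Do[p = Round[\[Mu] (Most[refractCompiled2[{x, y}, 1.53, Z][[2]]] + \[Delta])];
   If[1 <= p[[2]] <= \[Mu] (h + 2 \[Delta]) && 1 <= p[[1]] <= \[Mu] (w + 2 \[Delta]),
    mat[[p[[2]], p[[1]]]]++],
   {x, 1, w, \[CapitalDelta]}, {y, 1, h, \[CapitalDelta]}];
  ImageResize[Blur[Image[1 - (mat/Max[mat])^0.3], 3], 400]]

ListAnimate[goatFrame /@ Range[500, 8000, 750]]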

Even if the image is just made from a few lines (rather than each pixel having a non-white or non-black value), the solution of the Poisson equation is a smooth function, and the right-hand side is not recognizable in a plot of the solution.

imHomer =   ColorConvert[   Rasterize[    Show[Entity["PopularCurve", "HomerSimpsonCurve"]["Plot"],       Axes -> False] /.                                                                       \                 _RGBColor :> Black, "Image", ImageSize -> 300],    "Grayscale"]

{wHomer, hHomer} = ImageDimensions[imHomer];

ifHomer =    Interpolation[    Flatten[MapIndexed[{#2, #1} &, ImageData[imHomer], {2}], 1],     InterpolationOrder -> 4];

im\[CapitalDelta]Homer =    FourierDST[    Table[-1./(4 - 2 Cos[x  Pi/hHomer] - 2 Cos[y Pi/wHomer]), {y,        hHomer}, {x, wHomer}] *                                                  FourierDST[ImageData[imHomer], 1], 1];

if\[CapitalDelta]Homer =    Interpolation[    Flatten[MapIndexed[{Reverse@#2, #1} &,       im\[CapitalDelta]Homer, {2}], 1], InterpolationOrder -> 2];

Plot3D[Evaluate[if\[CapitalDelta]Homer[x, y]], {x, 1, wHomer}, {y, 1,    hHomer}, MeshFunctions -> {#3 &}, PlotPoints -> 80]

But after refraction on a glass sheet (or applying the Laplacian), we see Homer quite clearly.

 Image[ListConvolve[-{{0, -1, 0}, {-1, 4, -1}, {0, -1, 0}},     im\[CapitalDelta]Homer]] // ImageAdjust

Despite the very localized, curve-like structures that make up the Homer image, the resulting Poisson equation solution again looks quite smooth. Here is the solution textured with its second derivative (the purple line will be used in the next input).

Plot3D[Evaluate[if\[CapitalDelta]Homer[x, y]], {x, 1, wHomer}, {y, 1,    hHomer}, PlotPoints -> 80,  BoxRatios -> {wHomer, hHomer, wHomer/2},   PlotStyle -> Texture[imHomer], Mesh -> {{200}},  MeshFunctions -> {#2 &}, MeshStyle -> Purple,   ViewPoint -> {0.3, -1.9, 2.9}]

The next graphic shows a cross-section of the Poisson equation solution together with its (scaled) first and second derivatives with respect to x along the purple line of the last graphic. The lines show up quite pronouncedly in the second derivative.

Plot[Evaluate[{if\[CapitalDelta]Homer[x, 200]/10000,     D[if\[CapitalDelta]Homer[x, 200], x]/100,     D[if\[CapitalDelta]Homer[x, 200], x, x]}], {x, 1, wHomer},  PlotLegends -> {HoldForm[if\[CapitalDelta]Homer[x, 200]/10000],     HoldForm[D[if\[CapitalDelta]Homer[x, 200], x]],     HoldForm[D[if\[CapitalDelta]Homer[x, 200], {x, 2}]]}]

Let’s repeat a modification of the previous experiment to see how precise the surface would have to be to show Homer. We add some random waves to the Homer solution.

With[{M = 20, n = 8},   if\[CapitalDelta]Homer2[\[Delta]_][x_, y_] =     if\[CapitalDelta]Homer[x, y]*     (1 + \[Delta] Sum[         RandomReal[] Cos[           RandomReal[{-M, M}] x + 2 Pi RandomReal[]] Cos[           RandomReal[{-M, M}] y + 2 Pi RandomReal[]],         {n}])];

Again we see that the surface would have to be correct at the 10^-6 level or better.

ArrayPlot[    Table[Evaluate[       Laplacian[if\[CapitalDelta]Homer2[10^-#][x, y], {x, y}]], {y, 1,       hHomer}, {x, 1, wHomer}], Frame -> False,    ColorFunction -> GrayLevel, PlotLabel -> HoldForm[10^-# ]] & /@ {5,    6, 7, 8}

Or one can design a nearly planar window that will project one’s favorite physics equations on the wall when the Sun is shining.

physicsFormulas =    Select[(Last /@ Select[{#, FormulaData[#]} & /@ FormulaData[],         MemberQ[#,           "SpeedOfLight" | "GravitationalConstant" |            "BoltzmannConstant" | "ElectricConstant" |                            "MagneticConstant" | "PlanckConstant" |            "ReducedPlanckConstant" | "ElectronMass" |                             "StefanBoltzmannConstant" |            "ElementaryCharge" | "FaradayConstant" |            "RydbergConstant", {-1}] &]),                                FreeQ[#, _Real, \[Infinity]] &] /.     Quantity[1, s_String] :> HoldForm[Quantity[None, s]];

imPhysics =   ColorConvert[   ImageCollage[    Rasterize[#, "Image", ImageSize -> RandomInteger[{200, 400}]] & /@      TraditionalForm /@                                                                              \             RandomSample[physicsFormulas, 12], Background -> White,     ImageSize -> 800], "Grayscale"]

{wPhysics, hPhysics} = ImageDimensions[imPhysics];

ifPhysics =    Interpolation[    Flatten[MapIndexed[{#2, #1} &,       N[Floor[ImageData[imPhysics]]], {2}], 1],     InterpolationOrder -> 4];

im\[CapitalDelta]Physics =    FourierDST[    Table[-1./(4 - 2 Cos[x  Pi/hPhysics] - 2 Cos[y Pi/wPhysics]), {y,        hPhysics}, {x, wPhysics}]*                                                      FourierDST[ImageData[imPhysics], 1], 1];

if\[CapitalDelta]Physics =    Interpolation[    Flatten[MapIndexed[{Reverse@#2, #1} &,       im\[CapitalDelta]Physics, {2}], 1], InterpolationOrder -> 6];

When looking at the window, one will not notice any formulas. But this time, the solution of the Poisson equation has more overall structures.

Plot3D[Evaluate[if\[CapitalDelta]Physics[x, y]], {x, 1, wPhysics}, {y,    1, hPhysics}, MeshFunctions -> {#3 &}, PlotPoints -> 80]

But the refracted light will make physics equations. The resulting window is perfect for the entrance of, say, physics department buildings.

Image[ListConvolve[-{{0, -1, 0}, {-1, 4, -1}, {0, -1, 0}},     im\[CapitalDelta]Physics]] // ImageAdjust

Now that we’re at the end of this post, let us mention that one can also implement the Laplacian through a mirror, rather than a window. See Michael Berry’s paper from 2006, “Oriental Magic Mirrors and the Laplacian Image” (see this article as well). Modifying the above function from refracting a light ray to reflecting it, and assuming a mostly flat mirror surface, we see the Laplacian of the mirror surface in the reflected light intensity.

Clear[f, g]; f[x_, y_] := f0 + \[CurlyEpsilon] g[x, y]; normalize = #/Sqrt[#.#] &; \[ScriptCapitalR]R =   Module[{dir0 = normalize[{0, 0, 1}], normal, \[Phi], P0, direction2,      dir, \[Sigma]},    normal = normalize[Grad[z - f[x, y], {x, y, z}]];    \[Phi] = ArcCos[normal.dir0];     P0 = {x, y, f[x, y]};     direction2 = normalize[dir0 - normal.dir0 normal];     dir = Cos[\[Phi]] normal - Sin[\[Phi]] direction2 ;    \[Sigma] = (Z - P0[[3]])/dir[[3]];    P0 + \[Sigma] dir ] // Simplify

Series[1/Det[     Grad[Most[\[ScriptCapitalR]R], {x, y}]], {\[CurlyEpsilon], 0,     1}] // Simplify

Making transparent materials and mirrors of arbitrary shape, now called free-form optics, is considered the next generation of modern optics and will have wide applications in science, technology, architecture and art (see here). I think that a few years from now, when the advertising industry recognizes their potential, we will see magic windows with their unexpected images behind them everywhere.


Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.

How Many Animals and Arp-imals Can One Find in a Random 3D Image?

And How Many Animals, Animal Heads, Human Faces, Aliens and Ghosts in Their 2D Projections?

Michael Trott, February 23, 2017

Introduction

In my recent Wolfram Community post, “How many animals can one find in a random image?,” I looked into the pareidolia phenomenon from the viewpoint of pixel clusters in random (2D) black-and-white images. Here are some of the shapes I found, extracted, rotated, smoothed and colored from the connected black pixel clusters of a single 800×800 image of randomly chosen, uncorrelated black-and-white pixels.

arpimals

For an animation of such shapes arising, changing and disappearing in a random gray-level image with slowly time-dependent pixel values, see here. By looking carefully at a selected region of the image, at the slowly changing, appearing and disappearing shapes, one frequently can “see” animals and faces.

The human mind quickly sees faces, animals, animal heads and ghosts in these shapes. Human evolution has optimized our vision system to recognize predators and identify food. Our recognition of an eye (or a pair of eyes) in the above shapes is striking. For the neuropsychological basis of seeing faces in a variety of situations where actual faces are absent, see Martinez-Conde2016.

A natural question: is this feature of our vision specific to 2D silhouette shapes, or does the same thing happen for 3D shapes? So here, I will look at random shapes in 3D images and at the 2D projections of these 3D shapes. Several of the region-related functions added in recent versions of the Wolfram Language make this task possible, straightforward and fun.

I should explain the word Arp-imals from the title. With the term “Arp-imals” I refer to objects in the style of the sculptures by Jean Arp, meaning smooth, round, randomly curved biomorphic forms. Here are some examples.

personOverview[person_] :=   With[{props = {"Entity", EntityProperty["Person", "Image"],       EntityProperty["Person", "BirthDate"],       EntityProperty["Person", "BirthPlace"],       EntityProperty["Person", "DeathDate"]}},   TextGrid[DeleteMissing[Transpose[{props, person[props]}], 1, 2],                      Dividers -> All, Background -> GrayLevel[0.9]]]

artworkOverview[art_] :=   With[{props = {"Entity", EntityProperty["Artwork", "Image"],       EntityProperty["Artwork", "Artist"],       EntityProperty["Artwork", "StartDate"],       EntityProperty["Artwork", "Owner"]}},   TextGrid[    DeleteMissing[     Transpose[{props, Item[#, ItemSize -> 15] & /@ art[props]}], 1, 2],                      Dividers -> All, Background -> GrayLevel[0.9]]]
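As a hedged reconstruction of such a display for Arp himself, one could look up the entity at runtime with Interpreter rather than hard-coding an entity ID (whether the lookup resolves as intended is an assumption):

(* assumption: Jean Arp is findable in the "Person" entity domain *)
personOverview[Interpreter["Person"]["Jean Arp"]]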

Forms such as these frequently hide in 3D images made from random black-and-white voxels. Here is a quick preview of the shapes we will extract from random images.

Quick Preview of Shapes

We will also encounter what I call Moore-iens, in the sense of the sculptures by the slightly later artist Henry Moore.

personOverview[Entity["Person", "HenryMoore::96psy"]]

artworkOverview /@ {Entity["Artwork",     "LargeInteriorForm::HenryMoore"],    Entity["Artwork", "KnifeEdgeTwoPiece::HenryMoore"],    Entity["Artwork", "OvalWithPointsPrinceton::HenryMoore"]}

With some imagination, one can also see forms of possible aliens in some of the following 2D shapes. (See Domagal-Goldman2016 for a discussion of possible features of alien life forms.)

As in the 2D case, we start with a random image: this time, a 3D image of voxels of values 0 and 1. For reproducibility, we will seed the random number generator. The Arp-imals are so common that virtually any seed produces them. And we start with a relatively small image. Larger images will contain many more Arp-imals.

Shapes from Random 3D Images

SeedRandom[1]; randomImage =   Image3D[Table[RandomChoice[{6, 1} -> {0, 1}], {20}, {20}, {20}]]

Hard to believe at first, but the blueprints of the above-shown 3D shapes are in the last 3D cube. In the following, we will extract them and make them more visible.

As in the 2D case, we again use ImageMesh to extract connected regions of white cells. The regions still look like a random set of connected polyhedra. After smoothing the boundaries, nicer shapes will arise.

Show[imesh = ImageMesh[randomImage, Method -> "MarchingSquares"],   ImageSize -> 400]

Here are the regions, separated into non-touching ones, using the function ConnectedMeshComponents. The function makeShapes3D combines the image creation, the finding of connected voxel regions, and the region separation.

makeShapes3D[{dimz_, dimy_, dimx_}, {black_, white_}] :=  Module[{randomImage, imesh},   randomImage =     Image3D[Table[      RandomChoice[{black, white} -> {0, 1}], {dimx}, {dimy}, {dimz}]];    imesh = ImageMesh[randomImage, Method -> "MarchingSquares"];                Select[ConnectedMeshComponents@imesh, 10 < Volume[#] < 200 &]]

For demonstration purposes, in the next example, we use a relatively low density of white voxels to avoid the buildup of a single large connected region that spans the whole cube.

SeedRandom[333]; shapes = makeShapes3D[{20, 20, 20}, {7, 1}]

Here are the found regions individually colored in their original positions in the 3D image.

Show[HighlightMesh[#, Style[2, RandomColor[]]] & /@ shapes,   Boxed -> True]

To smooth the outer boundaries, thereby making the shapes more animal-, Arp-imal- and alien-like, we use the function smooth3D (defined in the accompanying notebook), a quick-and-dirty implementation of the Loop subdivision algorithm. (As the 3D shapes might have a higher genus, we cannot use BSplineSurface directly, which would have been the direct equivalent of the 2D case.) Here are successive smoothings of the third of the above-extracted regions.
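As a rough stand-in for smooth3D, here is a sketch using iterated Laplacian (neighbor-averaging) vertex smoothing instead of Loop subdivision; it also softens edges and corners, though unlike subdivision it does not refine the mesh.

(* average each vertex with its edge-connected neighbors; returns a
   GraphicsComplex, like smooth3D does *)
laplacianSmooth[reg_, steps_: 3] :=
 Module[{vs = MeshCoordinates[reg], edges = MeshCells[reg, 1][[All, 1]],
   faces = MeshCells[reg, 2][[All, 1]], nbrs},
  nbrs = GroupBy[Join[edges, Reverse /@ edges], First -> Last];
  Do[vs = Table[(vs[[i]] + Mean[vs[[nbrs[i]]]])/2, {i, Length[vs]}], {steps}];
  GraphicsComplex[vs, Polygon[faces]]]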

sampleRegion = shapes[[3]]; {sampleRegion,    Graphics3D[{EdgeForm[],     sampleRegionSmooth1 = smooth3D[sampleRegion, 1]},                         ImageSize -> {{320}, {320}}]}  {Graphics3D[{EdgeForm[],     sampleRegionSmooth2 = smooth3D[sampleRegion, 2]},                           ImageSize -> {{320}, {320}}],  Graphics3D[{EdgeForm[],     sampleRegionSmooth3 = smooth3D[sampleRegion, 3]},                         ImageSize -> {{320}, {320}}]}

Using the region plot theme "SmoothShading" of the function BoundaryMeshRegion, we can add normals to get the feeling of a genuinely smooth boundary.

shapeF = With[{sr = sampleRegionSmooth3},   BoundaryMeshRegion[sr[[1]],     Style[sr[[2, 1]] ,      Directive[GrayLevel[0.4],       Specularity[RGBColor[0.71, 0.65, 0.26], 12]]],     PlotTheme -> "SmoothShading"]]

And for less than $320 one can obtain this Arp-inspired piece in brass. A perfect, unique, stunning post-Valentine’s gift. For hundreds of alternative shapes to print, see below. We use ShellRegion to reduce the price and save some internal material by building a hollow region.

thinreg = ShellRegion[shapeF]; Printout3D[thinreg, "IMaterialise",   RegionSize -> Quantity[10, "Centimeters"]]

Here is the smoothing procedure shown for another of the above regions.

sampleRegion2 = shapes[[2]]; (* another of the extracted regions; the index is illustrative *) {sampleRegion2,   Graphics3D[{EdgeForm[],     sampleRegionSmooth21 = smooth3D[sampleRegion2, 1]},                           ImageSize -> {{360}, {360}}],  Graphics3D[{EdgeForm[],     sampleRegionSmooth22 = smooth3D[sampleRegion2, 2]},                          ImageSize -> {{360}, {360}}]}

And for three more.

With[{sf = Directive[#, Specularity[ColorNegate[#], 10]] &},  Row[{Graphics3D[{EdgeForm[], sf[Red], smooth3D[shapes[[4]], 3]},      ImageSize -> {{360}, {360}},                                 ViewPoint -> {0.08, -3.31, 0.67},      ViewVertical -> {0.00, -0.85, 0.90}],    Graphics3D[{EdgeForm[], sf[Blue], smooth3D[shapes[[8]], 3]},      ImageSize -> {{360}, {360}},                    ViewPoint -> {2.99, 0.66, 1.43},      ViewVertical -> {1.07, 0.90, 0.23}],    Graphics3D[{EdgeForm[], sf[Green], smooth3D[shapes[[13]], 3]},      ImageSize -> {{360}, {360}},                    ViewPoint -> {-2.53, 2.18, 0.49},      ViewVertical -> {-0.93, 0.598, 0.76}]}]]

Many 3D shapes can now be extracted from random and nonrandom 3D images. The next input calculates the region corresponding to lattice points with coprime coordinates.

Graphics3D[{EdgeForm[], Directive[Gray, Specularity[Pink, 12]],             smooth3D[ConnectedMeshComponents[ImageMesh[       Image3D[        Table[Boole@CoprimeQ[x, y, z], {x, -6, 6}, {y, -6, 6}, {z, -6,           6}]],       Method -> "MarchingSquares"]][[1]], 2]},  ViewPoint -> {2, -3, 2}, ViewVertical -> {1, 0, 1}, Boxed -> False]

The Importance of Coarse Rasterization and Smoothing

In the above example, we start with a coarse 3D region, which feels polyhedral due to the obvious triangular boundary faces. It is only after the smoothing procedure that we obtain “interesting-looking” 3D shapes. The details of the applied smoothing procedure do not matter, as long as sharp edges and corners are softened.

Human perception is optimized for smooth shapes, and most plants and animals have smooth boundaries. This is why we don’t see anything interesting in the collection of regions returned from ImageMesh applied to a 3D image. This is quite similar to the 2D case. In the following visualization of the 2D case, we start with a set of randomly selected points. Then we connect these points through a curve. Filling the curve yields a deformed checkerboard-like pattern that does not remind us of a living being. Rasterizing the filled curve in a coarse-grained manner still does not remind us of organic shapes. The connected region, and especially the smoothed region, do remind most humans of living beings.

Smoothed Region

The following Manipulate (available in the notebook) allows us to explore the steps and parameters involved in an interactive session.

smooth2D[reg_, col_, d_] :=   Graphics[{col, (ToExpression[ToString[InputForm@reg], StandardForm,         Hold] /.       HoldPattern[BoundaryMeshRegion[v_, b__, ___Rule]] :>         GraphicsComplex[v,         FilledCurve[{b} /. Line[l_] :>                         BSplineCurve[DeleteDuplicates[Flatten[l, 1]],              SplineClosed -> True, SplineDegree -> d]]])[[1]]}]

Manipulate[  Module[{randomFunction, f1, f2, filledPolygon, ras, im, imesh,     shapes, toShow, map},   Block[{$PerformanceGoal = "Quality"},    randomFunction[m_] :=      Interpolation[      MapIndexed[{(#2[[1]] - 1)/(m + 1), #} &,        Join[#, Take[#, 2]] &@ RandomReal[{0, 1}, {m, 2}]],       InterpolationOrder -> 3];    SeedRandom[seed]; f1 = randomFunction[deg];       f2 = randomFunction[deg];    pp = ParametricPlot[Evaluate[(1 - s) f1[t] + s f2[t]], {t, 0, 1},       PlotStyle -> Directive[Opacity[1], Black], Axes -> False,       PlotRange -> {{-0, 1}, {-0, 1}}] ;    filledPolygon = pp /. Line :> Polygon;    ras = Rasterize[filledPolygon, RasterSize -> {rs, rs},       ImageSize -> {rs, rs}];      im = Image[ras];                   imesh = ImageMesh[ColorNegate[im], Method -> m];     II = imesh;     shapes =      Reverse[SortBy[ConnectedMeshComponents@imesh,        Length[MeshCells[#, 1]] &]];    map[{x_, y_}] := rs {x, y} + {1/2, -1/2};    toShow = {If[sI, Graphics[ras], {}],      If[sP,        Graphics[{Opacity[0.8],          filledPolygon[[1]] /.           Polygon[l_] :> Polygon[Map[map, l, {-2}]]}], {}],      If[sO, Graphics[{Opacity[0.8], Blue, Show[imesh][[1]]}], {}],      If[sR,        Table[smooth2D[shapes[[k]], Directive[Opacity[0.7], rC],          d], {k, Length[shapes]}], {}],      If[sC,        pp /. Line[l_] :> {ColorNegate[rC],           Line[Map[map, l, {-2}]]}, {}],      If[sIP,        Graphics[{ Gray, PointSize[Medium],          Point[map /@ ((1 - s) f1[[4, All, 1]] +              s  f2[[4, All, 1]])]}], {}]};    If[toShow === {{}, {}, {}, {}, {}}, Text["nothing to show" ] ,         Graphics[ Rotate[First /@ Flatten[toShow], \[CurlyPhi]],      PlotRangePadding -> 0, ImagePadding -> 0,       PlotRange -> {{-0.05 rs, 1.05 rs}, {-0.05 rs, 1.05 rs}},       ImageSize -> 400]]]],   {{seed, 595}, 1, 10000, 1},  {{deg, 24, "curve degree"}, 2, 36, 1},  {{s, 0.961, "transition"}, 0, 1},  Delimiter,  {{rs, 24, "raster size"}, 10, 60, 1},  Row[{"show: ",     Control[{{sR, True,        "smoothed region" <> FromCharacterCode[62340]}, {True,        False}}], "|  ",                       Control[{{sO, False, "region" <> FromCharacterCode[62340]}, {True,        False}}], "|  ",                      Control[{{sI, False, "raster" <> FromCharacterCode[62340]}, {True,        False}}], "|\n          ",                         Control[{{sP, False,        "polygon" <> FromCharacterCode[62340]}, {True, False}}],     "|  ",                        Control[{{sC, False, "curve" <> FromCharacterCode[62340]}, {True,        False}}], "|  ",                        Control[{{sIP, False,        "points" <> FromCharacterCode[62340]}, {True, False}}]}],  Delimiter,  {{d, 3, "smoothness"}, 0, 8, 1, SetterBar},   {{m, "DualMarchingSquares", "method"}, {"MarchingSquares",     "DualMarchingSquares", "Exact"}},   {{rC, Darker[Green, 0.6], "region color"}, Red, ImageSize -> Small},  Delimiter,  {{\[CurlyPhi], -2.06, "rotation"}, -Pi, Pi},  Delimiter,  Button["random shape", seed = RandomInteger[{1, 1000}];                                                         deg = RandomInteger[{2, 36}];                                                        s = RandomReal[{0, 1}]],  ControlPlacement -> Left,  TrackedSymbols :> True,   SaveDefinitions -> True]
3D Manipulate

And here is a corresponding 3D example.

SeedRandom[1]; Module[{deg = 3, pp = 16, L = 3, \[Delta], p, pts, sol, p1, cp,    pointsGraphic3D, pointsAndSurface, im2,               imesh, sm, ccs, bmr},   \[Delta] = 2 L/pp;  p[x_, y_, z_] = (x^2 + y^2 + z^2)^(2 deg) +     Sum[c[i, j, k] x^i y^j z^k, {i, 0, deg}, {j, 0, deg}, {k, 0, deg}];   pts = RandomReal[{-1, 1}, {Length@Cases[p[x, y, z], _c, \[Infinity]],      3}];    sol = Solve[(p @@@ pts) == 0, Cases[p[x, y, z], _c, \[Infinity]]];    p1 = p[x, y, z] /. sol[[1]];     cp = ContourPlot3D[    Evaluate[p1], {x, -L, L}, {y, -L, L}, {z, -L, L}, Contours -> {0}];  L = Ceiling[    Max[Abs[Transpose[       Cases[cp, _GraphicsComplex, \[Infinity]][[1, 1]]]]], 0.2];    pointsGraphic3D =    Graphics3D[{Red, Sphere[#, 0.05] & /@ pts}, PlotRange -> L];   pointsAndSurface =    Show[{cp =       ContourPlot3D[Evaluate[p1], {x, -L, L}, {y, -L, L}, {z, -L, L},        Contours -> {0},       ContourStyle -> Gray, Lighting -> "Neutral",        MeshFunctions -> {Norm[{#1, #2, #3}] & }], pointsGraphic3D},     Axes -> False];  im2 = Graphics3D[    Table[If[p1 < 0, {Opacity[0.3], EdgeForm[Blue], Gray, Opacity[0.3],                                                                       \  Cuboid[{x, y, z}/\[Delta] + pp/2, {x, y, z}/\[Delta] + pp/2 +          1]}, {}],                                                  {x, -L, L,       2 L/pp}, {y, L, -L, -2 L/pp}, {z, L, -L, -2 L/pp}],                   Lighting -> "Neutral", Axes -> False];   imesh = ImageMesh[Image3D[Table[Boole[p1 < 0],                                                          {x, -L, L,        2 L/pp}, {y, L, -L, -2 L/pp}, {z, L, -L, -2 L/pp}]],                                                     Method -> "MarchingCubes"];  ccs = Reverse[    SortBy[ConnectedMeshComponents[imesh], Length[MeshCells[#, 2]] &]];    sm = smooth3D[ccs[[1]], 2];   bmr = BoundaryMeshRegion[sm[[1]],     Style[Cases[sm, _Polygon, \[Infinity]],      Directive[Opacity[0.5], Darker[Green]]]];     Column[{Row[{pointsGraphic3D, " \[DoubleLongRightArrow] ",        pointsAndSurface, " \[DoubleLongRightArrow] "}],                     Row[{im2, " \[DoubleLongRightArrow] " ,        Show[{im2, imesh}, Boxed -> True], " \[DoubleLongRightArrow] "}],                      Row[{Show[{im2, bmr}, Boxed -> True],        " \[DoubleLongRightArrow] ",  Show[bmr, Boxed -> True]}]} /.                                                                       \                  gr_Graphics3D :> Show[gr, ImageSize -> 200]]]
3D Example

Shadows of the 3D Shapes

In her reply to my community post, Marina Shchitova showed some examples of faces and animals in shadows of hands and fingers. Some classic examples from the Cassel1896 book are shown here.

Hand shadows

So, what do projections/shadows of the above two 3D shapes look like? (For a good overview of the use of shadows in art at the time and place of the young Arp, see Forgione1999.)

The projections of these 3D shapes are exactly the types of shapes I encountered in the connected smoothed components of 2D images. The function projectTo2D takes a 3D graphics complex and projects it into thin slices parallel to the three coordinate planes. The result is still a 3D graphics object.

projectTo2D[GraphicsComplex[vs_, r__]] :=   Module[{f = 0.2, \[CurlyEpsilon] = 10^-2, t = Developer`ToPackedArray,    xMin, xMax, yMin, yMax, zMin,     zMax, \[Delta]x, \[Delta]y, \[Delta]z},   {{xMin, xMax}, {yMin, yMax}, {zMin, zMax}} = MinMax /@ Transpose[vs];   {\[Delta]x, \[Delta]y, \[Delta]z} = {xMax - xMin, yMax - yMin,      zMax - zMin};   {EdgeForm[],    {Darker[Red],      GraphicsComplex[      t[{xMin -            f \[Delta]x + \[CurlyEpsilon] (#1 -                xMin)/\[Delta]x, #2, #3} & @@@ vs], r]},     {Darker[Blue],      GraphicsComplex[      t[{#1, yMax +            f \[Delta]y + \[CurlyEpsilon] (#2 - yMin)/\[Delta]y, #3} & @@@         vs], r]},    {Darker[Green, 0.6],      GraphicsComplex[      t[{#1, #2,           zMin - f \[Delta]z + \[CurlyEpsilon] (#3 -                zMin)/\[Delta]z} & @@@ vs], r]}} ]

These are the 2×3 projections of the above two 3D shapes. Most people recognize animal shapes in the projections.

We get exactly these projections if we just look at the 3D shape from a large distance, with the viewpoint and view direction parallel to the coordinate axes.

{Graphics3D[{Darker[Blue], EdgeForm[], sampleRegionSmooth2},    ViewPoint -> {1, -20, 0}],   Graphics3D[{Darker[Green, 0.6], EdgeForm[], sampleRegionSmooth2},    ViewPoint -> {1, 0, 20}, ViewVertical -> {0, 1, 0}],  Graphics3D[{Darker[Red, 0.6], EdgeForm[], sampleRegionSmooth2},    ViewPoint -> {20, 0, 1}]}

For comparison, here are three views of the first object from very far away, effectively showing the projections.

By rotating the 3D shapes, we can generate a large variety of different shapes in the 2D projections. The following Manipulate allows us to explore the space of projections’ shapes interactively. Because we need the actual rotated coordinates, we define a function rotate, rather than using the built-in function Rotate.

rotationMatrix3D[{\[Alpha]1_, \[Alpha]2_, \[Alpha]3_}] :=   Module[{c1, s1, c2, s2, c3, s3},   {c3, s3, c2, s2, c1, s1} =     N@{Cos[\[Alpha]3], Sin[\[Alpha]3], Cos[\[Alpha]2], Sin[\[Alpha]2],       Cos[\[Alpha]1], Sin[\[Alpha]1]};   {{c3, s3, 0}, {-s3, c3, 0}, {0, 0, 1}}.           {{c2, 0, s2}, {0, 1, 0}, {-s2, 0, c2}}.           {{1, 0, 0}, {0, c1, s1}, {0, -s1, c1}}]
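The function rotate itself comes from the notebook's initialization section; a minimal sketch of it (my reconstruction, assuming it simply applies rotationMatrix3D to every vertex of an embedded GraphicsComplex) could look like this:

(* rotate all coordinates of any embedded GraphicsComplex by the given angles *)
rotate[gr_, angles_] := gr /.
  GraphicsComplex[vs_, rest___] :>
   GraphicsComplex[(rotationMatrix3D[angles] . #) & /@ vs, rest]

(Strictly speaking, any VertexNormals present in the graphics complex would have to be rotated as well.)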

Here is an array of 16 projections into the x-z plane for random orientations of the 3D shape.

projectToXZImage[GraphicsComplex[vs_, r__]] :=   Module[{f = 0.2, \[CurlyEpsilon] = 10^-2,     t = Developer`ToPackedArray, yMin, yMax, \[Delta]y },   {yMin, yMax} = MinMax@ Transpose[vs][[2]]; \[Delta]y = yMax - yMin;   ImageCrop@Image[Rasterize[      Graphics3D[{EdgeForm[], Darker[Blue],         GraphicsComplex[         t[{#1, yMax +               f \[Delta]y + \[CurlyEpsilon] (#2 -                   yMin)/\[Delta]y, #3} & @@@ vs], r]},       ViewPoint -> {0, -5, 0}, Boxed -> False]]]]

GraphicsGrid[Partition[Show[#, ImageSize -> 120] & /@    Table[projectToXZImage[      rotate[sampleRegionSmooth2, RandomReal[{-Pi, Pi}, 3]]], 16], 4],  Spacings -> {0, 0}]

The initial 3D image does not have to be completely random. In the next example, we randomly place circles in 3D and color a voxel white if the circle intersects the voxel. As a result, the 3D shapes corresponding to the connected voxel regions have a more network-like shape.

randomCircle[l : {{xmin_, xmax_}, {ymin_, ymax_}, {zmin_, zmax_}}] :=
 Module[{mp = RandomReal /@ l, \[Delta] = Mean[Abs[Subtract @@@ l]],
   dir1, dir2, \[Rho]1, \[Rho]2},
  {dir1, dir2} = Orthogonalize[RandomReal[{-1, 1}, {2, 3}]];
  {\[Rho]1, \[Rho]2} = RandomReal[\[Delta]/2 {0, 1}, 2];
  Circle3D[mp, {\[Rho]1, \[Rho]2}, {dir1, dir2}]]
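Circle3D is not a built-in symbol; from its usage it presumably represents an ellipse with center mp, semi-axis lengths ρ1 and ρ2, and orthonormal axis directions dir1 and dir2. A minimal sketch for converting such an object into renderable geometry (the converter name and the sampling density are my choices) could be:

(* sample the ellipse mp + r1 Cos[t] d1 + r2 Sin[t] d2 as a closed polyline *)
circle3DToLine[Circle3D[mp_, {r1_, r2_}, {d1_, d2_}]] :=
 Line[Table[mp + r1 Cos[t] d1 + r2 Sin[t] d2, {t, 0., 2. Pi, 2. Pi/64}]]

Deciding which voxels such a circle intersects can then be done, for instance, by testing the sampled points against the voxel cuboids.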

3D Shapes with Bilateral Symmetry

2D projection shapes of 3D animals typically have no symmetry. Even if an animal has a symmetry, the shape visible from a given viewpoint and for a given animal posture does not. But most animals have a bilateral symmetry, so I will now use random images that have a bilateral symmetry too. As a result, many of the resulting shapes will also have a bilateral symmetry; not all of them, though, because some regions do not intersect the symmetry plane. Bilateral symmetry is important for the classic Rorschach inkblot test: “The mid-line appears to attract the patient’s attention with a sort of magical power,” noted Rorschach (Schott2013). The function makeSymmetricShapes3D will generate regions with bilateral symmetry.

makeSymmetricShapes3D[{dimz_, dimy_, dimx_}, {black_, white_}] :=    Module[{ii, randomImage, imesh},    ii[x_, y_,      z_] := (ii[x, y, z] =       ii[x, 1 + dimy - y, z] =        RandomChoice[{black, white} -> {0, 1}]);   randomImage =     Image3D[Table[ii[x, y, z], {x, dimx}, {y, dimy}, {z, dimz}]];    imesh =     ImageMesh[randomImage, Method -> "MarchingCubes",      CornerNeighbors -> False];        Select[ConnectedMeshComponents@imesh, 10 < Volume[#] &]]

Here are some examples.

SeedRandom[888]; symmShapes =   Table[makeSymmetricShapes3D[{d, d, d}, {3, 1}], {d, 5, 8}]

And here are smoothed and colored versions of these regions. The viewpoint is selected in such a way as to make the bilateral symmetry most obvious.

displaySmoothedRegion[reg_BoundaryMeshRegion, color_Directive,    opts___] :=   With[{sm = smooth3D[reg, 2]},   Show[BoundaryMeshRegion[sm[[1]], Style[sm[[2, 1]] , color],      PlotTheme -> "SmoothShading"], opts]]

To get a better feeling for the connection between the pixel values of the 3D image and the resulting smoothed shape, the next Manipulate allows us to specify each pixel value for a small-sized 3D image. The grids/matrices of checkboxes represent the voxel values of one-half of a 3D image with bilateral symmetry.

Manipulate[  DynamicModule[{v = v0, T, imesh, sb, reg, gList},    Column[{Column[{Text[Style["voxel values", Gray, Italic]],        Row[Join[Riffle[          Table[           With[{j = j},             Underscript[Grid[Table[With[{iL = i, jL = j, kL = k},                Checkbox[Dynamic[v[[iL, jL, kL]]]]], {k, kz}, {i, ix}],               Spacings -> 0],                                                                  Text[Style[Row[{"y", "=", j, If[j == jy + 1 - j, "",                                                         Row[{" | y", "=", jy + 1 - j}]]}], Gray,                Italic]]]], {j, Ceiling[jy/2]}],           "\[VerticalSeparator]"], {" "},         {Dynamic[           If[imesh =!= EmptyRegion[3],             Show[reg, ImageSize -> {{140}, {140}},              ViewPoint -> {-3, 1, 1}], ""],           TrackedSymbols :> {reg, imesh}]}]]}],                      Dynamic[T =        Table[Boole@v[[i, Min[j, jy + 1 - j], k]], {k, kz}, {j, jy}, {i,          ix}];                    imesh = ImageMesh[Image3D[T], Method -> "MarchingSquares"];                 If[imesh =!= EmptyRegion[3],        sb = SortBy[ConnectedMeshComponents@imesh, Volume];            Column[{reg = sb[[-1]];         Graphics3D[smooth3D[reg, sm], ImageSize -> 400,           ViewPoint -> {-3, 1, 1},                               Ticks -> None, Axes -> True,           AxesLabel -> {"x", "y", "z"}]}], "empty region"],      TrackedSymbols :> {v}]}, Dividers -> All]],  Row[{Underscript[Control[{{ix, 5, ""}, 3, 10, 1, SetterBar}],      Style["(x)", Gray]], "\[Times]",          Underscript[Control[{{jy, 5, ""}, 3, 10, 1, SetterBar}],      Style["(y)", Gray]], "\[Times]",               Underscript[Control[{{kz, 6, ""}, 3, 10, 1, SetterBar}],      Style["(z)", Gray]]}],      Delimiter,  {{sm, 1, "smoothness"}, 1, 3, 1, SetterBar},  {{v0, MapAt[True &, Table[False, {10}, {10}, {10}],     {{1, 1, 2}, {2, 1, 2}, {3, 1, 2}, {3, 2, 2}, {3, 3, 2}, {3, 4,        2}, {1, 5, 2},      {2, 5, 2}, {3, 5, 2}, {3, 2, 3}, {3, 4, 3}, {3, 1, 4}, {3, 3,        4}, {3, 5, 4},      {2, 1, 5}, {3, 3, 5}, {2, 5, 5}, {1, 1, 6}, {1, 3, 6}, {2, 3,        6}, {3, 3, 6}, {1, 5, 6}} ]}, None},  TrackedSymbols :> {ix, jy, kz, sm}, SaveDefinitions -> True]


Randomly and independently selecting the voxel values of a 3D image makes it improbable that very large connected components without many holes form. If we instead derive the voxel values from random continuous functions, we obtain different-looking types of 3D shapes that are more uniform over the voxel range; effectively, the voxel values are no longer totally uncorrelated.

makeSymmetricShapes3DFunctionBased[{dimz_, dimy_, dimx_}, G_] :=
 Module[{fun, randomImage, imesh, M = 2 Max[{dimx, dimy, dimz}], x, y, z},
  fun[x_, y_, z_] = Sum[
    Cos[RandomReal[{-M, M}] (y - (dimy + 1)/2)] *
     Cos[RandomReal[{-M, M}] x + 2 Pi RandomReal[]] *
     Cos[RandomReal[{-M, M}] z + 2 Pi RandomReal[]], {4}];
  randomImage = Image3D[
    Table[If[fun[x, y, z] > G, 0, 1], {x, dimx}, {y, dimy}, {z, dimz}]];
  imesh = ImageMesh[randomImage, Method -> "MarchingSquares"];
  Select[ConnectedMeshComponents@imesh, 10 < Volume[#] &]]

Here are some examples of the resulting regions, as well as their smoothed versions.

SeedRandom[55]; symmFunctionShapes =   Table[makeSymmetricShapes3DFunctionBased[{d, d, d}, -0.3], {d, 5, 8}]

symmFunctionShapes /. bmr_BoundaryMeshRegion :>    displaySmoothedRegion[bmr,     Directive[Blend[{GrayLevel[0.5], Orange}, 0.1],      Specularity[Purple, 10]], ViewPoint -> {-3, -0.5, 1.2}]

Selected Examples of 3D Shapes

The initialization section of our notebook contains more than 400 selected regions of “interesting” shapes, classified into seven types (mostly arbitrarily, but based on human feedback).

types = <|"asymmetric general shapes" -> aymmetricGeneralShapes,                 "asymmetric animal shapes" -> asymmetricAnimalShapes,                 "symmetric general shapes"  -> symmetricGeneralShapes,                 "symmetric animal shapes" -> symmetricAnimalShapes,                  "symmetric alien shapes" -> symmetricAlienShapes,                     "asymmetric function animal shapes" ->      asymmetricFunctionAnimalShapes,                     "symmetric function animal shapes" ->      symmetricFunctionAnimalShapes|>;

Let’s look at some examples of these regions. Here is a list of some selected ones. Many of these shapes found in random 3D images could be candidates for Generation 8 Pokémon or even some new creatures, tentatively dubbed Mathtubbies.

selections = <|    "asymmetric general shapes" ->       {1, 4, 7, 8, 9, 10, 11, 13, 18, 20, 32, 35, 39, 43, 48, 49},     "asymmetric animal shapes" ->       {3, 4, 5, 6, 7, 10, 11, 13, 14, 15, 16, 17, 18, 24, 25, 28},     "symmetric general shapes"  ->  {1, 4, 7, 12, 15, 16, 18, 20, 22,       25, 26, 27, 28, 29, 33, 35, 36, 39, 41, 42} ,       "symmetric animal shapes" ->  {2, 3, 5, 6, 7, 8, 9, 10, 11, 12,       14, 15, 20, 22, 23, 25, 26, 31, 32, 35},       "symmetric alien shapes" ->      {2, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 20, 26, 30, 38, 39},         "asymmetric function animal shapes" -> {4, 5, 6, 9, 10, 11,       13, 15, 18, 22, 29, 30, 34, 39, 41, 54, 58, 66, 69, 76},         "symmetric function animal shapes" -> {1, 4, 5, 6, 10, 13, 16,       20, 21, 26, 29, 32, 34, 35, 36, 41, 78, 88, 90, 92}|>;

Many of the shapes are reminiscent of animals, even if the number of legs and heads is not always the expected number.

Do[Print[Framed[Style[t, Bold, Gray], FrameStyle -> Gray]];   Print /@ Partition[    Show[Rasterize[#], ImageSize -> {{200}, {200}}] & /@ (makeRegion /@        types[t][[selections[[t]]]]), 4],  {t, Keys[types]}]

asymmetric general shapes

asymmetric general shapes 1

asymmetric general shapes 2

asymmetric general shapes 3

asymmetric general shapes 4

asymmetric animal shapes

asymmetric animal shapes 1

asymmetric animal shapes 2

asymmetric animal shapes 3

asymmetric animal shapes 4

symmetric general shapes

symmetric general shapes 1

symmetric general shapes 2

symmetric general shapes 3

symmetric general shapes 4

symmetric general shapes 5

symmetric animal shapes

symmetric animal shapes 1

symmetric animal shapes 2

symmetric animal shapes 3

symmetric animal shapes 4

symmetric animal shapes 5

symmetric alien shapes

symmetric alien shapes 1

symmetric alien shapes 2

symmetric alien shapes 3

symmetric alien shapes 4

asymmetric function animal shapes

asymmetric function animal shapes 1

asymmetric function animal shapes 2

asymmetric function animal shapes 3

asymmetric function animal shapes 4

asymmetric function animal shapes 5

symmetric function animal shapes

symmetric function animal shapes 1

symmetric function animal shapes 2

symmetric function animal shapes 3

symmetric function animal shapes 4

symmetric function animal shapes 5

To see all of the 400+ shapes from the initialization cells, one could carry out the following.

Do[Print[Framed[Style[t, Bold, Gray], FrameStyle -> Gray]];
Do[Print[Rasterize @ makeRegion @ r], {r, types[t]}], {t,Keys[types]}]

The shapes in the list above were manually selected. One could now go ahead and partially automate the finding of interesting animal-looking shapes and “natural” orientations using machine learning techniques. In the simplest case, we could just use ImageIdentify.

ImageIdentify[ , "animal", 5, "Probability"]

This seems to be a stegosaurus-poodle crossbreed. We will not pursue this direction here, but rather return to the 2D projections. (For using software to find faces in architecture and general equipment, see Hong2014.)

Modifying the 3D Shapes

Before returning to the 2D projections, we will play for a moment with the 3D shapes generated and modify them for a different visual appearance.

For instance, we could tetrahedralize the regions and fill the tetrahedra with spheres.

makeRegion[reg_, n_] :=   With[{sr = smooth3D[reg[[1]], n]},    BoundaryMeshRegion[sr[[1]], sr[[2, 1]]]]

Or with smaller tetrahedra.

dualTetrahedron[Tetrahedron[l_]] :=   Tetrahedron[ Mean /@ Subsets[l, {3}]]

Or add some spikes.

addPrickle[Polygon[{p1_, p2_, p3_}], \[Alpha]_: 1 ] :=   Module[{mp = Mean[{p1, p2, p3}], normal, \[Lambda]},   normal = Normalize[Cross[p1 - mp, p2 - mp]];   \[Lambda] = Mean[EuclideanDistance[#, mp] & /@ {p1, p2, p3}];   Tetrahedron[{p1, p2, p3, mp + \[Alpha] \[Lambda] normal}] ]

Or fill the shapes with cubes.

makeRandomPoints[d_, n_] := RandomPoint[makeRegion[d, 2], n]

Or thicken or thin the shapes.

thickenThinnen[gr_, d_] :=   Show[gr] /.    GraphicsComplex[vs_, b_, VertexNormals -> ns_] :>     GraphicsComplex[ vs + d Normalize /@ ns, b, VertexNormals -> ns]

Or thicken and add thin bands.

Module[{ob = symmetricAlienShapes[[43]], dr, dd},  dr = SignedRegionDistance[ob[[1]]];  dd[{x_Real, y_Real, z_Real}] := dr[{x, y, z}];  Row[{Show[makeRegion[ob], ImageSize -> 240],       ContourPlot3D[dd[{x, y, z}], {x, 0, 9}, {y, -1, 9}, {z, -1, 8},      Contours -> {0.33}, PlotPoints -> 80, MaxRecursion -> 0,     MeshFunctions -> {#3 &}, Mesh -> 40,      MeshShading -> {ob[[2]], None},     Evaluate[makeOptions[ob]], Boxed -> False, Axes -> False,      ImageSize -> 320]}]]

Or just add a few stripes as camouflage.

tigerize[{reg_, col_, {vp_, vd_}}, {col1_, col2_}, {stripes_, xyz_}] :=
 Module[{sm = smooth3D[reg, 3], g, size},
  g = Show[BoundaryMeshRegion[sm[[1]], sm[[2, 1]],
     PlotTheme -> "SmoothShading"], ViewPoint -> vp, ViewVertical -> vd];
  size = Abs[Subtract @@ MinMax[Transpose[sm[[1]]][[xyz]]]];
  g /. GraphicsComplex[vs_, rest__] :> GraphicsComplex[vs, rest,
     VertexColors ->
      (Blend[{col1, col2}, Sin[2 Pi stripes #[[xyz]]/size]^2] & /@ vs)]]

Or model the inside through a wireframe of cylinders.

makeCylinders[pts_, m_, \[Rho]_] := Module[{nf = Nearest[pts]},     {Union[Flatten[      Function[p,         Cylinder[Sort@{#, p}, \[Rho]] & /@  Rest[ nf[p, m + 1]]] /@        pts]],     Sphere[#, \[Rho]] & /@ pts} ]

Or build a stick figure.

toStickFigure[ob_, \[Delta]_] :=   Module[{pts, nf, gr, ccs, modCol,                      f = RandomChoice[{Lighter, Darker}][#, RandomReal[{0, 0.2}]] &},     nf = Nearest[     pts = Cases[makeRegion[ob], _GraphicsComplex, \[Infinity]][[1,        1]]];   gr = Graph[     UndirectedEdge[#, nf[#, {Infinity, \[Delta]}][[-1]]] & /@ pts];   ccs = WeaklyConnectedGraphComponents[gr];   modCol[] := ob[[2]] /. Directive[col1_, Specularity[col2_, e2_]] :>                                                         Directive[f[col1],        Specularity[f[col2], RandomReal[{0.75, 1.25}] e2]];   Graphics3D[{EdgeForm[], CapForm[None],      {modCol[],         Cylinder[Union[Sort /@ List @@@ EdgeList[#]], 0.05]} & /@       Take[ccs, All],       ob[[2]], Sphere[#, 0.05] & /@ pts}, makeOptions[ob],     Boxed -> False,     Method -> {"TubePoints" -> 6, "SpherePoints" -> 6}]]

Or fill the surface with a tube.

makeTube[ob_, n_, \[Rho]_] :=  Module[{dr = makeRegion[ob, 1], pairs, neighbors, nl, mcs},   pairs = {#[[1, 1]], Last /@ #} & /@ Split[Sort[Flatten[{First[#],            Reverse[First[#]]} & /@ MeshCells[dr, 1],         1]], #1[[1]] == #2[[1]] &];   (neighbors[#1] = #2) & @@@ pairs;   nl = NestList[RandomChoice[DeleteCases[neighbors[#], #]] &, 1, n];   mcs = MeshCoordinates[dr];   Tube[BSplineCurve[mcs[[nl]]], \[Rho]]]

Or a Kelvin inversion.

With[{g = With[{o = aymmetricGeneralShapes[[50]]},     With[{sm = smooth3D[o[[1]], 3]},      Show[BoundaryMeshRegion[sm[[1]], Style[sm[[2, 1]], o[[2]]],        PlotTheme -> "SmoothShading"]]]]},  {Row[{Show[g, ImageSize -> 240],                 Show[invert3D[g, {4, 4, 4}], ViewPoint -> {2.62, -2.06, -0.52},                       ViewVertical -> {-0.04, -0.92, -0.42},       ImageSize -> 280]}]}]
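The function invert3D also comes from the initialization section; presumably it performs a Kelvin inversion x ↦ c + (x − c)/|x − c|² of all vertices with respect to the inversion center c. A minimal sketch (my reconstruction, not the notebook's definition):

(* Kelvin inversion of all coordinates of embedded GraphicsComplex objects
   with respect to the center c *)
invert3D[gr_, c_] := gr /.
  GraphicsComplex[vs_, rest___] :>
   GraphicsComplex[(c + (# - c)/Norm[# - c]^2) & /@ vs, rest]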

Shadows of the Selected Examples

If we look at the 2D projections of some of these 3D shapes, we can see again (with some imagination) a fair number of faces, witches, kobolds, birds and other animals. Here are some selected examples. We show the 3D shape in the original orientation, a randomly oriented version of the 3D shape, and the three coordinate-plane projections of the randomly rotated 3D shape.

projectionPair[{{type_, n_}, angles_}] :=  Module[{opts, col, sr},   opts = Sequence[ImageSize -> {{220}, {220}}, BoxRatios -> {1, 1, 1},      ViewPoint -> {3, -3, 3}, Axes -> False, Boxed -> False];   col = types[type][[n, 2]];   sr = smooth3D[types[type][[n, 1]], 3];   Row[Riffle[Framed /@ Rasterize /@        {Graphics3D[{EdgeForm[], col, sr},          ViewPoint -> types[type][[n]][[3, 1]],                                           ViewVertical -> types[type][[n]][[3, 2]],          ImageSize -> {{220}, {220}}, Axes -> False, Boxed -> False],         Graphics3D[{EdgeForm[], col, rotate[sr, angles]}, opts],         Graphics3D[projectTo2D[rotate[sr, angles]], opts]}, " "]]]

Unsurprisingly, some are recognizable 3D shapes, like these projections that look like bird heads.

projectionPair[{{"asymmetric animal shapes", 15}, {-2.8, 3.05, 2.35}}]

Others are much more surprising, like the two heads in the projections of the two-legged-two-finned frog-dolphin.

projectionPair[{{"symmetric general shapes", 34}, {2.8, -1.4, 1.4}}]

Different orientations of the 3D shape can yield quite different projections.

projectionPair[{{"asymmetric general shapes",     49}, {-3.05, -0.75, -1.3}}]

For the reader’s amusement, here are some more projections.

projectionPair[{{"symmetric alien shapes", 3}, {-0.4, -0.25, 0.85}}]

projectionPair[{{"symmetric alien shapes", 7}, {0., 2.55, 0.6}}]

projectionPair[{{"asymmetric general shapes", 11}, {-1.25,     0.05, -1.6}}]

projectionPair[{{"asymmetric general shapes",     9}, {-0.15, -0.85, -0.55}}]

projectionPair[{{"symmetric general shapes", 26}, {1.8, -2.6, -2.3}}]

projectionPair[{{"asymmetric animal shapes", 5}, {2.65, 2.1, -2.85}}]

projectionPair[{{"asymmetric general shapes",     34}, {-3.1, -2.95, -1.}}]

Shapes from 4D Images

Now that we have looked at 2D projections of 3D shapes, the next natural step would be to look at 3D projections of 4D shapes. And while there is currently no built-in function Image4D, it is not too difficult to find the connected components of white 4D voxels ourselves. We implement this through the graph theory function ConnectedComponents and consider two 4D voxels as connected by an edge if they share a common 3D cube face. As an example, we use a 10×10×10×10-voxel 4D image. makeVoxels4D makes the 4D image data, and a position-lookup helper (blackPos inside getConnected4DVoxels below) marks the voxels of interest for quick access.

makeVoxels4D[{dimw_, dimz_, dimy_, dimx_}, {black_, white_}] :=  Table[RandomChoice[{black, white} -> {0,       1}], {dimw}, {dimz}, {dimy}, {dimx}]

The 4D image contains quite a few connected components. (Here gr is the voxel-adjacency graph just described; its construction is packaged into the function getConnected4DVoxels below.)

ccs = ConnectedComponents[gr];

Here are the four canonical projections of the 4D complex.

With[{cc = ccs[[1]]},  {Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#1, #2, #3}) & @@@ cc,                       AxesLabel -> {"x", "y", "z"}, Axes -> True,     Ticks -> False],   Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#1, #2, #4}) & @@@ cc,                           AxesLabel -> {"x", "y", "w"}, Axes -> True,     Ticks -> False],   Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#1, #3, #4}) & @@@ cc,                           AxesLabel -> {"x", "z", "w"}, Axes -> True,     Ticks -> False],   Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#2, #3, #4}) & @@@ cc,                           AxesLabel -> {"y", "z", "w"}, Axes -> True,     Ticks -> False]}]

We package the finding of the connected components into a function getConnected4DVoxels.

getConnected4DVoxels[Image4D[l_], n_] :=   Module[{posis, blackPos, edges, gr, v = UnitVector[4, #] &},   posis =     DeleteCases[     Level[MapIndexed[If[# === 0, #2, Nothing] &, l, {-1}], {-2}], {}];   (blackPos[#] = True) & /@ posis;    edges = Union[Flatten[Table[If[TrueQ[blackPos[# + v[j]]],                Sort@ UndirectedEdge[#, # + v[j]], {}] & /@ posis, {j,         4}]]];   gr = Graph[edges];   Take[Reverse[SortBy[ConnectedComponents[gr], Length]], UpTo[n]]]

We also define a function rotationMatrix4D for conveniently carrying out rotations in the six 2D planes of 4D space.

rotationMatrix4D[{\[Omega]xy_, \[Omega]xz_, \[Omega]xw_, \[Omega]yz_, \[Omega]yw_, \[Omega]zw_}] :=
 With[{u = UnitVector[4, #] &, c = Cos, s = Sin},
  Fold[Dot, IdentityMatrix[4],
   {{{c[\[Omega]xy], s[\[Omega]xy], 0, 0}, {-s[\[Omega]xy], c[\[Omega]xy], 0, 0}, u[3], u[4]},
    {{c[\[Omega]xz], 0, s[\[Omega]xz], 0}, u[2], {-s[\[Omega]xz], 0, c[\[Omega]xz], 0}, u[4]},
    {{c[\[Omega]xw], 0, 0, s[\[Omega]xw]}, u[2], u[3], {-s[\[Omega]xw], 0, 0, c[\[Omega]xw]}},
    {u[1], {0, c[\[Omega]yz], s[\[Omega]yz], 0}, {0, -s[\[Omega]yz], c[\[Omega]yz], 0}, u[4]},
    {u[1], {0, c[\[Omega]yw], 0, s[\[Omega]yw]}, u[3], {0, -s[\[Omega]yw], 0, c[\[Omega]yw]}},
    {u[1], u[2], {0, 0, c[\[Omega]zw], s[\[Omega]zw]}, {0, 0, -s[\[Omega]zw], c[\[Omega]zw]}}}]];

Once we have the 3D projections, we can again use the above function to smooth the corresponding 3D shapes.

to3DImage[l_] :=   With[{mins = Min /@ Transpose[l]}, (# - mins) + 1 & /@ l]

In the absence of Tralfamadorian vision, we can visualize a 4D connected voxel complex, rotate this complex in 4D, then project into 3D, smooth the shapes and then project into 2D. For a single 4D shape, this yields a large variety of possible 2D projections. The function projectionGrid3DAnd2D projects the four 3D projections canonically into 2D. This means we get 12 projections. Depending on the shape of the body, some might be identical.

extractRegion[vs_] := Last[SortBy[ConnectedMeshComponents[     ImageMesh[Image3D[SparseArray[vs -> 1]],       Method -> "MarchingSquares"]], Volume]]

We show the 3D shape in a separate graphic so as not to cover up the projections. Again, many of the 2D projections, and also some of the 3D projections, remind us of animal shapes.

projectionGrid3DAnd2D[ccs[[1]], {1, 2, 3, 4, 5, 6}, 2,   Directive[GrayLevel[0.4], Specularity[Yellow, 12]]]

The following Manipulate allows us to rotate the 4D shape. The human mind sees many animal shapes and faces.

Manipulate[
 projectionGrid3DAnd2D[ccs[[c]],
  {\[Omega]xy, \[Omega]xz, \[Omega]xw, \[Omega]yz, \[Omega]yw, \[Omega]zw}, 1,
  Directive[GrayLevel[0.4], Specularity[Yellow, 12]]],
 {{c, 2, "component"}, 1, 12, 1, SetterBar},
 Delimiter,
 {{s, 1, "smoothness"}, {0, 1, 2}},
 Delimiter,
 {{\[Omega]xy, 0}, -Pi, Pi, ImageSize -> Small},
 {{\[Omega]xz, 0}, -Pi, Pi, ImageSize -> Small},
 {{\[Omega]xw, 0}, -Pi, Pi, ImageSize -> Small},
 {{\[Omega]yz, 0}, -Pi, Pi, ImageSize -> Small},
 {{\[Omega]yw, 0}, -Pi, Pi, ImageSize -> Small},
 {{\[Omega]zw, 0}, -Pi, Pi, ImageSize -> Small},
 TrackedSymbols :> True, ControlPlacement -> Left, SaveDefinitions -> True]


Here is another example, with some more scary animal heads.

SeedRandom[8]; projectionGrid3DAnd2D[          getConnected4DVoxels[    Image4D[makeVoxels4D[{10, 10, 10, 10}, {4, 1}]], 5][[1]],     {-1.8, 2.6, 1., 2.2, -2.7, -1.5}, 3,   Directive[Darker[Yellow, 0.4], Specularity[Red, 10]]]


We could now go to 5D images, but this will very probably bring no new insights. To summarize some of the findings: After rotation and smoothing, a few percent of the connected regions of black voxels in random 3D images have an animal-like shape, or an artistic rendering of an animal-like shape. A large fraction (~10%) of the projections of these 3D shapes into 2D pronouncedly show the pareidolia phenomenon, in the sense that we believe we can recognize animals and faces in these projections. 4D images, due to the voxel count that increases exponentially with dimension, yield an even larger number of possible animal and face shapes.

To download this post as a CDF, click here. New to CDF? Get your copy for free with this one-time download.

What Do Gravitational Crystals Really Look (i.e. Move) Like? http://blog.wolfram.com/2016/06/02/what-do-gravitational-crystals-really-look-i-e-move-like/ http://blog.wolfram.com/2016/06/02/what-do-gravitational-crystals-really-look-i-e-move-like/#comments Thu, 02 Jun 2016 18:21:15 +0000 Michael Trott http://blog.internal.wolfram.com/?p=31322 In a recent blog, Stephen Wolfram discusses the idea of what he calls “gravitational crystals.” These are infinite arrays of gravitational bodies in periodic motion. Two animations of mesmerizing movements of points were given as examples of what gravitational crystals could look like, but no explicit orbit calculations were given.

In this blog, I will carefully calculate explicit numerical examples of gravitational crystal movements. The “really” in the title should be interpreted as a high-precision, numerical solution to an idealized model problem. It should not be interpreted as “real world.” No retardation, special or general relativistic effects, stability against perturbation, tidal effects, or so on are taken into account in the following calculations. More precisely, we will consider the simplest case of a gravitational crystal: two gravitationally interacting, rigid, periodic 2D planar arrays embedded in 3D (meaning a 1/distance² force law) of masses that can move translationally with respect to each other (no rotations between the two lattices). Each infinite array can be considered a crystal, so we are looking at what could be called the two-crystal problem (parallel to, and at the same time in distinction to, the classical gravitational two-body problem).

Crystals in motion

Crystals have been considered for centuries as examples of eternal, never-changing objects. Interestingly, various other time-dependent versions of crystals have been suggested over the last few years. Shapere and Wilczek suggested space-time crystals in 2012, and Boyle, Khoo, and Smith suggested so-called choreographic crystals in 2014.

In the following, I will outline the detailed asymptotic calculation of the force inside a periodic array of point masses and the numerical methods to find periodic orbits in such a force field. Readers not interested in the calculation details should fast-forward to the interactive demonstration in the section “The resulting gravitational crystals.”

The force of a square grid of masses

Within an infinite crystal-like array of point masses, no net force is exerted on any of the point masses due to symmetry cancellation of the forces of the other point masses. This means we can consider the whole infinite array of point masses as rigid. But the space between the point masses has a nontrivial force field.

To calculate orbits of masses, we will have to solve Newton's famous equation of motion. So, we need the force of an infinite array of 1/r potentials. We will consider the simplest possible case, namely a square lattice of point masses with lattice constant L. The force at a point {x,y} is given by the following double sum:

The force at a point {x,y}
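In Wolfram Language terms, the truncated version of this double sum reads as follows (a sketch; masses and the gravitational constant are normalized to 1, and forceTruncated is my name for it):

(* brute-force truncated lattice sum for the force at {x, y};
   each mass at {i L, j L} attracts with a 1/distance^2 law *)
forceTruncated[{x_, y_}, M_, L_: 1.] :=
 Sum[-({x, y} - {i L, j L})/Norm[{x, y} - {i L, j L}]^3,
  {i, -M, M}, {j, -M, M}]

Comparing forceTruncated[{0.3, 0.4}, M] for M = 10, 100, 1000 shows how slowly the values settle down; this is the convergence problem discussed below.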

Unfortunately, we can’t sum this expression in closed form. Using the sum of the potentials is not easier either; it actually adds a complication, in the form of the potential diverging. (Although deriving and subtracting the leading divergent term is possible: if we truncate the sums at ±M, we have a linearly divergent term 8 M sinh⁻¹(1).)

Truncating the sums at ±M

So one could consider a finite 2D array of (2M+1)×(2M+1) point masses in the limit M→∞.

Finite 2D array of (2M+1)×(2M+1)

But the convergence of the double sum is far too slow to get precise values for the force. (We want the orbit periodicity to be correct to, say, 7 digits. This means we need to solve the differential equation to about 9 digits, and for this we need the force to be correct to at least 12 digits.)

Comparing force values for various lattice truncations

Because the force is proportional to 1/distance², and the number of point masses grows with distance squared, taking all points into account is critical for a precise force value. Any approximation can’t make use of a finite number of point masses, but must instead include all point masses.

Borrowing some ideas from York and Wang, Lindbo and Tornberg, and Bleibel for calculating the Madelung constant to high precision, we can make use of one of the most popular integrals in mathematics.

One of the most popular math integrals
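The integral meant here is presumably the Gaussian representation 1/r = (2/√π) ∫₀^∞ exp(−r²t²) dt, which Integrate readily confirms:

(* verify the Gaussian integral representation of 1/r *)
Integrate[2/Sqrt[Pi] Exp[-r^2 t^2], {t, 0, Infinity}, Assumptions -> r > 0]
(* 1/r *)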

This allows us to write the force as:

Writing the force as an expression

Exchanging integration and summation, we can carry out the double sum over all (2∞+1)2 lattice points in terms of elliptic theta functions.

Double sum over all lattice points in terms of elliptic theta functions

Here we carry out the gradient operation under the integral sign:

Gradient operation under the integral sign

We obtain the following converging integral:

Obtaining a converging integral

While the integral does converge, numerical evaluation is still quite time consuming, and is not suited for a right-hand-side calculation in a differential equation.

Timing a force calculation

Now let’s remind ourselves about some properties of the Jacobi elliptic theta function 3. The two properties of relevance to us are its sum representation and its inversion formula.

Sum representation and inversion formula from the Jacobi elliptic theta function 3
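Concretely, these are the sum representation ϑ3(z, q) = 1 + 2 Σ_(n≥1) q^(n²) cos(2 n z) and the inversion formula ϑ3(0, e^(−π t)) = t^(−1/2) ϑ3(0, e^(−π/t)). Both are easy to check numerically:

(* sum representation *)
With[{z = 0.3, q = 0.2},
 {EllipticTheta[3, z, q], 1 + 2 Sum[q^(n^2) Cos[2 n z], {n, 1, 25}]}]

(* inversion formula *)
With[{t = 0.7},
 {EllipticTheta[3, 0, Exp[-Pi t]], EllipticTheta[3, 0, Exp[-Pi/t]]/Sqrt[t]}]

In both cases the two numbers agree to machine precision.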

The first identity shows that for t→0, the theta function (and its derivative) vanishes exponentially. The second identity shows that exponential decay can also be achieved at t→∞.

Using the sum representation, we can carry out the t integration in closed form after splitting the integration interval into two parts. As a result, we obtain for the force a sum representation that is exponentially convergent.

After some lengthy algebra, as one always says (which isn’t so bad when using the Wolfram Language, but is still too long for this short note), one obtains a formula for the force when using the above identities for ϑ3 and similar identities for ϑ′3. Here is the x component of the force. Note that M is now the truncation limit of the sum representation of the elliptic theta function, not the size of the point mass lattice. The resulting expression for the force components is pretty large, with a leaf count of nearly 4,000. (Open the cell in the attached notebook to see the full expression.)

leaf count = 3744

Here is a condensed form for the force in the x direction that uses the abbreviation r_ij = (x + i L)² + (y + j L)²:

Condensed form of the force

Truncating the exponentially convergent sums shows that truncation at around 5 terms gives about 17 correct digits for the force.

Truncation at around 5 terms gives about 17 correct digits for the force

The convergence speed is basically independent of the position {x, y}. In the next table, we use a point on the diagonal near to the point mass at the origin of the coordinate system.

Point on the diagonal near to the point mass at the origin of the coordinate system

For points near a point mass, we recover, of course, the 1/distance² law.

Radial expansion of the force

For an even faster numerical calculation of the force, we drop higher-order terms in the double sums and compile the force.

Dropping higher-order terms in the double sums to compile the force
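The post's actual compiled function (shown above) is based on the exponentially convergent theta-series expression. Purely to illustrate the structure of such a compiled force, here is what compiling the plain truncated sum would look like (a sketch only; the timings and digit counts quoted below refer to the theta-based version, not to this sketch):

(* compiled brute-force lattice sum, lattice constant 1 *)
forceTruncatedCompiled = Compile[{{x, _Real}, {y, _Real}},
  Module[{fx = 0., fy = 0., dx = 0., dy = 0., r3 = 0.},
   Do[dx = x - i; dy = y - j; r3 = (dx dx + dy dy)^1.5;
    fx -= dx/r3; fy -= dy/r3,
    {i, -40, 40}, {j, -40, 40}];
   {fx, fy}]];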

All digits of the force are correct to machine precision.

Numerical force computation

And the calculation of a single force value takes about a tenth of a millisecond, which is well suited for further numerical calculations.

Timing of numerical force computation

For further use, we define the function forceXY that for approximate position values returns the 2D force vector.

Definition of force computation

The space of possible orbits

So now that we have a fast-converging series expansion for the force for the full infinite array of point masses, we are in good shape to calculate orbits.

The simplest possible situation is two square lattices of identical lattice spaces with the same orientation, moving relative to each other. In this situation, every point mass of lattice 1 experiences the same cumulative force from lattice 2, and vice versa. And within each lattice, the total force on each point mass vanishes because of symmetry.

Similar to the well-known central force situation, we can also separate the center of mass from the relative motion. The result is the equation of motion for a single mass point in the field of one lattice.
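With a fast force function in hand, a trajectory is one NDSolve call away. Here is a sketch (assuming forceXY[x, y] returns the force vector {Fx, Fy}; the initial conditions are arbitrary):

(* scalar components of the force, with numeric-argument guards for NDSolve *)
forceX[x_?NumericQ, y_?NumericQ] := forceXY[x, y][[1]];
forceY[x_?NumericQ, y_?NumericQ] := forceXY[x, y][[2]];

{xSol, ySol} = NDSolveValue[
   {x''[t] == forceX[x[t], y[t]], y''[t] == forceY[x[t], y[t]],
    x[0] == 0.25, y[0] == 0.5, x'[0] == 0., y'[0] == 0.8},
   {x, y}, {t, 0, 20}, PrecisionGoal -> 10, AccuracyGoal -> 10];
ParametricPlot[{xSol[t], ySol[t]}, {t, 0, 20}]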

Here is a plot of the magnitude of the resulting force.

Plot of the magnitude of the resulting force

And here is a plot of the direction field of the force. The dark red dots symbolize the positions of the point masses.

Plot of the direction field of the force

How much does the field strength of the periodic array differ from the field strength of a single point mass? The following graphic shows the relative difference. On the horizontal and vertical lines in the middle of the rows and columns of the point masses, the difference is maximal. Due to the singularity of the force at the point masses, the force of a single mass point and the one of the lattice become identical in the vicinity of a point mass.

Difference between field strength of the periodic array and the field strength of a single point mass

The next plot shows the direction field of the difference between a single point mass and the periodized version.

Plot showing the direction field of the difference between a single point mass and the periodized version

Once we have the force field, inverting the relation F(r) = −∇V(r) numerically allows us (because the force is obviously conservative) to calculate the potential surface of the infinite square array of point masses.

Calculating the potential surface of the infinite square array of point masses
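Here is a sketch of that inversion: because the field is conservative, the potential difference between a reference point {x0, y0} and a point {x, y} is the path-independent line integral −∫ F·dl, e.g. along a straight segment (again assuming forceXY[x, y] returns {Fx, Fy}):

(* potential difference V[{x, y}] - V[{x0, y0}] as a line integral of the force *)
potentialDifference[{x_, y_}, {x0_, y0_}] :=
 -NIntegrate[
   forceXY[x0 + s (x - x0), y0 + s (y - y0)] . {x - x0, y - y0},
   {s, 0, 1}]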

Now let us look at actual orbits in the potential shown in the last two images.

The following Manipulate allows us to interactively explore the motion of a particle in the gravitational field of the lattice of point masses.

Manipulate exploring the motion of a particle in the gravitational field of the lattice of point masses

The relatively large (five-dimensional) space of possible orbits becomes more manageable if we look especially for some symmetric orbits, e.g. we enforce that the orbit crosses the line x = 0 or x = 1/2 horizontally. Many orbits that one would intuitively expect to exist that move around 1, 2, or 3 point masses fall into this category. We use a large 2D slider to allow a more fine-grained control of the initial conditions.

Manipulate of orbits with restricted initial conditions

Another highly symmetric situation is a starting point along the diagonal with an initial velocity perpendicular to it.

Manipulate of orbits with restricted initial conditions

Finding periodic orbits

For the desired motions, we demand that after a period the particle either comes back to its original position with its original velocity vector or has moved to an equivalent lattice position.

Given an initial position, velocity, mass, and approximate period, it is straightforward to write a simple root-finding routine to zoom into an actual periodic orbit. We implement this simply by solving the differential equation for a time greater than the approximate orbit time, and find the time where the sum |x_i − x_f| + |v_i − v_f| of the differences between the initial and final positions (x_i, x_f) and the initial and final velocities (v_i, v_f) is minimal. The function findPeriodicOrbit carries out the search. This method is well suited for orbits whose periods are not too long. This will yield a nice collection of orbits. For longer orbits, errors in the solution of the differential equation will accumulate, and more specialized methods could be employed, e.g. relaxation methods.

Given some starting values, findPeriodicOrbit attempts to find a periodic orbit, and returns the corresponding initial position and velocity.

findPeriodicOrbit attempting to find a periodic orbit

Given initial conditions and a maximal solution time, the function minReturnData determines the exact time at which the differences between the initial and final positions and velocities are minimal. The most time-consuming step in the search process is the solution of the differential equation. To avoid repeating work, we do not include the period time as an explicit search variable, but rather solve the differential equation for a fixed time T and then carry out a one-dimensional minimization to find the time at which the sum of the position and velocity differences becomes minimal.

One-dimensional minimization to find the time at which the sum of the position and velocity differences becomes minimal
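A minimal sketch of this two-step idea (solve once, then minimize over the return time; the details, and the reuse of the component functions forceX and forceY from the earlier sketch, are my reconstruction):

minReturnData[{x0_, y0_}, {vx0_, vy0_}, T_] :=
 Module[{xSol, ySol, dist},
  {xSol, ySol} = NDSolveValue[
    {x''[t] == forceX[x[t], y[t]], y''[t] == forceY[x[t], y[t]],
     x[0] == x0, y[0] == y0, x'[0] == vx0, y'[0] == vy0},
    {x, y}, {t, 0, T}];
  (* phase-space distance between the current and the initial state *)
  dist[t_?NumericQ] := Norm[{xSol[t] - x0, ySol[t] - y0}] +
    Norm[{xSol'[t] - vx0, ySol'[t] - vy0}];
  FindMinimum[dist[t], {t, 0.8 T, 0.1 T, T}]]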

As the search will take about a minute per orbit, we monitor the current orbit shape to entertain us while we wait. Typically, after a couple hundred steps we either find a periodic orbit, or we know that we failed to find a periodic orbit. In the latter case, the local minimum of the function to be minimized (the sum of the norms of initial versus final positions and velocities) has a finite value and so does not correspond to a periodic orbit.

Here is a successful search for a periodic orbit. We get the initial conditions for the search either interactively from the above Manipulate or from a random search that selects viable candidate initial conditions.

Search for a periodic orbit

Here is a successful search for an orbit that ends at an equivalent lattice position.

Search for an orbit that ends at an equivalent lattice position

So what kind of periodic orbits can we find? As the result of solving the equations of motion about half a million times with random initial positions, velocities, masses, and solution times, we find the following types of solutions:

    1. Closed orbits around a single point mass

    2. Closed orbits around a finite number (≥ 2) of point masses

    3. “Traveling orbits” that don’t return to the initial position but to an equivalent position in another lattice cell

(In this classification, we ignore “head-on” collision orbits and separatrix-like orbits along the symmetry lines between rows and columns of point masses.)

Here is a collection of initial values and periods for periodic orbits found in the carried-out searches. The small summary table gives the counts of the orbits found.

Initial values and periods for periodic orbits found in the carried-out searches
Summary table giving the counts of the orbits found

Using minReturnDistance, we can numerically check the accuracy of the orbits. At the “return time” (the last element of the sublists of orbitData), the sum of the differences of the position and velocity vectors is quite small.

Using minReturnDistance to numerically check the accuracy of orbits

Now let’s make some graphics showing the orbits from the list orbitData using the function showOrbit.

Making graphics showing orbits using showOrbit from the list orbitData

1. Orbits around a single point mass

In the simplest case, these are just topologically equivalent to a circle. This type of solution is not unexpected; for initial conditions close to a point mass, the influence of the other lattice point masses will be small.

Orbits around a single point mass

2. Orbits around two point masses

In the simplest case, these are again topologically equivalent to a circle, but more complicated orbits exist. Here are some examples.

Orbits around two point masses

3. “Traveling orbits” (open orbits) that don’t return to the initial position but to an equivalent position in another lattice cell

These orbits come in self-crossing and non-self-crossing versions. Here are some examples.

Self-crossing and non-self-crossing versions of orbits

Individually, the open orbits look quite different from the closed ones. When plotting the continuations of the open orbits, their relation to the closed orbits becomes much more obvious.

Plotting continuations of open orbits

For instance, the following open orbit reminds me of the last closed orbit.

Showing multiple closed orbits

The last graphic suggests that closed orbits around a finite number of points could become traveling orbits after small perturbations by “hopping” from a closed orbit around a single or finite number of point masses to the next single or finite group of point masses.

But there are also situations where one intuitively might expect closed orbits to exist, but numerically one does not succeed in finding a precise solution. One example is a simple rounded-corner, triangle-shaped orbit that encloses three point masses.

Simple rounded-corner, triangle-shaped orbit that encloses three point masses

Showing 100 orbits with slightly disturbed initial conditions gives an idea of why a smooth match of the initial point and the final point does not work out. While we can make the initial and final point match, the velocity vectors do not agree in this case.

Family of nearly closed orbits

Another orbit that seems not to exist, although one can make the initial and final points and velocities match pretty well, is the following double-slingshot orbit. But reducing the residue further by small modifications of the initial position and velocity seems not to be possible.

Data for the slingshot orbit

Here are a third and fourth type of orbit that nearly match up, but the function findPeriodicOrbit can’t find parameters that bring the difference below 10⁻⁵.

Data for the coathanger orbit

Here are two graphics of the last two orbits.

Three examples of open orbits

There are many more periodic orbits. The above is just a small selection of all possible orbits. Exploring a family of trajectories at once shows the wide variety of orbits that can arise. We let all orbits start at the line segment {{x, 1/2}|-1/2 ≤ x ≤ 1/2} with an angle α(x) = 𝜋(1/2-|x|).

Manipulate of families of orbits

If we plot sufficiently many orbits and select the ones that do not move approximately uniformly, we can construct an elegant gravitational crystal church.

Gravitational crystal church

The last image nicely shows the “branching” of the trajectories at point masses where the overall shape of the trajectory changes discontinuously. Displaying the flow in the three-dimensional x-t-y space shows the branching even better.

Displaying the flow in the three-dimensional x-t-y space

General trajectories

We were looking for concrete periodic orbits in the field of an infinite square array of point masses. For more general results on trajectories in such a potential, see Knauf. Knauf proves that the behavior of average orbits is diffusive. Periodic orbits are the exception in the space of initial conditions. Almost all orbits will wander randomly around. So let’s have a quick look at a larger number of orbits. The following calculation will take about six hours, and evaluates the final points and velocities of masses starting at {x,0.5} with a velocity {0,v} on a dense x-v grid with 0 ≤ x ≤ 1 and 1 ≤ v ≤ 3.

Calculating diffusive trajectories
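A sketch of such a scan, much coarser than the actual six-hour run (and again using the helper functions from the earlier sketches):

(* final position and velocity of a particle starting at {x0, 0.5}
   with velocity {0, v0} *)
finalState[x0_?NumericQ, v0_?NumericQ] :=
 Module[{xSol, ySol},
  {xSol, ySol} = NDSolveValue[
    {x''[t] == forceX[x[t], y[t]], y''[t] == forceY[x[t], y[t]],
     x[0] == x0, y[0] == 0.5, x'[0] == 0., y'[0] == v0},
    {x, y}, {t, 0, 10}];
  {{xSol[10], ySol[10]}, {xSol'[10], ySol'[10]}}];

finalStates = ParallelTable[finalState[x0, v0], {x0, 0., 1., 1/20}, {v0, 1., 3., 1/10}];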

If we display all final positions, we get the following graphic, which gives a feeling for the theoretically predicted diffusive behavior of the orbits.

Displaying all final positions

While diffusive on average, since we are solving a differential equation, we expect the final positions to depend (at least piecewise) continuously on the initial conditions. So we burn another six hours of CPU time and calculate the final positions of 800,000 test particles that start radially from a circle around a lattice point mass. (Because of the symmetry of the force field, we have only 100,000 different initial value problems to solve numerically.)

Calculating diffusive trajectories

Here are the final positions of the 800,000 test particles. We again see how nicely the point masses of the lattice temporarily deflect the test masses.

Points of the final positions of the 800,000 points

We repeat a variation of the last calculation and determine the minimum value of |x_i − x_f| + |v_i − v_f| in the x-v plane, where x and v are the initial conditions of the particle starting at y = 0.5 perpendicularly upward.

Calculation of phase-space differences

We solve the equations of motion for 0 ≤ t ≤ 2.5 and display the value of the minimum of |x_i − x_f| + |v_i − v_f| in the time range 0.5 ≤ t ≤ 2.5. If the minimum occurs for t=0.5, we use a light gray color; if the minimum occurs for t=2.5, a dark gray color; and for 0.5 < t < 2.5, we color the sum of norms from pink to green. Not unexpectedly, the distance sum shows a fractal-like behavior, meaning the periodic orbits form a thinly spaced subset of initial conditions.

Visualization of phase-space distances

A (2D) grain of salt

Now that we have the force field of a square array of point masses, we can also use this force to model electrostatic problems, as these obey the same force law.

Identical charges would form a Wigner crystal, which is hexagonal. Two interlaced square lattices of opposite charges would make a model for a 2D NaCl salt crystal.

2D NaCl salt crystal

By summing the (signed) forces of the four sublattices, we can again calculate a resulting force of a test particle.

Calculating the resulting force of a test particle
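A sketch of this signed superposition (assuming lattice constant 1, positive charges on the integer lattice and on its {1/2, 1/2} translate, and negative charges on the two remaining sublattices; the overall sign is a convention):

(* signed sum of the four shifted square-sublattice forces *)
saltForceXY[x_, y_] :=
 forceXY[x, y] + forceXY[x - 1/2, y - 1/2] -
  forceXY[x - 1/2, y] - forceXY[x, y - 1/2]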

The trajectories of a test charge become more irregular as compared with the gravitational model considered above. The following Manipulate allows us to get a feeling for the resulting trajectories. The (purple) test charge is attracted to the green charges of the lattice and repelled from the purple charges of the lattice.

Trajectories of a test charge becoming more irregular

The resulting gravitational crystals

We can now combine all the elements together to visualize the resulting gravitational crystals. We plot the resulting lattice movements in the reference frame of one lattice (the blue lattice). The red lattice moves with respect to the blue lattice.

Lattice point orbits in gravitational crystals

Summary

Using detailed numerical calculations, we verified the existence of the suggested gravitational crystals. For the simplest case, the two square lattices, many periodic orbits of small period were found. More extensive searches would surely return more, longer-period solutions.

Using the general form of the Poisson summation formula for general lattices, the above calculations could be extended to different lattices, e.g. hexagonal lattices or 3D lattices.

Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.

An Exact Value for the Planck Constant: Why Reaching It Took 100 Years http://blog.wolfram.com/2016/05/19/an-exact-value-for-the-planck-constant-why-reaching-it-took-100-years/ http://blog.wolfram.com/2016/05/19/an-exact-value-for-the-planck-constant-why-reaching-it-took-100-years/#comments Thu, 19 May 2016 20:53:24 +0000 Michael Trott http://blog.internal.wolfram.com/?p=30964
Blog communicated on behalf of Jean-Charles de Borda.

Some thoughts for World Metrology Day 2016

Please allow me to introduce myself
I’m a man of precision and science
I’ve been around for a long, long time
Stole many a man’s pound and toise
And I was around when Louis XVI
Had his moment of doubt and pain
Made damn sure that metric rules
Through platinum standards made forever
Pleased to meet you
Hope you guess my name

Introduction and about me

In case you can’t guess: I am Jean-Charles de Borda, sailor, mathematician, scientist, and member of the Académie des Sciences, born on May 4, 1733, in Dax, France. Two weeks ago would have been my 283rd birthday. This is me:

Jean-Charles de Borda

In my hometown of Dax there is a statue of me. Please stop by when you visit. In case you do not know where Dax is, here is a map:

Map of Dax and statue of Jean-Charles de Borda

In Europe when I was a boy, France looked basically like it does today. We had a bit less territory on our eastern border. On the American continent, my country owned a good fraction of land:

France and French territory in America in 1733

I led a diverse earthly life. At 32 years old I carried out a lot of military and scientific work at sea. As a result, in my forties I commanded several ships in the Seven Years’ War. Most of the rest of my life I devoted to the sciences.

But today nobody even knows where my grave is, as my physical body died on February 19, 1799, in Paris, France, in the upheaval of the French Revolution. (Of course, I know where it is, but I can’t communicate it anymore.) My name is the twelfth listed on the northeast side of the Eiffel Tower:

Borda listed on the northeast side of the Eiffel Tower

Over the centuries many of my fellow Frenchman who joined me up here told me that I deserved a place in the Panthéon. But you will not find me there, nor at the Père Lachaise, Montparnasse, or Montmartre cemeteries.

But this is not why I still cannot rest in peace. I am a humble man; it is the kilogram that keeps me up at night. But soon I will be able to rest in peace at night for all time and approach new scientific challenges.

Let me tell you why I will soon find a good night’s sleep.

All my life, I was into mathematics, geometry, physics, and hydrology. And overall, I loved to measure things. You might have heard of substitution weighing (also called Borda’s method)—yes, this was my invention, as was the Borda count method. I also substantially improved the repeating circle. Here is where the story starts. The repeating circle was crucial in making a high-precision determination of the size of the Earth, which in turn defined the meter. (A good discussion of my circle can be found here.)

Repeating circle

I lived in France when it was still a monarchy. Times were difficult for many people—especially peasants—partially because trade and commerce were difficult due to the lack of common measures all over the country. If you enjoy reading about history, I highly recommend Kula’s Measures and Men to understand the weights and measurements situation in France in 1790. The state of weights and measures was similar in other countries; see for instance Johann Georg Tralles’ report about the situation in Switzerland.

In August 1790, I was made the chairman of the Commission of Weights and Measures as a result of a 1789 initiative from Louis XVI. (I still find it quite miraculous that 1,000 years after Charlemagne’s initiative to unify weights and measures, the next big initiative in this direction would be started.) Our commission created the metric system that today is the International System of Units, often abbreviated as SI (le Système international d’unités in French).

In the commission were, among others, Pierre-Simon Laplace (think the Laplace equation), Adrien-Marie Legendre (Legendre polynomials), Joseph-Louis Lagrange (think Lagrangian), Antoine Lavoisier (conservation of mass), and the Marquis de Condorcet. (I always told Adrien-Marie that he should have some proper portrait made of him, but he always said he was too busy calculating. But for 10 years now, the politician Louis Legendre’s portrait is no longer used in math books in place of Adrien-Marie’s. Over the last decades, Adrien-Marie befriended Jacques-Louis David, and Jacques-Louis has made a whole collection of paintings of Adrien-Marie; unfortunately, mortals will never see them.) Lagrange, Laplace, Monge, Condorcet, and I were on the original team. (And, in the very beginning, Jérôme Lalande was also involved; later, some others were as well, such as Louis Lefèvre‑Gineau.)

Portraits of Pierre-Simon Laplace, Adrien-Marie Legendre, Joseph-Louis Lagrange, Antoine Lavoisier, and Marquis de Condorcet

Three of us (Monge, Lagrange, and Condorcet) are today interred or commemorated at the Panthéon. It is my strong hope that Pierre-Simon is one day added; he really deserves it.

As I said before, things were difficult for French citizens in this era. Laplace wrote:

The prodigious number of measures in use, not only among different people, but in the same nation; their whimsical divisions, inconvenient for calculation, and the difficulty of knowing and comparing them; finally, the embarrassments and frauds which they produce in commerce, cannot be observed without acknowledging that the adoption of a system of measures, of which the uniform divisions are easily subjected to calculation, and which are derived in a manner the least arbitrary, from a fundamental measure, indicated by nature itself, would be one of the most important services which any government could confer on society. A nation which would originate such a system of measures, would combine the advantage of gathering the first fruits of it with that of seeing its example followed by other nations, of which it would thus become the benefactor; for the slow but irresistible empire of reason predominates at length over all national jealousies, and surmounts all the obstacles which oppose themselves to an advantage, which would be universally felt.

All five of the mathematicians (Monge, Lagrange, Laplace, Legendre, and Condorcet) have made historic contributions to mathematics. Their names are still used for many mathematical theorems, structures, and operations:

Monge, Lagrange, Laplace, Legendre, and Condorcet's contributions to mathematics
Monge, Lagrange, Laplace, Legendre, and Condorcet's contributions to mathematics

In 1979, Ruth Inez Champagne wrote a detailed thesis about the influence of my five fellow citizens on the creation of the metric system. For Legendre’s contribution especially, see C. Doris Hellman’s paper. Today it seems to me that most mathematicians no longer care much about units and measures and that physicists are the driving force behind advancements in units and measures. But I did like Theodore P. Hill’s arXiv paper about the method of conflations of probability distributions that allows one to consolidate knowledge from various experiments. (Yes, before you ask, we do have instant access to arXiv up here. Actually, I would say that the direct arXiv connection has been the greatest improvement here in the last millennium.)

Our task was to make standardized units of measure for time, length, volume, and mass. We needed measures that were easily extensible, and could be useful for both tiny things and astronomic scales. The principles of our approach were nicely summarized by John Quincy Adams, Secretary of State of the United States, in his 1821 book Report upon the Weights and Measures.

Excerpt from John Quincy Adams' Report upon Weights and Measures

Originally we (we being the metric men, as we call ourselves up here) had suggested just a few prefixes: kilo-, deca-, hecto-, deci-, centi-, milli-, and the no-longer-used myria-. In some old books you can find the myria- units.

We had the idea of using prefixes quite early in the process of developing the new measurements. Here are our original proposals from 1794:

Excerpts of original proposals from 1794

Side note: in my time, we also used the demis and the doubles, such as a demi-hectoliter (=50 liters) or a double dekaliter (=20 liters).

As inhabitants of the twenty-first century know, times, lengths, and masses are measured in physics, chemistry, and astronomy over ranges spanning more than 50 orders of magnitude. And the units we created in the tumultuous era of the French Revolution stood the test of time:

Orders of magnitude plots for length and area

Orders of magnitude plots for length Orders of magnitude plot for area

In the future, the SI might need some more prefixes. In a recent LIGO discovery, the length of the interferometer arms changed on the order of 10 yoctometers. Yoctogram resolution mass sensors exist. One yoctometer equals 10–24 meter. Mankind can already measure tiny forces on the order of zeptonewtons.

On the other hand, astronomy needs prefixes larger than 1024. One day, these prefixes might become official.

Proposed prefixes larger than 10^24

I am a man of strict rules, and it drives me nuts when I see people in the twenty-first century not obeying the rules for using SI prefixes. Recently I saw somebody writing on a whiteboard that a year is pretty much exactly 𝜋 dekamegaseconds (𝜋 daMs):

1 year approximately pi dekamegaseconds

While it’s a good approximation (only 0.4% off), when will this person learn that one shouldn’t concatenate prefixes?

The technological progress of mankind has occurred quickly in the last two centuries. And mega-, giga-, tera- or nano-, pico-, and femto- are common prefixes in the twenty-first century. Measured in meters per second, here is the probability distribution of speed values used by people. Some speeds (like speed limits, the speed of sound, or the speed of light) are much more common than others, but many local maxima can be found in the distribution function:

Probability distribution of speed values used by people

Here is the report we delivered in March of 1791 that started the metric system and gave the conceptual meaning of the meter and the kilogram, signed by myself, Lagrange, Laplace, Monge, and Concordet (now even available through what the modern world calls a “digital object identifier,” or DOI, like 10.3931/e-rara-28950):

Report from 1791 that started the metric system and gave conceptual meaning of the meter and kilogram

Today most people think that base 10 and the meter, second, and kilogram units are intimately related. But only on October 27, 1790, did we decide to use base 10 for subdividing the units. We were seriously considering a base-12 subdivision, because the divisibility by 2, 3, 4, and 6 is a nice feature for trading objects. It is clear today, though, that we made the right choice. Lagrange’s insistence on base 10 was the right thing. At the time of the French Revolution, we made no compromises. On November 5, 1792, I even suggested changing clocks to a decimal system. (D’Alambert had suggested this in 1754; for the detailed history of decimal time, see this paper.) Mankind was not ready yet; maybe in the twenty-first century decimal clocks and clock readings would finally be recognized as much better than 24 hours, 60 minutes, and 60 seconds. I loved our decimal clocks—they were so beautiful. So it’s a real surprise to me today that mankind still divides the angle into 90 degrees. In my repeating circle, I was dividing the right angle into 100 grades.

We wanted to make the new (metric) units truly equal for all people, not base them, for instance, on the length of the forearm of a king. Rather, “For all time, for all people” (“À tous les temps, à tous les peuples”). Now, in just a few years, this dream will be achieved.

And I am sure there will come the day where Mendeleev’s prediction (“Let us facilitate the universal spreading of the metric system and thus assist the common welfare and the desired future rapprochement of the peoples. It will come not yet, slowly, but surely.”) will come true even in the three remaining countries of the world that have not yet gone metric:

Countries that have not gone metric

The SI units have been legal for trade in the USA since the mid-twentieth century, when United States customary units became derived from the SI definitions of the base units. Citizens can choose which units they want for trade.

We also introduced the decimal subdivision of money, and our franc was in use from 1793 to 2002. At least today all countries divide their money on the basis of base 10—no coins with label 12 are in use anymore. Here is the coin label breakdown by country:

Coin label breakdown by country

We took the “all” in “all people” quite seriously, and worked with our archenemy Britain and the new United States (through Thomas Jefferson personally) together to make a new system of units for all the major countries in my time. But, as is still so often the case today, politics won over reason.

I died on February 19, 1799, just a few months before the our group’s efforts. On June 22, 1799, my dear friend Laplace gave a speech about the finished efforts to build new units of length and mass before the new prototypes were delivered to the Archives of the Republic (where they are still today).

In case the reader is interested in my eventful life, Jean Mascart wrote a nice biography about me in 1919, and it is now available as a reprint from the Sorbonne.

From the beginnings of the metric system to today

Two of my friends, Jean Baptiste Joseph Delambre and Pierre Méchain, were sent out to measure distances in France and Spain from mountain to mountain to define the meter as one ten-millionth of the distance from the North Pole to the equator of the Earth. Historically, I am glad the mission was approved. Louis XVI was already under arrest when he approved the financing of the mission. My dear friend Lavoisier called their task “the most important mission that any man has ever been charged with.”

Pierre Méchain and Jean Baptiste Joseph Delambre

If you haven’t done so, you must read the book The Measure of All Things by Ken Alder. There is even a German movie about the adventures of my two old friends. Equipped with a special instrument that I had built for them, they did the work that resulted in the meter. Although we wanted the length of the meter to be one ten-millionth of the length of the half-meridian through Paris from pole to equator, I think today this is a beautiful definition conceptually. That the Earth isn’t quite as round as we had hoped for we did not know at the time, and this resulted in a small, regrettable error of 0.2 mm due to a miscalculation of the flattening of the Earth. Here is the length of the half-meridian through Paris, expressed through meters along an ellipsoid that approximates the Earth:

Length of the half-meridian through Paris, expressed through meters along an ellipsoid that approximates the Earth

If they had elevation taken into account (which they did not do—Delambre and Méchain would have had to travel the whole meridian to catch every mountain and hill!), and had used 3D coordinates (meaning including the elevation of the terrain) every few kilometers, they would have ended up with a meter that was 0.4 mm too short:

 Length of the meridian meter when taking elevation into account

Here is the elevation profile along the Paris meridian:

Elevation along the Paris meridian

And the meter would be another 0.9 mm longer if measured with a yardstick the length of a few hundred meters:

Length of the meridian meter when taking detailed elevation into account

Because of the fractality of the Earth’s surface, an even smaller yardstick would have given an even longer half-meridian.

It’s more realistic to follow the sea-level height. The difference between the length of the sea-level meridian meter and the ellipsoid approximation meter is just a few micrometers:

Difference between the length of the sea-level meridian and the ellipsoid approximation meter

But at least the meridian had to go through Paris (not London, as some British scientists of my time proposed). But anyway, the meridian length was only a stepping stone to make a meter prototype. Once we had the meter prototype, we didn’t have to refer to the meridian anymore.

Here is a sketch of the triangulation carried out by Pierre and Jean Baptiste in their adventurous six-year expedition. Thanks to the internet and various French digitization projects, the French-speaking reader interested in metrology and history can now read the original results online and reproduce our calculations:

Reproducing the triangulation carried out by Pierre and Jean Baptiste

The part of the meridian through Paris (and especially through the Paris Observatory, marked in red) is today marked with the Arago markers—do not miss them during your next visit to Paris! François Arago remeasured the Paris meridian. After Méchain joined me up here in 1804, Laplace got the go-ahead (and the money) from Napoléon to remeasure the meridian and to verify and improve our work:

Plotting the meridian through Paris and the Arago markers

Plotting the meridian through Paris

The second we derived from the length of a year. And the kilogram as a unit of mass we wanted to (and did) derive from a liter of water. If any liquid is special, it is surely water. Lavoisier and I had many discussions about the ideal temperature. The two temperatures that stand out are 0 °C and
4 °C. Originally we were thinking about 0 °C, as with ice water it is easy to see. But because of the maximal density of water at 4 °C, we later thought that would be the better choice. The switch to
4 °C was suggested by Louis Lefèvre-Gineau. The liter as a volume in turn we defined as one-tenth of a meter cubed. As it turns out, compared with high-precision measurements of distilled water,
1 kg equals the mass of 1.000028 dm3 of water. The interested reader can find many more details of the process of the water measurements here and about making the original metric system here. A shorter history in English can be found in the recent book by Williams and the ten-part series by Chisholm.

I don’t want to brag, but we also came up with the name “meter” (derived from the Greek metron and the Latin metrum), which we suggested on July 11 of 1792 as the name of the new unit of length. And then we had the area (=100 m2) and the stere (=1 m3).

And I have to mention this for historical accuracy: until I entered the heavenly spheres, I always thought our group was the first to carry out such an undertaking. How amazed and impressed I was when shortly after my arrival up here, I-Hsing and Nankung Yiieh introduced themselves to me and told me about their expedition from the years 721 to 725, more than 1,000 years before ours, to define a unit of length.

I am so glad we defined the meter this way. Originally the idea was to define a meter through a pendulum of proper length as a period of one second. But I didn’t want any potential change in the second to affect the length of the meter. While dependencies will be unavoidable in a complete unit system, they should be minimized.

Basing the meter on the Earth’s shape and the second on the Earth’s movement around the Sun seemed like a good idea at the time. Actually, it was the best idea that we could technologically realize at this time. We did not know how tides and time changed the shape of the Earth, or how continents drift apart. But we believed in the future of mankind, in ever-increasing measurement precision, but we did not know what concretely would change. But it was our initial steps for precisely measuring distances in France that were carried out. Today we have high-precision geo potential maps as high-order series of Legendre polynomials:

GeogravityModelData for the astronomical observatory in Paris

With great care, the finest craftsmen of my time melted platinum, and we forged a meter bar and a kilogram. It was an exciting time. Twice a week I would stop by Janety’s place when he was forging our first kilograms. Melting and forming platinum was still a very new process. And Janety, Louis XVI’s goldsmith, was a true master of forming platinum—to be precise, a spongelike eutectic made of platinum and arsenic. Just a few years earlier, on June 6, 1782, Lavoisier showed the melting of platinum in a hydrogen-oxygen flame to (the future) Tsar Paul I at a garden party at Versailles; Tsar Paul I was visiting Marie Antoinette and Loius XVI. And Étienne Lenoir made our platinum meter, and Jean Nicolas Fortin our platinum kilogram. For the reader interested in the history of platinum, I recommend McDonald’s and Hunt’s book.

Platinum is a very special metal; it has a high density and is chemically very inert. It is also not as soft as gold. The best kilogram realizations today are made from a platinum-iridium mixture (10% iridium), as adding iridium to platinum does improve its mechanical properties. Here is a comparison of some physical characteristics of platinum, gold, and iridium:

Comparison of physical characteristics of platinum, gold, and iridium

This sounds easy, but at the time the best scientists spent countless hours calculating and experimenting to find the best materials, the best shapes, and the best conditions to define the new units. But both the new meter bar and the new kilogram cylinder were macroscopic bodies. And the meter has two markings of finite width. All macroscopic artifacts are difficult to transport (we developed special travel cases); they change by very small amounts over a hundred years through usage, absorption, desorption, heating, and cooling. In the amazing technological progress of the nineteenth and twentieth centuries, measuring time, mass, and length with precisions better than one in a billion has become possible. And measuring time can even be done a billion times better.

I still vividly remember when, after we had made and delivered the new meter and the mass prototypes, Lavoisier said, “Never has anything grander and simpler and more coherent in all its parts come from the hands of man.” And I still feel so today.

Our goal was to make units that truly belonged to everyone. “For all time, for all people” was our motto. We put copies of the meter all over Paris to let everybody know how long it was. (If you have not done so, next time you visit Paris, make sure to visit the mètre étalon near to the Luxembourg Palace.) Here is a picture I recently found, showing an interested German tourist studying the history of one of the few remaining mètres étalons:

German tourist studying the history of one of the few remaining mètres étalons

It was an exciting time (even if I was no longer around when the committee’s work was done). Our units served many European countries well into the nineteenth and large parts of the twentieth century. We made the meter, the second, and the kilogram. Four more base units (the ampere, the candela, the mole, and the kelvin) have been added since our work. And with these extensions, the metric system has served mankind very well for 200+ years.

How the metric system took off after 1875, the year of the Metre Convention, can be seen by plotting how often the words kilogram, kilometer, and kilohertz appear in books:

How often the words kilogram, kilometer, and kilohertz appear in books

We defined only the meter, the seond, the liter, and the kilogram. Today many more name units belong to the SI: becquerel, coulomb, farad, gray, henry, hertz, joule, katal, lumen, lux, newton, ohm, pascal, siemen, sievert, tesla, volt, watt, and weber. Here is a list of the dimensional relations (no physical meaning implied) between the derived units:

List of the dimensional relations between the derived units

List of the dimensional relations between the derived units

Many new named units have been added since my death, often related to electrical and magnetic phenomena that were not yet known when I was alive. And although I am a serious person in general, I am often open to a joke or a pun—I just don’t like when fun is made of units. Like Don Knuth’s Potrzebie system of units, with units such as the potrzebie, ngogn, blintz, whatmeworry, cowznofski, vreeble, hoo, and hah. Not only are their names nonsensical, but so are their values:

Portzerbies and blintz units

Or look at Max Pettersson’s proposal for units for biology. The names of the units and the prefixes might sound funny, but for me units are too serious a subject to make fun of:

Max Pettersson's proposal for units for biology

These unit names do not even rhyme with any of the proper names:

Words that rhyme with meter
Words that rhyme with mile

To reiterate, I am all in favor of having fun, even with units, but it must be clear that it is not meant seriously:

Converting humorous units of measurement

Or explicitly nonscientific units, such as helens for beauty, puppies for happiness, or darwins for fame are fine with me:

Measuring beauty in helens

Measuring happiness in puppies

Measuring fame in darwins

I am so proud that the SI units are not just dead paper symbols, but tools that govern the modern world in an ever-increasing way. Although I am not a comics guy, I love the recent promotion of the base units to superheroes by the National Institute of Standards and Technology:

Base units to superheroes

Base units to superheroes

Note that, to honor the contributions of the five great mathematicians to the metric system, the curves in the rightmost column of the unit-representing characters are given as mathematical formulas, e.g. for Dr. Kelvin we have the following purely trigonometric parametrization:

Purely trigonometric parametrization of Dr. Kelvin

So we can plot Dr. Kelvin:

Plotting Dr. Kelvin

Having the characters in parametric form is handy: when my family has reunions, the little ones’ favorite activity is coloring SI superheroes. I just print the curves, and then the kids can go crazy with the crayons. (I got this idea a couple years ago from a coloring book by the NCSA.)

Printing randomly colored curves

And whenever a new episode comes out, all us “measure men” (George Clooney, if you see this: hint, hint for an exciting movie set in the 1790s!) come together to watch it. As you can imagine, the last episode is our all-time favorite. Rumor has it up here that there will be a forthcoming book The Return of the Metrologists (2018 would be a perfect year) complementing the current book.

And I am glad to see that the importance of measuring and the underlying metric system is in modern times honored through the World Metrology Day on May 20, which is today.

In my lifetime, most of what people measured were goods: corn, potatoes, and other foods, wine, fabric, and firewood, etc. So all my country really needed were length, area, volume, angles, and, of course, time units. I always knew that the importance of measuring would increase over time. But I find it quite remarkable that only 200 years after I entered the heavenly spheres, hundreds and hundreds of different physical quantities are measured. Today even the International Organization for Standardization (ISO) lists, defines, and describes what physical quantities to use. Below is an image of an interactive Demonstration (download the notebook at the bottom of this post to interact with it) showing graphically the dimensions of physical quantities for subsets of selectable dimensions. First select two or three dimensions (base units). Then the resulting graphics show spheres with sizes proportional to the number of different physical quantities with these dimensions. Mouse over the spheres in the notebook to see the dimensions. For example, with “meter”, “second”, and “kilogram” checked, the diagram shows the units of physical quantities like momentum (kg1 m1 s–1) or energy (kg2 m1 s–2):

Physical quantities of given dimensions

Here is a an excerpt of the code that I used to make these graphics. These are all physical quantities that have dimensions L2 M1 T–1. The last one is the slightly exotic electrodynamic observable
DESCRIPTION:

Excerpt of code from physical quantities of given dimensions demonstration

Today with smart phones and wearable devices, a large number of physical quantities are measured all the time by ordinary people. “Measuring rules,” as I like to say. Or, as my (since 1907) dear friend William Thomson liked to say:

… when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.

Here is a graphical visualization of the physical quantities that are measured by the most common measurement devices:

Graphical visualization of the physical quantities that are measured by the most common measurement devices

Electrical and magnetic phenomena were just starting to become popular when I was around. Electromagnetic effects related to physical quantities that are expressed through the electric current only become popular much later:

Electrical and magnetic phenomena timeline

Electrical and magnetic phenomena timeline

I remember how excited I was when in the second half of the nineteenth century and the beginning of the twentieth century the various physical quantities of electromagnetism were discovered and their connections were understood. (And, not to be forgotten: the recent addition of memristance.) Here is a diagram showing the most important electric/magnetic physical quantities qk that have a relation of the form qk=qi qj with each other:

Diagram showing the most important electric/magnetic physical quantities q sub k, with relation of the form q subk = q sub i, q sub j, with each other

On the other hand, I was sure that temperature-related phenomena would soon be fully understood after my death. And indeed just 25 years later, Carnot proved that heat and mechanical work are equivalent. Now I also know about time dilation and length contraction due to Einstein’s theories. But mankind still does not know if a moving body is colder or warmer than a stationary body (or if they have the same temperature). I hear every week from Josiah Willard about the related topic of negative temperatures. And recently, he was so excited about a value for a maximal temperature for a given volume V expressed through fundamental constants:

Maximal temperature for a given volume V expressed through fundamental constants

For one cubic centimeter, the maximal temperature is about 5PK:

Maximal temperature for once cubic centimeter

The rise of the constants

Long after my physical death, some of the giants of physics of the nineteenth century and early twentieth century, foremost among them James Clerk Maxwell, George Johnstone Stoney, and Max Planck (and Gilbert Lewis) were considering units for time, length, and mass that were built from unchanging properties of microscopic particles and the associated fundamental constants of physics (speed of light, gravitational constant, electron charge, Planck constant, etc.):

James Clerk Maxwell, George Johnstone Stoney, and Max Planck

Maxwell wrote in 1870:

Yet, after all, the dimensions of our Earth and its time of rotation, though, relative to our present means of comparison, very permanent, are not so by any physical necessity. The earth might contract by cooling, or it might be enlarged by a layer of meteorites falling on it, or its rate of revolution might slowly slacken, and yet it would continue to be as much a planet as before.

But a molecule, say of hydrogen, if either its mass or its time of vibration were to be altered in the least, would no longer be a molecule of hydrogen.

If, then, we wish to obtain standards of length, time, and mass which shall be absolutely permanent, we must seek them not in the dimensions, or the motion, or the mass of our planet, but in the wavelength, the period of vibration, and the absolute mass of these imperishable and unalterable and perfectly similar molecules.

When we find that here, and in the starry heavens, there are innumerable multitudes of little bodies of exactly the same mass, so many, and no more, to the grain, and vibrating in exactly the same time, so many times, and no more, in a second, and when we reflect that no power in nature can now alter in the least either the mass or the period of any one of them, we seem to have advanced along the path of natural knowledge to one of those points at which we must accept the guidance of that faith by which we understand that “that which is seen was not made of things which do appear.’

At the time when Maxwell wrote this, I was already a man’s lifetime up here, and when I read it I applauded him (although at this time I still had some skepticism toward all ideas coming from Britain). I knew that this was the path forward to immortalize the units we forged in the French Revolution.

There are many physical constants. And they are not all known to the same precision. Here are some examples:

Examples of physical constants

Converting the values of constants with uncertainties into arbitrary precision numbers is convenient for the following computations. The connection between the intervals and the number of digits is given as follows. The arbitrary precision number that corresponds to v ± δ is the number v with precision –log10(2 δ/v) Conversely, given an arbitrary precision number (numbers are always convenient for computations), we can recover the v ± δ form:

Converting arbitrary precision numbers to intervals

After the exactly defined constants, the Rydberg constant with 11 known digits stands out for a very precisely known constant. On the end of the spectrum is G, the gravitational constant. At least once a month Henry Cavendish stops at my place with yet another idea on how to build a tabletop device to measure G. Sometimes his ideas are based on cold atoms, sometimes on superconductors, and sometimes on high-precision spheres. If he could still communicate with the living, he would write a comment to Nature every week. A little over a year ago Henry was worried that he should have done his measurements in winter as well in summer, but he was relieved to see that no seasonal dependence of G’s value seems to exist. The preliminary proposal deadline for the NSF’s Big G Challenge was just four days ago. I think sometime next week I will take a heavenly peek at the program officer’s preselected experiments.

There are more physical constants, and they are not all equal. Some are more fundamental than others, but for reasons of length I don’t want to get into a detailed discussion about this topic now. A good start for interested readers is Lévy-Leblond’s papers (also here), as well as this paper, this paper, and the now-classic Duff–Okun–Veneziano paper. For the purpose of making units from physical constants, the distinction of the various classes of physical constants is not so relevant.

The absolute values of the constants and their relations to heaven, hell, and Earth is an interesting subject on its own. It is a hot topic of discussion for mortals (also see this paper), as well as up here. Some numerical coincidences (?) are just too puzzling:

Absolute values of the constants and their relations to heaven, hell, and Earth

Of course, using modern mathematical algorithms, such as lattice reduction, we can indulge in the numerology of the numerical part of physical constants:

Numerology of the numerical part of physical constants

For instance, how can we form 𝜋 out of fundamental constant products?

Forming pi out of fundamental constant products

Or let’s look at my favorite number, 10, the mathematical basis of the metric system:

Forming 10 out of fundamental constant products

And given a set of constants, there are many ways to form a unit of a given unit. There are so many physical constants in use today, you have to be really interested to keep up on them. Here are some of the lesser-known constants:

Some of the lesser-known physical constants

Physical constants appear in so many equations of modern physics. Here is a selection of 100 simple physics formulas that contain the fundamental constants:

100 simple physics formulas that contain the fundamental constants

Of course, more complicated formulas also contain the physical constants. For instance, the gravitational constant appears (of course!) in the formula of the gravitational potentials of various objects, e.g. for the potential of a line segment and of a triangle:

Gravitational constant appears in formula of gravitational potentials of various objects

My friend Maurits Cornelis Escher loves these kinds of formulas. He recently showed me some variations of a few of his 3D pictures that show the equipotential surfaces of all objects in the pictures by triangulating all surfaces, then using the above formula—like his Escher solid. The graphic shows a cut version of two equipotential surfaces:

Equipotential surfaces of all objects in the pictures by triangulating all surfaces

I frequently stop by at Maurits Cornelis’, and often he has company—usually, it is Albrecht Dürer. The two love to play with shapes, surfaces, and polyhedra. They deform them, Kelvin-invert them, everse them, and more. Albrecht also likes the technique of smoothing with gravitational potentials, but he often does this with just the edges. Here is what a Dürer solid’s equipotential surfaces look like:

Dürer solid's equipotential surfaces

And here is a visualization of formulas that contain cα–hβ–Gγ in the exponent space αγβγγ. The size of the spheres is proportional to the number of formulas containing cα·hβ·Gγ; mousing over the balls in the attached notebook shows the actual formulas. We treat positive and negative exponents identically:

Visualization of formulas that contain c^alpha-h^beta-G^gamma in the exponant space of alpha-beta-gamma

One of my all-time favorite formulas is for the quantum-corrected gravitational force between two bodies, which contains my three favorite constants: the speed of light, the gravitational constants, and the Planck constant:

Quantum-corrected gravitational force between two bodies

Another of my favorite formulas is the one for the entropy of a black hole. It contains the Boltzmann constant in addition to c, h, and G:

Entropy of a black hole

And, of course, the second-order correction to the speed of light in a vacuum in the presence of an electric or magnetic field due to photon-photon scattering (ignoring a polarization-dependent constant). Even in very large electric and magnetic fields, the changes in the speed of light are very small:

Second-order correction to the speed of light in a vacuum in the presence of an electric or magnetic field

In my lifetime, we did not yet understand the physical world enough to have come up with the idea of natural units. That took until 1874, when Stoney proposed for the first time natural units in his lecture to the British Science Association. And then, in his 1906–07 lectures, Planck made use of the now-called Planck units extensively, already introduced in his famous 1900 article in Annalen der Physik. Unfortunately, both these unit systems use the gravitational constant G prominently. It is a constant that we today cannot measure very accurately. As a result, also the values of the Planck units in the SI have only about four digits:

Use of Planck units

These units were never intended for daily use because they are either far too small or far too large compared to the typical lengths, areas, volumes, and masses that humans deal with on a daily basis. But why not base the units of daily use on such unchanging microscopic properties?

(Side note: The funny thing is that in the last 20 years Max Planck again doubts if his constant h is truly fundamental. He had hoped in 1900 to derive its value from a semi-classical theory. Now he hopes to derive it from some holographic arguments. Or at least he thinks he can derive the value of h/kB from first principles. I don’t know if he will succeed, but who knows? He is a smart guy and just might be able to.)

Many exact and approximate relations between fundamental constants are known today. Some more might be discovered in the future. One of my favorites is the following identity—within a small integer factor, is the value of the Planck constant potentially related to the size of the universe?

Is the value of the Planck constant potentially related to the size of the universe?

Another one is Beck’s formula, showing a remarkable coincidence (?):

Beck's formula

But nevertheless, in my time we never thought it would be possible to express the height of a giraffe through the fundamental constants. But how amazed I was nearly ten years ago, when looking through the newly arrived arXiv preprints to find a closed form for the height of the tallest running, breathing organism derived by Don Page. Within a factor of two he got the height of a giraffe (Brachiosaurus and Sauroposeidon don’t count because they can’t run) derived in terms of fundamental constants—I find this just amazing:

Typical height of a giraffe

I should not have been surprised, as in 1983 Press, Lightman, Peierls, and Gold expressed the maximal running speed of a human (see also Press’ earlier paper):

Maximal running speed of a human

In the same spirit, I really liked Burrows’ and Ostriker’s work on expressing the sizes of a variety of astronomical objects through fundamental constants only. For instance, for a typical galaxy mass we obtain the following expression:

Expression for a typical galaxy mass

This value is within a small factor from the mass of the Milky Way:

Mass of the Milky Way

But back to units, and fast forward another 100+ years to the second half of the twentieth century: the idea of basing units on microscopic properties of objects gained more and more ground.

Since 1967, the second has been defined through 9,192,631,770 periods of the light from the transition between the two hyperfine levels of the ground state of the cesium 133, and the meter has been defined since 1983 as the distance light travels in one second when we define the speed of light as the exact quantity 299,792,458 meters per second. To be precise, this definition is to be realized at rest, at a temperature of 0 K, and at sea level, as motion, temperature, and the gravitational potential influence the oscillation period and (proper) time. Ignoring the sea-level condition can lead to significant measurement errors; the center of the Earth is about 2.5 years younger than its surface due to differences in the gravitational potential.

Now, these definitions for the unit second and meter are truly equal for all people. Equal not just for people on Earth right now, but also for in the future and far, far away from Earth for any alien. (One day, the 9,192,631,770 periods of cesium might be replaced by a larger number of periods of another element, but that will not change its universal character.)

But if we wanted to ground all units in physical constants, which ones should we choose? There are often many, many ways to express a base unit through a set of constants. Using the constants from the table above, there are thirty (thirty!) ways to combine them to make a mass dimension:

Thirty ways to combine constants to make a mass dimension

Because of the varying precision of the constants, the combinations are also of varying precision (and of course, of different numerical values):

Combinations are of varying precision

Now the question is which constants should be selected to define the units of the metric system? Many aspects, from precision to practicality to the overall coherence (meaning there is no need for various prefactors in equations to compensate for unit factors) must be kept in mind. We want our formulas to look like F = m a, rather than containing explicit numbers such as in the Thanksgiving turkey cooking time formulas (assuming a spherical turkey):

Turkey cooking time formulas

Or in the PLANK formula (Max hates this name) for the calculation of indicated horsepower:

Calculation of indicated horsepower

Here in the clouds of heaven, we can’t use physical computers, so I am glad that I can use the more virtual Wolfram Open Cloud to do my calculations and mathematical experimentation. I have played for many hours with the interactive units-constants explorer below, and agree fully with the choices made by the International Bureau of Weights and Measures (BIPM), meaning the speed of light, the Planck constant, the elementary charge, the Avogadro constant, and the Boltzmann constant. I showed a preliminary version of this blog to Edgar, and he was very pleased to see this table based on his old paper:

Tables based on Edgar's paper

I want to mention that the most popular physical constant, the fine-structure constant, is not really useful for building units. Just by its special status as a unitless physical quantity, it can’t be directly connected to a unit. But it is, of course, one of the most important physical constants in our universe (and is probably only surpassed by the simple integer constant describing how many spatial dimensions our universe has). Often various dimensionless combinations can be found from a given set of physical constants because of relations between the constants, such as c2=1/(ε0 μ0). Here are some examples:

Various dimensionless combinations found from a given set of physical constants

But there is probably no other constant that Paul Adrien Maurice Dirac and I have discussed more over the last 32 years than the fine-structure constant α=e2/(4 𝜋 ε0 ħ c). Although up here we meet with the Lord regularly in a friendly and productive atmosphere, he still refuses to tell us a closed form of α . And he will not even tell us if he selected the same value for all times and all places. For the related topic of the values of the constants chosen, he also refuses to discuss fine tuning and alternative values. He says that he chose a beautiful expression, and one day we will find out. He gave some bounds, but they were not much sharper than the ones we know from the Earth’s existence. So, like living mortals, for now we must just guess mathematical formulas:

Conjectured exact forms of the fine-structure constant

Or guess combinations of constants:

Guessing combinations of constants

And here is one of my favorite coincidences:

Favorite coincidence

And a few more:

A few more coincidences

The rise in importance and usage of the physical constants is nicely reflected in the scientific literature. Here is a plot of how often (in publications per year) the most common constants appear in scientific publications from the publishing company Springer. The logarithmic vertical axis shows the exponential increase in how often physical constants are mentioned:

How often the most common constants appear in scientific publications from the publishing company Springer

While the fundamental constants are everywhere in physics and chemistry, one does not see them so much in newspapers, movies, or advertisements, as they deserve. I was very pleased to see the introduction of the Measures for Measure column in Nature recently.

Fundamental constants in Measures for Measure column

To give the physical constants the presence they deserve, I hope that before (or at least not long after) the redefinition we will see some interesting video games released that allow players to change the values of at least c, G, and h to see how the world around us would change if the constants had different values. It makes me want to play such a video game right now. With large values of h, not only could one build a world with macroscopic Schrödinger cats, but interpersonal correlations would also become much stronger. This could make the constants known to children at a young age. Such a video game would be a kind of twenty-first-century Mr. Tompkins adventure:

Mr. Tompkins

It will be interesting to see how quickly and efficiently the human brain will adapt to a possible life in a different universe. Initial research seems to be pretty encouraging. But maybe our world and our heaven are really especially fine-tuned.

The current SI and the issue with the kilogram

The modern system of units, the current SI has, in addition to the second, the meter, and the kilogram, other units. The ampere is defined as the force between two infinitely long wires, the kelvin through the triple point of water, the mole through the kilogram and carbon-12, and the candela through blackbody radiation. If you have never read the SI brochure, I strongly encourage you to do so.

Two infinitely long wires are surely macroscopic and do not fulfill Maxwell’s demand (but it is at least an idealized system), and de facto it defines the magnetic constant. And the triple point of water needs a macroscopic amount of water. This is not perfect, but it’s OK. Carbon-12 atoms are already microscopic objects. Blackbody radiation is again an ensemble of microscopic objects, but a very reproducible one. So some of the current SI fulfills in some sense Maxwell’s goals.

But most of my insomnia over the last 50 years has been caused by the kilogram. It caused me real headaches, and sometimes even nightmares, when we could not put it on the same level as the second and the meter.

In the year of my physical death (1799), the first prototype of a kilogram, a little platinum cylinder, was made. About 39.7 mm in height and 39.4 mm in diameter, this was for 75 years “the” kilogram. It was made from the forged platinum sponge made by Janety. Miller gives a lot of the details of this kilogram. It is today in the Archives nationales. In 1879, Johnson Matthey (in Britain—the country I fought with my ships!), using new melting techniques, made the material for three new kilogram prototypes. Because of a slightly higher density, these kilograms were slightly smaller in size, at 39.14 mm in height. The cylinder was called KIII and became the current international prototype kilogram K. Here is the last sentence from the preface of the mass determination of the the international prototype kilogram from 1885, introducing K:

The cylinder was called KIII and became the current international prototype kilogram K

A few kilograms were selected and carefully compared to our original kilogram; for the detailed measurements, see this book. All three kilograms had a mass less than 1 mg different from the original kilogram. But one stood out: it had a mass difference of less than 0.01 mg compared to the original kilogram. For a detailed history of the making of K, see Quinn. And so, still today, per definition, a kilogram is the mass of a small metal cylinder sitting in a safe at the International Bureau of Weights and Measures near Paris. (It’s technically actually not on French soil, but this is another issue.) In the safe, which needs three keys to be opened, under three glass domes, is a small platinum-iridium cylinder that defines what a kilogram is. For the reader’s geographical orientation, here is a map of Paris with the current kilogram prototype (in the southwest), our original one (in the northeast), both with a yellow border, and some other Paris visitor essentials:

Map of Paris with current kilogram prototype (in the southwest) and our original one (in the northeast)

In addition to being an artifact, it was so difficult to get access to the kilogram (which made me unhappy). Once a year, a small group of people checks if it is still there, and every few years its weight (mass) is measured. Of course, the result is, per definition and the agreement made at the first General Conference on Weights and Measures in 1889, exactly one kilogram.

Over the years the original kilogram prototype gained dozens of siblings in the form of other countries’ national prototypes, all of the same size, material, and weight (up to a few micrograms, which are carefully recorded). (I wish the internet had been invented earlier, so that I had a communication path to tell what happened with the stolen Argentine prototype 45; since then, it has been melted down.) At least, when they were made they had the same weight. Same material, same size, similarly stored—one would expect that all these cylinders would keep their weight. But this is not what history showed. Rather than all staying at the same weight, repeated measurements showed that virtually all other prototypes got heavier and heavier over the years. Or, more probable, the international prototype has gotten lighter.

From my place here in heaven I have watched many of these the comparisons with both great interest and concern. Comparing their weights (a.k.a. masses) is a big ordeal. First you must get the national prototypes to Paris. I have silently listened in on long discussions with TSA members (and other countries’ equivalents) when a metrologist comes with a kilogram of platinum, worth north of $50k in materials—and add another $20k for the making (in its cute, golden, shiny, special travel container that should only be opened in a clean room with gloves and mouth guard, and never ever touched by a human hand)—and explains all of this to the TSA. An official letter is of great help here. The instances that I have watched from up here were even funnier than the scene in the movie 1001 Grams.

Then comes a complicated cleaning procedure with hot water, alcohol, and UV light. The kilograms all lose weight in this process. And they are all carefully compared with each other. And the result is that with very high probability, “the” kilogram, our beloved international prototype kilogram (IPK), loses weight. This fact steals my sleep.

Here are the results from the third periodic verification (1988 to 1992). The graphic shows the weight difference compared to the international prototype:

Weight difference between countries' national kilograms versus the international prototype

For some newer measurements from the last two years, see this paper.

What I mean by “the” kilogram losing weight is the following. Per definition (independent of its “real objective” mass), the international prototype has a mass of exactly 1 kg. Compared with this mass, most other kilogram prototypes of the world seem to gain weight. As the other prototypes were made, using different techniques over more than 100 years, very likely the real issue is that the international prototype is losing weight. (And no, it is not because of Ceaușescu’s greed and theft of platinum that Romania’s prototype is so much lighter; in 1889 the Romanian prototype was already 953 μg lighter than the international prototype kilogram.)

Josiah Willard Gibbs, who has been my friend up here for more than 110 years, always mentions that his home country is still using the pound rather than the kilogram. His vote in this year’s election would clearly go to Bernie. But at least the pound is an exact fraction of the kilogram, so anything that will happen to the kilogram will affect the pound the same way:

The pound is an exact fraction of the kilogram

The new SI

But soon all my dreams and centuries-long hopes will come true and I can find sleep again. In 2018, two years from now, the greatest change in the history of units and measures since my work with my friend Laplace and the others will happen.

All units will be based on things that are accessible to everybody everywhere (assuming access to some modern physical instruments and devices).

The so-called new SI will reduce all of the seven base units to seven fundamental constants of physics or basic properties of microscopic objects. Down on Earth, they started calling them “reference constants.”

Some people also call the new SI quantum SI because of its dependence on the Planck constant h and the elementary charge e. In addition to the importance of the Planck constant h in quantum mechanics, the following two quantum effects are connecting h and e: the Josephson effect and its associated Josephson constant KJ = 2 e / h, and the quantum Hall effect with the von Klitzing constant RK = h / e2. The quantum metrological triangle: connecting frequency and electric current through a singe electron tunneling device, connecting frequency and voltage through the Josephson effect, and connecting voltage and electric current through the quantum Hall effect will be a beautiful realization of electric quantities. (One day in the future, as Penin has pointed out, we will have to worry about second-order QED effects, but this will be many years from now.)

The BIPM already has a new logo for the future International System of Units:

New logo for the future International System of Units

Concretely, the proposal is:

    1. The second will continue to be defined through cesium atom microwave radiation.

    2. The meter will continue to be defined through an exactly defined speed of light.

    3. The kilogram will be defined through an exactly defined value of the Planck constant.

    4. The ampere will be defined through an exactly defined value of the elementary charge.

    5. The kelvin will be defined through an exactly defined value of the Boltzmann constant.

    6. The mole will be defined through an exact (counting) value.

    7. The candela will be defined through an exact value of the candela steradian-to-watt ratio at a fixed frequency (already now the case).

I highly recommend a reading of the draft of the new SI brochure. Laplace and I have discussed it a lot here in heaven, and (modulo some small issues) we love it. Here is a quick word cloud summary of the new SI brochure:

Word cloud summary of new SI brochure

Before I forget, and before continuing the kilogram discussion, some comments on the other units.

The second

I still remember when we discussed introducing metric time in the 1790s: a 10-hour day, with 100 minutes per hour, and 100 seconds per minute, and we were so excited by this prospect. In hindsight, this wasn’t such a good idea. The habits of people are sometimes too hard to change. And I am so glad I could get Albert Einstein interested in the whole metrology over the past 50 years. We have had so many discussions about the meaning of time and that the second measures local time, and the difference between measurable local time and coordinate time. But this is a discussion for another day. The uncertainty of a second is today less than 10−16. Maybe one day in the future, cesium will be replaced by aluminum or other elements to achieve 100 to 1,000 times smaller uncertainties. But this does not alter the spirit of the new SI; it’s just a small technical change. (For a detailed history of the second, see this article.)

Clearly, today’s definition of second is much better than one that depends on the Earth. At a time when stock market prices are compared at the microsecond level, the change of the length of a day due to earthquakes, polar melting, continental drift, and other phenomena over a century is quite large:

Change in the length of a day over time

The mole

I have heard some chemists complain that their beloved unit, the mole, introduced into the SI only in 1971, will become trivialized. In the currently used SI, the mole relates to an actual chemical, carbon-12. In the new SI, it will be just a count of objects. A true chemical equivalent to a baker’s dozen, the chemist’s dozen. Based on the Avogadro constant, the mole is crucial in connecting the micro world with the macro world. A more down-to-Earth definition of the mole matters for such quantitative values—for example, pH values. The second is the SI base unit of time; the mole is the SI base unit of the physical quantity, or amount of substance:

Mole is the SI base unit of the physical quantity

But not everybody likes the term “amount of substance.” Even this year (2016), alternative names are being proposed, e.g. stoichiometric amount. Over the last decades, a variety of names have been proposed to replace “amount of substance.” Here are some examples:

Alternative names for "amount of substance"

But the SI system only defines the unit “mole.” The naming of the physical quantity that is measured in moles is up to the International Union of Pure and Applied Chemistry.

For recent discussions from this year, see the article by Leonard, “Why Is ‘Amount of Substance’ So Poorly Understood? The Mysterious Avogadro Constant Is the Culprit!”, and the article by Giunta, “What’s in a Name? Amount of Substance, Chemical Amount, and Stoichiometric Amount.”

Wouldn’t it be nice if we could have made a “perfect cube” (number) that would represent the Avogadro number? Such a representation would be easy to conceptualize. This was suggested a few years back, and at the time was compatible with the value of the Avogadro constant, and would have been a cube of edge length 84,446,888 items. I asked Srinivasa Ramanujan, while playing a heavenly round of cricket with him and Godfrey Harold Hardy, his longtime friend, what’s special about 84,446,888, but he hasn’t come up with anything deep yet. He said that 84,446,888=2^3*17*620933, and that 620,933 appears starting at position 1,031,622 in the decimal digits of 𝜋, but I can’t see any metrological relevance in this. With the latest value of the Avogadro constant, no third power of an integer number falls into the possible values, so no wonder there is nothing special.

Here is the latest CODATA (Committee on Data for Science and Technology) value from the NIST Reference on Constants, Units, and Uncertainty:

Latest CODATA value from NIST Reference on Constants, Units, and Uncertainty

The candidate number 84,446,885 cubed is too small, and adding a one gives too large a number:

Candidate number 84,446,885

Interestingly, if we would settle for a body-centered lattice, with one additional atom per unit cell, then we could still maintain a cube interpretation:

Maintaining a cube interpretation with a body-centered lattice

A face-centered lattice would not work, either:

Using a face-centered lattice

But a diamond (silicon) lattice would work:

Diamond (silicon) lattice

To summarize:

Lattice summary

Here is a little trivia:

Sometime amid the heights of the Cold War, the accepted value of the Avogadro constant suddenly changed in the third digit! This was quite a change, considering that there is currently a lingering controversy regarding the discrepancy in the sixth digit. Can you explain the sudden decrease in Avogadro constant during the Cold War?

Do you know the answer? If not, see here or here.

But I am diverting from my main thread of thoughts. As I am more interested in the mechanical units anyway, I will let my old friend Antoine Lavoisier judge the new mole definition, as he was the chemist on our team.

The kelvin

Josiah Willard Gibbs even convinced me that temperature should be defined mechanically. I am still trying to understand John von Neumann’s opinion on this subject, but because I never fully understood his evening lectures on type II and type III factors, I don’t have a firm opinion on the kelvin. Different temperatures correspond to inequivalent representations of the algebras. As I am currently still working my way through Ruetsche’s book, I haven’t made up my mind on how best to define the kelvin from an algebraic quantum field theory point of view. I had asked John for his opinion of a first-principles evaluation of h/k based on KMS states and Tomita–Takesaki theory, and even he wasn’t sure about it. He told me some things about thermal time and diamond temperature that I didn’t fully understand.

And then there is the possibility of deriving the value of the Boltzmann constant. Even 40 years after the Koppe–Huber paper, it is not clear whether that is possible. It is a subject I am still pondering, and I am taking various options into account. As mentioned earlier, the meaning of temperature and how to define its units are not fully clear to me. There is no question that the new definition of the kelvin will be a big step forward, but I don’t know if it will be the end of the story.

The ampere

This is one of the most direct, intuitive, and beautiful definitions in the new SI: the current is just the number of electrons that flow per second. Defining the value of the ampere through the number of elementary charges moved around is just a stroke of genius. When it was first suggested, Robert Andrews Millikan up here was so happy he invited many of us to an afternoon gathering in his yard. In practice (and in theoretical calculations), we have to exercise a bit more care, as we mainly measure the electric current of electrons in crystalline objects, where electrons are no longer “bare” electrons, but quasiparticles. But we’ve known since 1959, thanks to Walter Kohn, that we shouldn’t worry too much about this, and can expect the charge of the electron in a crystal to be the same as the charge of a bare electron. As an elementary charge is a pretty small charge, the issue of measuring fractional charges as currents is not a practical one for now. I personally feel that Robert’s contributions to determining the values of the physical constants at the beginning of the twentieth century are not pointed out enough (Robert Andrews really knew what he was doing).

The candela

No, you will not get me started on my opinion of the candela. Does it deserve to be a base unit? The whole story of human-centered physiological units is a complicated one. Obviously, they are enormously useful. We all see and hear every day, even every second. But what if the human race continues to develop (in Darwin’s sense)? How will that fit together with our “for all time” mantra? I have my thoughts on this, but laying them out here and now would sidetrack me from my main discussion topic for today.

Why seven base units?

I also want to mention that originally I was very concerned about the introduction of some of the additional units that are in use today. In endless discussions with my chess partner Carl Friedrich Gauss here in heaven, he had convinced me that we can reduce all measurements of electric quantities to measurements of mechanical properties, and I was already pretty fluent in his CGS system, so originally I did not like the additional units at all. But as a human-created unit system, the SI should be as useful as possible, and if seven units do the job best, it should be seven. In principle, one could even eliminate a mass unit and express a mass through time and length. In addition to just being impractical, I strongly believe this is conceptually not the right approach. I recently discussed this with Carl Friedrich. He said he had the idea of just using time and length in the late 1820s, but abandoned such an approach. While alive, Carl Friedrich never had the opportunity to discuss the notion of mass as a synthetic a priori with Immanuel, but over the last century the two (Carl Friedrich and Immanuel) have agreed on mass as an a priori (at least in this universe).

Our motto for the original metric system was, “For all time, for all people.” The current SI already realizes “for all people,” and by grounding the new SI in the fundamental constants of physics, the first promise, “for all time,” will finally become true. You cannot imagine what this means to me. If they change at all, fundamental constants seem to change at rates of at most about 10^-18 per year. This is many orders of magnitude away from the currently realized precisions for most units.

Granted, some things will become numerically a bit more cumbersome in the new SI. If we take the current CODATA values as exact values, then, for instance, the von Klitzing constant h/e^2 will be a big fraction:

von Klitzing constant with current CODATA values taken as exact, as a big fraction

The integer part of the last result is, of course, 25,812 Ω. Now, is this a periodic decimal fraction or a terminating one? The prime factorization of the denominator tells us that it is periodic:

Prime factorization of the denominator tells us that it is periodic
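
A sketch of this computation, treating the 2014 CODATA values of h and e as exact (which is all that “taking the current CODATA values as exact” amounts to; the concrete digits are my assumption):

h = 6626070040 10^-43;  (* h = 6.626070040×10^-34 J·s, taken as exact *)
e = 16021766208 10^-29; (* e = 1.6021766208×10^-19 C, taken as exact *)
rK = h/e^2;
N[rK]  (* ≈ 25812.8… Ω *)
FactorInteger[Denominator[rK]]
(* any prime factor other than 2 or 5 makes the decimal expansion periodic *)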

Progress is good, but as happens so often, it comes at a price. While the new constant-based definitions of the SI units are beautiful, they are a bit harder to understand, and physics and chemistry teachers will have to come up with some innovative ways to explain the new definitions to pupils. (For recent first attempts, see this paper and this paper.)

And in how many textbooks have I seen that the value of the magnetic constant (permeability of the vacuum) μ0 is 4π×10^-7 N/A^2? In the new SI, the magnetic and the electric constants will become measured quantities with an error term. Concretely, from the current exact value:

Current exact value

With the Planck constant h and the elementary charge e fixed exactly, the value of μ0 would incur the uncertainty of the fine-structure constant α. Fortunately, the dimensionless fine-structure constant α is one of the best-known constants:

Dimensionless fine-structure constant alpha
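
The dependence can be made concrete through the relation μ0 = 2 h α/(e^2 c). A minimal sketch, reusing the exact-by-convention h and e from above together with the 2014 CODATA value of α (my choice of values):

c = 299792458;         (* m/s, exact *)
α = 7.2973525664*^-3;  (* 2014 CODATA; relative uncertainty ≈ 2.3×10^-10 *)
μ0 = 2 h α/(e^2 c)     (* ≈ 4π×10^-7 in N/A^2, now carrying α's uncertainty *)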

But so what? Textbook publishers will not mind having a reason to print new editions of all their books, and to sell more of them.

With μ0 a measured quantity in the future, I predict we will see many more uses of the current underdog among the fundamental constants, the impedance of the vacuum Z:

Impedance of the vacuum Z

I applaud all the physicists and metrologists for the hard work they’ve carried out in continuation of my committee’s work over the last 225 years, which culminated in the new, physical constant-based definitions of the units. So do my fellow original committee members. These definitions are beautiful and truly forever.

(I know it is a bit indiscreet to reveal this, but Joseph Louis Lagrange told me privately that he regrets a bit that we did not introduce base and derived units as such in the 1790s. Now, with the Planck constant being so important for the new SI, he thinks we should have had a named base unit for action (the time integral over his Lagrangian), and then made mass a derived quantity. While this would be the high road of classical mechanics, he does understand that a base unit for action would not have become popular with farmers and peasants, who needed a daily unit for masses.)

I don’t have the time today to go into any detailed discussion of the quarterly garden fests that Percy Williams Bridgman holds. As my schedule allows, I try to participate in every single one of them. It is always so intellectually stimulating to listen to the general discussions about the pros and cons of alternative unit systems. As you can imagine, Julius Wallot, Jan de Boer, Edward Guggenheim, William Stroud, Giovanni Giorgi, Otto Hölder, Rudolf Fleischmann, Ulrich Stille, Hassler Whitney, and Chester Page are the most outspoken at these parties. The discussions about the coherence and completeness of unit systems, and about what a physical quantity is, go on and on. At the last event, the discussion of whether probability is or is not a physical quantity went on for six hours, with no decision at the end. I suggested inviting Richard von Mises and Hans Reichenbach the next time; they might have something to contribute. At the parties, Otto always complains that mathematicians no longer care as much about units and unit systems as they did in the past, and he is so happy to see at least theoretical physicists pick up the topic from time to time, like the recent vector-based differentiation of physical quantities or the recent paper on the general structure of unit systems. And when he saw in an article from last year’s Dagstuhl proceedings that modern type theory had met units and physical dimensions, he was the most excited he had been in decades.

Interestingly, basically the same discussions came up three years ago (and since then regularly) in the monthly mountain walks that Claude Shannon organizes. Leo Szilard argues that the “bit” has to become a base unit of the SI in the future. In his opinion, information as a physical quantity has been grossly underrated.

Once again: the new SI will be just great! There are a few more details that I would like to see changed. One is the current status of the radian and the steradian, which SP 811 now defines as derived units, saying, “The radian and steradian are special names for the number one that may be used to convey information about the quantity concerned.” But I see with satisfaction that the experts have recently been discussing this topic in quite some detail.

To celebrate the upcoming new SI here in heaven, we held a crowd-based fundraiser. We raised enough funds to actually hire the master himself, Michelangelo. He will be making a sculpture. Some early sketches shown to the committee (I am fortunate to have the honorary chairmanship) are intriguing. I am sure it will be an eternal piece rivaling the David. One day every human will have the chance to see it (may it be a long time until then, depending on your current age and your smoking habits). In addition to the constants and the units on their own, he plans to also work Planck himself, Boltzmann, and Avogadro into the sculpture, as theirs are the only three constants named after a person. Max was immediately available to model, but we are still having issues getting permission for Boltzmann to leave hell for a while to be a model. (Millikan and Fletcher were, understandably, a bit disappointed.) Ironically, it was Paul Adrien Maurice Dirac who came up with a great idea on how to convince Lucifer to get Boltzmann a Sabbath-ical. Ironically—because Paul himself is not so keen on the new SI because of the time dependence of the constants themselves over billions of years. But anyway, Paul’s clever idea was to point out that three fundamental constants, the Planck constant (6.62…×10^-34 J·s), the Avogadro constant (6.02…×10^23/mol), and the gravitational constant (6.6…×10^-11 m^3/(kg·s^2)), all start with the digit 6. And forming the number of the beast, 666, through three fundamental constants really made an impression on Lucifer, and I expect him to approve Ludwig’s temporary leave.

As an ex-mariner with an affinity for the oceans, I also pointed out to Lucifer that his height (2,443 m, according to a detailed re-analysis of Dante’s Divine Comedy) is exactly 66% of the mean ocean depth. He liked this cute fact so much that he owes me a favor.

Mean depth of the oceans

So far, Lucifer insists on having the combination G (m_e/(h k))^(1/2) on the sculpture. For obvious reasons:

Lucifer's favorite combination
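
The reader can check Lucifer’s numerology with the built-in physical constants; what matters is the numerical value in SI base units (a sketch):

N[UnitConvert[Quantity["GravitationalConstant"]*
   Sqrt[Quantity["ElectronMass"]/(Quantity["PlanckConstant"] Quantity["BoltzmannConstant"])]]]
(* magnitude ≈ 666 in SI base units; the units themselves are the odd ones they are *)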

We will see how this discussion turns out. As there is really nothing wrong with this combination, even if it is not physically meaningful, we might agree to his demands.

The entire new SI 2018 committee up here has also already agreed on the music: we will play Wojciech Kilar’s Sinfonia de motu, which uniquely represents the physical constants as a musical composition using only the notes c, g, e, h (b in the English-speaking world), and a (where a represents the cesium atom). And we could convince Rainer Maria Rilke to write a poem for the event. Needless to say, Wojciech, who has now been with us for more than two years, agreed, and even offered to compose an exact version.

Down on Earth, the arrival of the constants-based units will surely also be celebrated in many ways and many places. I am looking forward especially to the documentary The State of the Unit, which will be about the history of the kilogram and its redefinition through the Planck constant.

The path to the redefinition of the kilogram

As I already touched on, the most central point of the new SI will be the new definition of the kilogram. After all, the kilogram is the one artifact still present in the current SI that should be eliminated. In addition to the kilogram itself, many more derived units depend on it, say, the volt: 1 volt = 1 kilogram·meter^2/(ampere·second^3). Redefining the kilogram will make many (at least the theoretically inclined) electricians happy. Electricians have been using their exact conventional values for 25 years.

Exact conventional values

The value resulting from the conventional values for the von Klitzing constant and the Josephson constant is very near the latest CODATA value of the Planck constant:

Value resulting from the conventional values for the von Klitzing constant and the Josephson constant
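
For the reader who wants to redo this: the conventional 1990 values pin down a Planck constant value via h = 4/(K_J-90^2 R_K-90). A sketch with the conventional values (exact by definition):

kJ90 = 483597.9*^9;  (* conventional Josephson constant, Hz/V *)
rK90 = 25812.807;    (* conventional von Klitzing constant, Ω *)
4/(kJ90^2 rK90)      (* ≈ 6.626×10^-34 J·s, very near the CODATA h *)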

A side note on the physical quantity that the kilogram represents: The kilogram is the SI base unit for the physical quantity mass. Mass is most relevant for mechanics. Through Newton’s second law F = m a, mass is intimately related to force. Assume we have understood length and time (and so also acceleration). What is next in line, force or mass? William Francis Magie wrote in 1912:

It would be very improper to dogmatize, and I shall accordingly have to crave your pardon for a frequent expression of my own opinion, believing it less objectionable to be egotistic than to be dogmatic…. The first question which I shall consider is that raised by the advocates of the dynamical definition of force, as to the order in which the concepts of force and mass come in thought when one is constructing the science of mechanics, or in other words, whether force or mass is the primary concept…. He [Newton] further supplies the measurement of mass as a fundamental quantity which is needed to establish the dynamical measure of force…. I cannot find that Lagrange gives any definition of mass…. To get the measure of mass we must start with the intuitional knowledge of force, and use it in the experiments by which we first define and then measure mass…. Now owing to the permanency of masses of matter it is convenient to construct our system of units with a mass as one of the fundamental units.

And Henri Poincaré in his Science and Method says, “Knowing force, it is easy to define mass; this time the definition should be borrowed from dynamics; there is no way of doing otherwise, since the end to be attained is to give understanding of the distinction between mass and weight. Here again, the definition should be led up to by experiments.”

While I always had an intuitive feeling for the meaning of mass in mechanics, up until the middle of the twentieth century, I never was able to put it into a crystal-clear statement. Only over the last decades, with the help of Valentine Bargmann and Jean-Marie Souriau did I fully understand the role of mass in mechanics: mass is an element in the second cohomology group of the Lie algebra of the Galilei group.

Mass as a physical quantity manifests itself in different domains of physics. In classical mechanics it is related to dynamics, in general relativity to the curvature of space, and in quantum field theory mass enters through one of the Casimir operators of the Poincaré group.

In our weekly “Philosophy of Physics” seminar, this year led by Immanuel himself, Hans Reichenbach, and Carl Friedrich von Weizsäcker (Pascual Jordan suggested this Dreimännerführung of the seminars), we discuss the nature of mass in five seminars. The topics for this year’s series are mass superselection rules in nonrelativistic and relativistic theories, the concept and uses of negative mass, mass-time uncertainty relations, non-Higgs mechanisms for mass generation, and mass scaling in biology and sports. I need at least three days of preparation for each seminar, as the recommended reading list is more than nine pages—and this year they emphasize the condensed matter appearance of these phenomena a lot! I am really looking forward to this year’s mass seminars; I am sure that I will learn a lot about the nature of mass. I hope Ehrenfest, Pauli, and Landau don’t constantly interrupt the speakers, as they did last year (the talk on mass in general relativity was particularly bad). In the last seminar of the series, I have to give my talk. In addition to metabolic scaling laws, my favorite example is the following:

Shaking frequency of a wet animal

I also intend to speak about the recently found predator-prey power laws.

For sports, I already have a good example inspired by Texier et al.: the relation between the mass of a sports ball and its maximal speed. The following diagram lets me conjecture speed_max ~ ln(mass). In the downloadable notebook, mouse over to see the sport, the mass of the ball, and the top speeds:

Mass of sports ball and its maximal speed

For the negative mass seminar, we had some interesting homework: visualize the trajectories of a classical point particle with complex mass in a double-well potential. As I had seen some of Bender’s papers on complex energy trajectories, the trajectories I got for complex masses did not surprise me:

Trajectories for complex masses
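
For readers who want to redo the homework: a minimal sketch, where the double-well potential V(x) = x^4 - x^2 and the complex mass m = 1 + i/2 are both my (arbitrary) choices:

m = 1 + I/2;         (* assumed complex mass *)
V[x_] := x^4 - x^2;  (* assumed double-well potential *)
sol = NDSolve[{m x''[t] == -V'[x[t]], x[0] == 0.5, x'[0] == 0}, x, {t, 0, 40}];
ParametricPlot[Evaluate[{Re[x[t]], Im[x[t]]} /. sol], {t, 0, 40},
 AxesLabel -> {"Re x", "Im x"}]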

End side note.

The complete new definition reads thus: The kilogram, kg, is the unit of mass; its magnitude is set by fixing the numerical value of the Planck constant to be equal to exactly 6.62606X×10^-34 when it is expressed in the unit s^-1·m^2·kg, which is equal to J·s. Here X stands for one or more digits, soon to be stated explicitly, that will represent the latest experimental value.

And the kilogram cylinder can finally retire as the world’s most precious artifact. I expect that soon after this event, the international kilogram prototype will finally be displayed in the Louvre. As the Louvre had been declared “a place for bringing together monuments of all the sciences and arts” in May 1791 and opened in 1793, all of us on the committee agreed that one day, when the original kilogram was to be replaced with something else, it would end up in the Louvre. Ruling the kingdom of mass for more than a century, the IPK deserves its eternal place as a true monument of the sciences. I will make a bet—in a few years the retired kilogram, under its three glass domes, will become one of the Louvre’s most popular objects. And the queue that physicists, chemists, mathematicians, engineers, and metrologists will form to see it will, in a few years, be longer than the queue for the Mona Lisa. I would also bet that the beautiful miniature kilogram replicas will within a few years become the best-selling item in the Louvre’s museum store:

Miniature kilogram replicas

At the same time, speaking as a metrologist, maybe the international kilogram prototype should stay where it is for another 50 years, so that it can be measured against a post-2018 kilogram made from an exact value of the Planck constant. Then we would finally know for sure whether the international kilogram prototype is (or was) really losing weight.

Let me quickly recapitulate the steps toward the new “electronic” kilogram.

Intuitively, one could have thought to define the kilogram through the Avogadro constant as a certain number of atoms of, say, ^12C. But because of binding energies and surface effects, in a pile of carbon (e.g. diamond or graphene) made up of n = round(1 kg/m(^12C)) atoms to realize the mass of one kilogram, all n carbon-12 atoms would have to be well separated. Otherwise we would have a mass defect (remember Albert’s famous E = m c^2 formula), and the relative mass difference between one kilogram of compact carbon and the same number of individual, well-separated atoms is on the order of 10^-10. Using the carbon-carbon bond energy, here is an estimation of the mass difference:

Estimation of the mass difference using the carbon-carbon bond energy
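
A back-of-the-envelope version of this estimate; the ~2 carbon-carbon bonds per atom (as in diamond) and the ~3.6 eV bond energy are my rounded assumptions:

nAtoms = UnitConvert[Quantity[1, "Kilograms"]/Quantity[12, "AtomicMassUnit"]];
UnitConvert[2 nAtoms Quantity[3.6, "Electronvolts"]/Quantity["SpeedOfLight"]^2,
 "Kilograms"]  (* on the order of 10^-10 kg for the 1 kg pile *)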

A mass difference of this size can, for a 1 kg weight, be detected without problems with a modern mass comparator.

To give a sense of scale, this would be equivalent to the (Einsteinian) relativistic mass conversion of the energy expenditure of fencing for most of a day:

Energy expenditure of fencing for most of a day

This does not mean one could not define a kilogram through the mass of an atom or a fraction of it. Given the mass of a carbon atom m(^12C), the atomic mass constant u = m(^12C)/12 follows, and using u we can easily connect to the Planck constant:

Connecting to the Planck constant

I read with great interest the recent comparison of using different sets of constants for the kilogram definition. Of course, if the mass of a ^12C atom were the defined value, then the Planck constant would become a measured, meaning nonexact, value. For me, having an exact value for the Planck constant is aesthetically preferable.

I have been so excited over the last decade following the steps toward the redefinition of the kilogram. For more than 20 years now, there has been a light visible at the end of the tunnel, promising to remove the kilogram artifact from its throne.

And when I read 11 years ago the article by Ian Mills, Peter Mohr, Terry Quinn, Barry Taylor, and Edwin Williams entitled “Redefinition of the Kilogram: A Decision Whose Time Has Come” in Metrologia (my second-favorite, late-morning Tuesday monthly read, after the daily New Arrivals, a joint publication of Hells’ Press, the Heaven Publishing Group, Jannah Media, and Deva University Press), I knew that soon my dreams would come true. The moment I read the Appendix A.1 Definitions that fix the value of the Planck constant h, I knew that was the way to go. While the idea had been floating around for much longer, it now became a real program to be implemented within a decade (give or take a few years).

James Clerk Maxwell wrote in his 1873 A Treatise on Electricity and Magnetism:

In framing a universal system of units we may either deduce the unit of mass in this way from those of length and time already defined, and this we can do to a rough approximation in the present state of science; or, if we expect soon to be able to determine the mass of a single molecule of a standard substance, we may wait for this determination before fixing a universal standard of mass.

Until around 2005, James Clerk thought that mass should be defined through the mass of an atom, but he came around over the last decade and now favors the definition through Planck’s constant.

In a discussion with Albert Einstein and Max Planck (I believe this was in the early seventies) in a Vienna-style coffee house (Max loves the Sachertorte and was so happy when Franz and Eduard Sacher opened their now-famous HHS (“Heavenly Hotel Sacher”)), Albert suggested using his two famous equations, E = m c^2 and E = h f, to solve for m to get m = h f/c^2. So, if we define h as was done with c, then we know m because we can measure frequencies pretty well. (Compton was arguing that this is just his equation rewritten, and Niels Bohr was remarking that we cannot really trust E = m c^2 because of its relatively weak experimental verification, but I think he was just mocking Einstein, retaliating for some of the Solvay Conference Gedankenexperiment discussions. And of course, Bohr could not resist bringing up Δm Δt ~ h/c^2 as a reason why we cannot define the second and the kilogram independently, as one implies an error in the other for any finite mass measurement time. But Léon Rosenfeld convinced Bohr that this is really quite remote, as for a one-day measurement time this limits the mass measurement precision to about 10^-52 kg for a kilogram mass m.)

An explicit frequency equivalent f = m c^2/h is not practical for a mass of a kilogram, as it would mean f ≈ 1.35×10^50 Hz, which is far, far too large for any experiment, dwarfing even the Planck frequency by about seven orders of magnitude. But some recent experiments from Berkeley from the last few years may allow the use of such techniques at the microscopic scale. For more than 25 years now, in every meeting of the HPS (Heavenly Physical Society), Louis de Broglie has insisted on these frequencies being real physical processes, not just convenient mathematical tools.

So we need to know the value of the Planck constant h. Still today, the kilogram is defined as the mass of the IPK. As a result, we can measure the value of h using the current definition of the kilogram. Once we know the value of h to a few times 10^-8 (this is basically where we are right now), we will then define a concrete value of h (very near or at the measured value). From then on, the kilogram will be implicitly defined through the value of the Planck constant. At the transition, the two definitions overlap in their uncertainties, and no discontinuities arise for any derived quantities. The international prototype has lost on the order of 50 μg over the last 100 years, which is a relative change of 5×10^-8, so a value for the Planck constant with an error less than 2×10^-8 does guarantee that the mass of objects will not change in a noticeable manner.

Looking back over the last 116 years, the value of the Planck constant has gained about seven digits in precision. A real success story! In his paper “Ueber das Gesetz der Energieverteilung im Normalspectrum,” Max Planck used the symbol h for the first time, and gave the first numerical value for the Planck constant (in a paper published a few months earlier, Max had used the symbol b instead of h):

Excerpts from "Ueber das Gesetz der Energieverteilung im Normalspectrum"

(I had asked Max why he chose the symbol h, and he said he couldn’t remember anymore. Anyway, he said it was a natural choice in conjunction with the symbol k for the Boltzmann constant. Sometimes one reads today that h was used to express the German word Hilfsgrösse (an auxiliary quantity); Max said that this was possible, but that he really doesn’t remember.)

In 1919, Raymond Thayer Birge published the first detailed comparison of various measurements of the Planck constant:

Various measurements of the Planck constant

From Planck’s value 6.55×10^-34 J·s to the 2016 value 6.626070073(94)×10^-34 J·s, amazing measurement progress has been made.

The next interactive Demonstration allows you to zoom in and see the progress in measuring h over the last century. Mouse over the bell curves (indicating the uncertainties of the values) in the notebook to see the experiment (for detailed discussions of many of the experiments for determining h, see this paper):

History of measurement of the Planck constant  h

There have been two major experiments carried out over the last few years that my original group eagerly followed from the heavens: the watt balance experiment (actually, there is more than one of them—one at NIST, two in Paris, one in Bern…) and the Avogadro project. As a person who built mechanical measurement devices when I was alive, I personally love the watt balance experiment. Building a mechanical device that, through a clever trick by Bryan Kibble, eliminates an unknown geometric quantity gets my applause. The recent do-it-yourself LEGO home version is especially fun. With an investment of a few hundred dollars, everybody can measure the Planck constant at home! The world has come a long way since my lifetime. You could perhaps even check your memory stick before and after you put a file on it and see if its mass has changed.
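
For readers who have not seen the trick: in the watt balance, the mechanical power m g v is equated with the electrical power U I, and both U and I are measured via the Josephson effect (U = f h/(2e)) and the quantum Hall effect (R = h/e^2). The unknown geometry drops out, and so, remarkably, does e. A symbolic sketch:

Clear[m, g, v, f1, f2, h, e];
Solve[m g v == (f1 h/(2 e)) ((f2 h/(2 e))/(h/e^2)), m]
(* {{m -> (f1 f2 h)/(4 g v)}}: the mass is tied directly to h and two frequencies *)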

But my dear friend Lavoisier, not unexpectedly, always loved the Avogadro project that determines the value of the Avogadro constant to high precision. Having 99.995% pure silicon makes the heart of a chemist beat faster. I deeply admire the efforts (and results) in making nearly perfect spheres out of it. The product of the Avogadro constant with the Planck constant, N_A h, is related to the Rydberg constant. Fortunately, as we saw above, the Rydberg constant is known to about 11 digits; this means that knowing N_A h to high precision allows us to find the value of our beloved Planck constant h to high precision. In my lifetime, we started to understand the nature of the chemical elements. We knew nothing about isotopes yet—if you had told me that there are more than 20 silicon isotopes, I would not even have understood the statement:

Silicon isotopes
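
The curious can get the list straight from the built-in isotope data:

isotopes = IsotopeData["Silicon"];
Length[isotopes]                              (* more than 20 *)
IsotopeData["Silicon28", "IsotopeAbundance"]  (* natural fraction of ^28Si *)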

I am deeply impressed by how mankind today can even sort individual atoms by their neutron count. The silicon spheres of the Avogadro project are 99.995% silicon-28—much, much more than the natural fraction of this isotope:

Silicon spheres of the Avogadro project

While the highest-end beam balances and mass comparators achieve precisions of 10^-11, they can only compare masses, not realize one. Once the Planck constant has a fixed value, a mass can be constructively realized using the watt balance.

I personally think the Planck constant is one of the most fascinating constants. It reigns in the micro world and is barely visible at macroscopic scales directly, yet every macroscopic object holds together just because of it.

A few years ago I was getting quite concerned that our dream of eternal unit definitions would never be realized. I could not get a good night’s sleep when the values for the Planck constant from the watt balance experiments and the Avogadro silicon sphere experiments were far apart. How relieved I was to see that over the last few years the discrepancies were resolved! And now the working mass is again in sync with the international prototype.

Before ending, let me say a few words about the Planck constant itself. The Planck constant is the archetypal quantity that one expects to appear in quantum-mechanical phenomena. And when the Planck constant goes to zero, we recover classical mechanics (in a singular limit). This is what I myself thought until recently. But since I go to the weekly afternoon lectures of Vladimir Arnold, which he started giving in the summer of 2010 after getting settled up here, I now have strong reservations against such simplistic views. In his lecture about high-dimensional geometry, he covered the symplectic camel; since then, I view the Heisenberg uncertainty relations more as a classical relic than a quantum property. And since Werner Heisenberg recently showed me the Brodsky–Hoyer paper on ħ expansions, I have a much more reserved view on the BZO cube (the Bronshtein–Zelmanov–Okun cGh physics cube). And let’s not forget recent attempts to express quantum mechanics without reference to Planck’s constant at all. While we understand a lot about the Planck constant, its obvious occurrences and uses (such as a “conversion factor” between frequency and energy of photons in a vacuum), I think its deepest secrets have not yet been discovered. We will need a long ride on a symplectic camel into the deserts of hypothetical multiverses to unlock it. And Paul Dirac thinks that the role of the Planck constant in classical mechanics is still not well enough understood.

For the longest time, Max himself thought that in phase space (classical, or through a Wigner transform), the minimal volume would be on the order of his constant h. As one of the fathers of quantum mechanics, Max still follows the conceptual developments today, especially the decoherence program. How amazed he was when sub-h structures were discovered 15 years ago. Eugene Wigner told me that he had conjectured such fine structures since the late 1930s. Since then, he has loved to play around with plotting Wigner functions for all kinds of hypergeometric potentials and quantum carpets. His favorite is still the Duffing oscillator’s Wigner function. A high-precision solution of the time-dependent Schrödinger equation followed by a fractional Fourier transform-based Wigner function construction can be done in a straightforward and fast way. Here is how a Gaussian initial wavepacket looks after three periods of the external force. The blue rectangle in the x–p plane has an area of h:

How Gaussian initial wavepacket looks after three periods of the external force

Here are some zoomed-in (colored according to the sign of the Wigner function) images of the last Wigner function. Each square has an area of 4 h and shows a variety of sub-Planckian structures:

Zoomed-in images of the last Wigner function

For me, the forthcoming definition of the kilogram through the Planck constant is a great intellectual and technological achievement of mankind. It represents two centuries of hard work at metrological institutes, and cements some of the deepest physical truths found in the twentieth century into the foundations of our unit system. At once a whole slew of units, unit conversions, and fundamental constants will be known with greater precision. (Make sure you get a new CODATA sheet after the redefinition and have the pocket card with the new constant values with you always until you know all the numbers by heart!) This will open a path to new physics and new technologies. In case you make your own experiments determining the values of the constants, keep in mind that the deadline for the inclusion of your values is July 1, 2017.

The transition from the platinum-iridium kilogram, historically denoted by a Fraktur K, to the kilogram based on the Planck constant h can be nicely visualized graphically as a 3D object that contains both characters. Rotating it shows a smooth transition of the projection shape from the one character to the other, representing over 200 years of progress in metrology and physics:

3D object of both the platinum-iridium kilogram and the Planck constant h

The interested reader can order a beautiful, shiny, 3D-printed version here. It will make a perfect gift for your significant other (or ask your significant other to get you one) for Christmas, to be ready for the 2018 redefinition, and you can show public support for it as a pendant or as earrings. (Available in a variety of metals; platinum is, obviously, the most natural choice, and it is under $5k—but the $82.36 polished silver version looks pretty nice too.)

Here are some images of golden-looking versions of KToh3D (up here, gold, not platinum, is the preferred metal color):

Golden-looking versions of KToh3D

I realize that not everybody is (or can be) as excited as I am about these developments. But I look forward to the year 2018 when, after about 225 years, the kilogram as a material artifact will retire and a fundamental constant will replace it. The new SI will base our most important measurement standards on twenty-first-century technology.

If the reader has questions or comments, don’t hesitate to email me at jeancharlesdeborda@gmail.com; based on recent advances in the technological implications of EPR=ER, we now have a much faster and more direct connection to Earth.

À tous les temps, à tous les peuples!

Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.

Profiling the Eyes: ϕaithful or ROTen? Or Both? http://blog.wolfram.com/2016/03/02/profiling-the-eyes-phiaithful-or-roten-or-both/ http://blog.wolfram.com/2016/03/02/profiling-the-eyes-phiaithful-or-roten-or-both/#comments Wed, 02 Mar 2016 15:26:12 +0000 Michael Trott http://blog.internal.wolfram.com/?p=29945 An investigation of the golden ratio’s appearance in the position of human faces in paintings and photographs.

There is a vast amount of literature on the appearance of the golden ratio in nature, in physiology and psychology, and in human artifacts (see this page on the golden ratio; these articles on the golden ratio in art, in nature, and in the human body; and this paper on the structure of the creative process in science and art). In the past thirty years, there has been increasing skepticism about the prevalence of the golden ratio in these domains. Earlier studies have been revisited or redone. See, for example, Foutakis, Markowsky on Greek temples, Foster et al., Holland, Benjafield, and Svobodova et al. for human physiology.

In my last blog, I analyzed the aspect ratios of more than one million old and new paintings. Based on psychological experiments from the second half of the nineteenth century, especially by Fechner in the 1870s, one would expect many paintings to have a height-to-width ratio equal to the golden ratio or its inverse. But the large sets of paintings analyzed did not confirm such a conjecture.

While we did not find the expected prevalence of the golden ratio in external measurements of paintings, maybe looking “inside” will show signs of the golden ratio (or its inverse)?

In today’s blog, we will analyze collections of paintings, photographs, and magazine covers that feature human faces. We will also analyze where human faces appear in a few selected movies.

The literature on art history and the aesthetics of photography puts forward a theory of dividing the canvas into thirds, horizontally and vertically. And when human faces are portrayed, two concrete rules for the position of the eyeline are often mentioned:

  • the rule of thirds: the eyeline should be 2/3 (≈0.67) from the bottom
  • the golden ratio rule: the eyeline should be at 1/(golden ratio) (≈0.62) from the bottom

The rule of thirds is often abbreviated as ROT. In 1998 Frascari and Ghirardini—in the spirit of Adolf Zeising, the father of the so-called golden numberism—coined the term “ϕaithful” (making clever use of the Greek symbol ϕ that is used to denote the golden ratio) to label the unrestricted belief in the primacy of the golden ratio. Some consider the rule of thirds an approximation of the golden ratio rule; “ROT on steroids” and similar phrases are used. Various photograph-related websites contain a lot of discussion about the relation of these two rules. For early uses of the rule of thirds, see Nafisi. For the more modern use starting in the eighteenth century, see this history of the rule of thirds. For a recent human-judgment-based evaluation of the rule of thirds in paintings and photographs, see Amirshahi et al.

So because we cannot determine which rule is more common by first-principle mathematical computations, let’s again look at some data. At what height, measured from the bottom, are the eyes in paintings showing human faces?

Eyeline heights in older paintings—more ROTen than ϕaithful

Let’s start with paintings. As with the previous blog, we will use a few different data sources. We will look at four painting collections: Wikimedia, the Smithsonian, Britain’s Your Paintings, and Saatchi.

If we want to analyze the positions of faces within a painting, we must first locate the faces. The function FindFaces comes in handy. While typically used for photographs, it works pretty well on (representational) paintings too. Here are a few randomly selected paintings of people from Wikimedia. First, the images are imported and the faces located and highlighted by a yellow, translucent rectangle. We see potentially different amounts of horizontal space around a face, but the vertical extent is pretty uniform, from the chin to the bottom of the forehead hairline.

Code for analyzing the positions of faces in paintings
Ols Maria; Portret van Karel I Lodewijk van de Palts; Catherine Brass Yates (Mrs. Richard Yates)
Italian Girl by the Well; Prince Eugène, vice-roi d'Italie; Dodo und ihr Bruder

A more detailed look reveals that the eyeline is approximately at 60% of the height of the selected face area. (Note that this is approximately 1/ϕ). To demonstrate the correctness of the 60%-of-the-face-height rule for some randomly selected images from Wikipedia, we show the resulting eyeline in red and the two lines ±5% above and below.

Eyeline at 60% of height of the face shown on Barack Obama, Mao Zedong, Carl Friedrich Gauss, Hillary Clinton, Gong Li, Magdalena Neuner
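
A minimal sketch of how such an overlay can be produced; FindFaces is assumed here to return corner pairs {{x1, y1}, {x2, y2}} in image coordinates, as it did in the version this post was written with:

eyelineOverlay[img_Image] :=
 Module[{faces = FindFaces[img]},
  Show[img, Graphics[{Red, Thick,
     Function[{box}, With[{y = box[[1, 2]] + 0.6 (box[[2, 2]] - box[[1, 2]])},
        Line[{{box[[1, 1]], y}, {box[[2, 1]], y}}]]] /@ faces}]]]
(* e.g. eyelineOverlay[Import[urlOfSomePortrait]] for any portrait URL *)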

Independent of gender and haircut, the 60% height seems to be a good approximation for the eyeline. Of course, not all faces that we encounter in paintings and photographs are perfectly straightened. For tilted heads, the two eyes will not lie on a horizontal line. But on average, the 60% rule works well.

Tilting heads and eyeline

Overall, we see that the eyeline can be located within a few percent of the vertical height of the face rectangle. The error of the resulting estimate of the eyeline height in a painting/photograph should, in most collections, be about ≤2% for a typical ratio of face height to painting/photograph height. Plus or minus 2% is small enough that, for a large enough painting/photograph collection, we can discriminate the golden ratio height 1/ϕ from the rule-of-thirds height 2/3. On the range [0,1], the distance between 1/ϕ and 2/3 is about 5%. (We leave using a specialized eye detection method to determine the vertical position of the eyes for a later blog.)

We start with images of paintings from Wikimedia.

Using the 0.6 factor for the eyeline heights, we get the following distribution of the faces identified. About 12,000 faces were found in 8,000 images. The blue curve shows the probability density of the position of the eyelines of all faces, and the red curve the faces whose bounding rectangles occupy more than 1/12 of the total area of the painting. (While somewhat arbitrary, here and in the following, we will use 1/12 as the relative face rectangle area, above which a face will be considered to be a larger part of the whole image.) We see a clear single maximum at 2/3 from the bottom, as predicted by the ROT. (The two black vertical lines are at 2/3 and 1/ϕ).

Located eyeline across 12,000 faces in 8,000 images from Wikimedia

Because we determine the faces from potentially cropped images rather than ruler-based measurements on the actual paintings, we get some potential errors in our data. As analyzed in the last blog, these effects seem to average out and introduce final errors well under 1% for over 10,000 paintings.

Here are two heat maps: one for all faces, and the other for larger faces only. We place face-enclosing rectangles over each other, and the color indicates the fraction of all faces at a given position. One sees that human faces appear as frequently in the left half as in the right half. To allow comparisons of the face positions of paintings with different aspect ratios, the widths and heights of all paintings were rescaled to fit into a square. The centers of the faces fall nicely into the [2/3,1/ϕ] range. (The Wolfram Language code to generate the PDF and heat map plots is given below.)

Heat maps: one for all faces, one for larger faces only
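
One way to build such a heat map: rescale every face rectangle into the unit square and accumulate counts on a grid. A sketch, where rects is assumed to hold the rescaled face rectangles as corner pairs:

faceHeatMap[rects_List, n_: 200] :=
 Module[{acc = ConstantArray[0., {n, n}]},
  Do[acc[[Max[1, Ceiling[n r[[1, 2]]]] ;; Min[n, Ceiling[n r[[2, 2]]]],
          Max[1, Ceiling[n r[[1, 1]]]] ;; Min[n, Ceiling[n r[[2, 1]]]]]] += 1.,
   {r, rects}];
  ArrayPlot[Reverse[acc], ColorFunction -> "TemperatureMap", Frame -> False]]
(* e.g. faceHeatMap[{{{0.30, 0.45}, {0.55, 0.80}}, {{0.42, 0.50}, {0.60, 0.78}}}] *)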

Here is a short animation showing how the peak of the face distributions forms as more and more paintings are laid over each other.

Repeating the Wikimedia analysis with 4,000 portrait paintings from the portrait collection of the Smithsonian yields a similar result. This time, because we selected portrait paintings from the very beginning, the blue curve already shows a more localized peak.

Located eyeline in 4,000 portrait paintings from the Smithsonian

The British Your Paintings website has a much larger collection of paintings. We find 58,000 paintings with a total of 76,000 faces.

Located eyeline of 76,000 faces in 58,000 paintings in the British Your Paintings

The mean and standard deviation for all eyeline heights is 0.64±0.19, and the median is 0.69.

In the eyeline position/relative face size plane, we obtain the following distribution showing that larger faces are, on average, positioned lower. Even for very small relative face sizes, the most common eyeline height is between 1/ϕ and 2/3.

Eyeline position/relative face size plane

The last image also begs for a plot of the PDF of the relative size of the faces in a painting. The mean area of a face rectangle is 3.9% of the whole painting area, with a standard deviation of 5.5%.

Relative size of the faces in a painting

Here is the corresponding cumulative distribution of all eyeline positions of faces larger than a given relative size. The two planes are at eyeline heights 1/ϕ and 2/3.

Cumulative distribution of all eyeline positions of faces larger than a given relative size

Did the fraction of paintings obeying the ROT or ϕ change over time? Looking at the data, the answer is no. For instance, here is the distribution of the eyeline heights for all nineteenth- and twentieth-century paintings from our dataset. (There are some claims that even Stone Age paintings already took the ROT into account.)

Eyeline heights for all nineteenth- and twentieth-century paintings

As paintings often contain more than one person, we repeat the analysis with the paintings that just have a single face. Now we see a broader maximum that spans the range from 1/ϕ to 2/3.

Eyeline heights in paintings that have a single face

Looking at the binned rather than the smoothed data in the range of the global maximum, we see two well-resolved maxima: one according to the ROT and one according to the golden ratio.

Binned data for eyeline heights

Now that we have gone through all the work to locate the faces, we might as well do something with them. For instance, we could superimpose them. And as a result, here is the average face from 11,000 large faces from nineteenth-century British paintings. The superimposed images of tens of thousands of faces also give us some confidence in the robustness and quality of the face extraction process.

Average face from 11,000 large faces from nineteenth-century British paintings
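
The averaging itself is straightforward. A sketch, assuming the face crops (e.g. from ImageTrim[painting, faceBox]) are collected in faceImages:

averageFace[faceImages_List] :=
 Module[{small = ImageResize[#, {128, 160}] & /@ faceImages},
  ImageAdjust[
   ImageMultiply[Fold[ImageAdd, Image[#, "Real"] & /@ small], 1./Length[small]]]]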

Given a face from a nineteenth-century painting, which (famous) living person looks similar? Using Classify["NotablePerson",…], we can quickly find some unexpected facial similarities of living celebrities to people shown in older British paintings. The function findSimilarNotablePerson takes as the argument the abbreviated URL of a page from the Your Paintings website, imports the painting, extracts the face, and then finds the most similar notable person from the built-in database.

Using functions Classify, NotablePerson, findSimilarNotablePerson matching nineteenth-century faces with current living celebrities

Bob Dylan and Charles Kemble

William Shatner and Reverend William Morris

Mr. T and Sancho Panza

Here is a Demonstration that shows a few more similar pairs (please see the attached notebook to look through the different pairings).

Demonstration with similar pairs

The eyeline heights in newer paintings—more ϕaithful than ROTen

Now let us look at some more modern paintings. We find 15,000 modern portraits at Saatchi. Faces in modern portraits can look quite abstract, but FindFaces is still able to locate a fair number of them. Here are some concrete examples.

Using FindFaces to locate faces in modern portraits at Saatchi
sans titre The portraitist an ordinary person 19

In mozaik / One of us Model Jeanine

Dive into the Question #11 Eden PORTRAIT OF ANTON AT THE AGE OF 10

And here is an array of 144 randomly selected faces in modern art paintings. From a distance, one recognizes human faces, but deviations due to stylistic differences become less visible.

Array of 144 randomly selected faces in modern art paintings

If we again superimpose all faces, we get a quite normal-looking human face. Compared to the nineteenth-century British paintings, the average face has more female characteristics (e.g. a softer jawline and fuller lips). The fact that the average face looks quite “normal” is surprising when looking at the above 12×12 matrix of faces.

Faces from modern paintings superimposed

If we average not with uniform weights, but instead add the color values with random positive and negative weights, we get much more modern-art-like average faces.

Adding all color values and random positive and negative weights

Now concerning the main question of this blog: what are the face positions in these modern portraits? It turns out they follow the golden ratio much more frequently than the ROT. About 30% more paintings have the eyeline at 1/ϕ±1% compared to 2/3±1%.

Face positioning in modern portraits

The mean and standard deviation for all eyeline heights is 0.60±0.16, and the median is 0.62. A clearly lower-centered and narrower distribution.

And if we plot the PDF of the eyeline height versus the relative face size, we clearly see a sweet spot at eyeline height 2/3 and relative face area 1/5. Smaller faces with relative size of about 5% occur higher, at eyeline height about 3/4.

Eyeline height versus relative face size in modern paintings

And here is again the corresponding 3D graphic that shows the 1/ϕ eyeline height for larger relative faces is quite pronounced.

3D graphic 1/ϕ eyeline height for larger relative faces

We should check with another data source to confirm that more modern paintings have a more ϕaithful eyeline. The site Fine Art America offers thousands of modern paintings of celebrities. Here is the average of 5,000 such celebrity paintings (equal numbers of politicians, actors and actresses, musicians, and athletes). Again we clearly see the maximum of the PDF at 1/ϕ rather than at 2/3.

5,000 celebrity paintings from Fine Art America

For individual celebrities, the distribution might be different. Here is a small piece of code that uses some functions defined in the last section to analyze portrait paintings of individual persons.

Code used to analyze portrait paintings of individual persons

Here are some examples. (We used about 150 paintings per person.)

Jimi Hendrix

Mick Jagger

Perhaps unexpectedly, Jimi Hendrix is nearly perfectly ϕaithful, while Mick Jagger seems perfectly ROTen. Obama and Jesus obey nearly exactly the rule of thirds in its classic form.

Obama

Jesus

The eyeline heights in photographs by professional photographers

Now, for comparison with the eyeline positions in paintings, let us look at some sets of photographs and determine the positions of the faces in them. Let’s start with professional portrait photographs. The Getty Images collection is a premier collection of good photographs. In contrast to the paintings, for a random selection of 200,000 portrait photographs, the maximum for large faces is much closer to 2/3 (ROT) than to 1/ϕ.

Eyeline positions in photographs from Getty Image collection

And here is again the distribution in the eyeline height/relative face size plane. For very large relative face sizes, the most common eyeline height even drops below 1/ϕ.

Distribution in the eyeline height/relative face size plane for Getty images

And here is the corresponding heat map arising from overlaying 300,000 head rectangles.

Heat map arising from overlaying 300,000 head rectangles

So what about other photographs, those aesthetically less perfect than Getty Images? The Shutterstock website has many photos. Selecting photos with subjects of various tags, we quite robustly (meaning independent of the concrete tags) see the maximum of the eyeline height PDF near 2/3. This time, we display the results for portraits showing groups of identically tagged people.

These are the eyeline height distributions and the average faces of 100,000 male and female portraits. (The relatively narrow peak in the twin-peak structure of the distribution between 0.5 and 0.55 comes from photos that are close-up headshots that don’t show the entire face.)

Eyeline height distributions and the average faces of 100,000 male and female portraits

Restricting the photograph selection even more, e.g. to over 10,000 photographs of persons tagged with nerd or beard, again shows ROTen-ness.

Eyeline height distributions and the average faces of over 10,000 photographs of persons tagged with nerd or beard

The next two rows show photos tagged with happy or sad.

Eyeline height distributions and the average faces of photographs tagged with happy or sad

All of the last six tag types (male, female, nerd, beard, happy, sad) of photographs show a remarkable robustness of the position of the eyeline maximum. It is always in the interval [1/ϕ,2/3], with a trend toward 2/3 (ROT).

But where are the babies (the baby eyeline, to be precise)? The two peaks are now even more pronounced, with the first peak even bigger than the second—the reason being that many more baby pictures are just close-ups of the baby’s whole face.

Eyeline height on photographs of babies

Next we’ll have a look at the eyeline height PDFs for two professional photographers: Peggy Sirota and Mario Testino. Because both artists often photograph models, the whole human body will be in the photograph, which shifts the eyeline height well above 2/3. (We will come back to this phenomenon later.)

Eyeline height in Peggy Sirota's photographs

Eyeline height in Mario Testino's photographs

The eyeline heights in selfies—maybe too high?

After looking at professionally made photos, we should, of course, also have a look at the pinnacle of modern amateur portraiture: the selfie. (For a nice summary of the history of the selfie, see Saltz. For a detailed study of the increase in selfie popularity over the last three years by nearly three orders of magnitude, see Souza et al.) Using some of the service connections, e.g. the “Flickr” connection, we can immediately download a sample of selfies. Here are five selfies from the last week of September around the Eiffel Tower. Not all images tagged as “selfies” are just faces in close-up.

Selfies from Flickr from around the Eiffel Tower

Every day, more than 100,000 selfies are added to Instagram (one can easily browse them here)—this is a perfect source for selfies. Here are the eyeline height distributions for 100,000 selfie thumbnails.

Eyeline height distributions for 100,000 selfies from Instagram

Compared with the professional photographs, we see that the maximum of the eyeline height distributions is clearly above 2/3 for photos that contain a face larger than 1/12 of the total photo. So the next time you take a selfie, position your face a bit lower in the picture to better obey the ROT and ϕ. (Systematic deviations of selfies from established photographic aesthetic principles have already been observed by Bruno et al.)

The eyeline height in a selfie changes much less with the total face area than it does in professional photographs.

Eyeline height compared to face size in selfies

And again, the corresponding heat map.

Heat map for selfies

Not unexpectedly, because the camera-to-face distance in a selfie is bounded by the finite length of the human arm or of a typical telescopic selfie stick, about one meter, the total area of the faces in selfies has a pronounced maximum. So selfies with very small faces are scarcer than photographs or paintings with small faces.

Total area of the faces in selfies

What does the average selfie face look like? The left image is the average over all faces, the middle image the average over all male faces, and the right image the average over all female faces. (Genders were heuristically determined by matching the genders associated with a given name to user names.) The fact that the average selfie looks female arises because a larger number of selfies are of female faces. This was also found in the recent study by Manovich et al.

Average of all selfie faces (left), average of male selfie faces (middle), average of female selfie faces (right)

Now, it could be that the relative height of the eyeline depends on the concrete person portrayed. We give the full code in case the reader wants to experiment with people not investigated here. We measure eyeline heights in images from the Getty website, tagged with the keywords specified in the function positionSummary.

Full code for determining eyeline height
Full code for determining eyeline height
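
The core of it fits in a few lines. A stripped-down sketch (not the post’s actual positionSummary), again using the 60% rule and corner-pair face boxes:

eyelineHeights[imgs_List] := Flatten[Function[img,
    With[{h = ImageDimensions[img][[2]]},
     (#[[1, 2]] + 0.6 (#[[2, 2]] - #[[1, 2]]))/h & /@ FindFaces[img]]] /@ imgs];
SmoothHistogram[eyelineHeights[portraits], PlotRange -> {{0, 1}, All},
 GridLines -> {{1/GoldenRatio, 2/3}, None}] (* portraits: a list of imported photos *)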

Now it takes just a minute to get the average eyeline height of people seen in the news; each of the following results is based on analyzing 600 portrait shots of Lady Gaga, Taylor Swift, Brad Pitt, or Donald Trump. Lady Gaga’s eyeline is, on average, clearly higher, quite similar to typical selfie positions. On the other hand, Taylor Swift’s eyeline peaks at the modern-painting-like maximum at 1/ϕ.

Lady Gaga

Taylor Swift

Brad Pitt

Donald Trump

Many more types of photographs could be analyzed. But we end here and leave further exploration and more playtime to the reader.

LinkedIn profile photos—men seem to be more ϕaithful

Many LinkedIn profile pages have photographs of the page owners. These photographs are another data source for our eyeline height investigations. Taking 25,000 male and 25,000 female profile photos, we obtain the following results. Because the vast majority of LinkedIn photographs are close-up shots, the curve for faces occupying more than 1/12 of the whole area is quite similar to the curve of all faces, and so we show only the distribution of all faces. This time, the yellow curve shows all faces that occupy between 10% and 30% of the total area.

Here are the eyeline height PDF, the bivariate PDF, and the average face for 10,000 male members from LinkedIn. Based on the frequency of male first names in the US, Bing image searches restricted to the LinkedIn domain were carried out, and the images found were collected.

Eyeline height PDF, bivariate PDF, and the average face for male members

Interestingly, the global maximum of the eyeline height distribution occurs clearly below 1/ϕ, the opposite of the effect seen in the selfies analyzed above. The center graphic shows the distribution of the eyeline height as a function of the face area. Its global maximum appears at a face area of 1/5 and an eyeline height quite close to 1/ϕ. This means the lower global maximum of the left graphic is mostly caused by photographs whose face rectangles occupy more than 30% of the total area. The most typical LinkedIn photograph has a face rectangle occupying 1/5 of the total area and an eyeline height at 1/ϕ.

The corresponding distribution over all female US first names is quite similar to the male curve. But for faces that occupy a larger fraction of the image, the female distribution is visibly different: the average eyeline height in these photos of women on LinkedIn is a few percent lower than in the corresponding male curve.

Eyeline height PDF, bivariate PDF, and the average face for female members

With the large number of members on LinkedIn, it even becomes feasible to look at the eyeline height distribution for individual names. We carry out facial profiling for three first names: Josh, Raj, and Mei. Taking 2,500 photos for each name, we obtain the following distributions and average faces.

Eyeline height distribution for Josh, Raj, and Mei

The distributions agree quite well with the corresponding gender distributions above.

After observing the remarkable peak of the eyeline height PDF at 1/ϕ, I wondered which of my Wolfram Research and Wolfram|Alpha coworkers obey the ϕaithful rule. And indeed, I found that more of my male coworkers than female coworkers have the 1/ϕ eyeline height. Not unexpectedly, our design director is among the ϕaithful. The next input imports photos from the LinkedIn pages of other Wolfram employees and draws a red line at height 1/ϕ.

Eyeline height distribution for Wolfram Research employees

Let us compare the peak distribution with the one from the current members of Congress. We import photos of all members of Congress.

Importing photos of members of Congress

Here are some example photos.

Photos of members of Congress

Similar to the LinkedIn profile photos, the maximum of the eyeline PDF is slightly lower than 2/3. We also show the face of the average member of Congress.

Eyeline height distribution, heat map, and average face for members of Congress

Weekly magazine covers—tending to be ϕaithful over the last three decades

After having analyzed the face positions in amateur and professional photographs, a natural next area for exploration is magazine covers: their photographs are carefully made, selected, and placed. TIME magazine maintains a special website for its 4,800 covers, spanning more than ninety years of published issues. (For a quick view of all covers, see Manovich's cover analysis from a few years ago.)

It is straightforward to download the covers, and then find and extract the faces.

Downloading TIME magazine covers and extracting the faces

These are the two resulting distributions for the eyelines.

Eyeline distributions for faces on TIME magazine covers

The maximum occurs at a height smaller than 1/2. This is mostly caused by the title “TIME” at the top of the cover; newer editions have partial overlaps between the magazine title and the image. The following plot shows the yearly average of the eyeline height over time. Since the 1980s, there has been a trend toward higher eyeline positions on the cover.

Yearly average of eyeline height over time

If we calculate the PDFs of the eyeline positions of all issues from the last twenty-five years, we see quite a different distribution with a bimodal structure. One of the peaks is nearly exactly at 1/ϕ.

Eyeline height positions of all issues in the last 25 years

And here are the average faces per decade. We also see that the covers of the first two decades were in black and white.

Average faces per decade

For a second example, we will look at the German magazine SPIEGEL. It is again straightforward to download all the covers, locate the faces, and extract the eyelines.

Downloading covers and extracting faces from SPIEGEL

Again, because of the title text “SPIEGEL” at the top of the cover, the maximum of the PDF of the eyeline height occurs at a relatively low height (≈0.56).

Eyeline height distribution for SPIEGEL magazine covers

A heat map of the face positions shows this clearly.

Heat map for SPIEGEL magazine covers

Taking into account both that the magazine title “SPIEGEL” is typically 13% of the cover height and that there is whitespace at the bottom, the renormalized peak of the eyeline height is nearly exactly at 1/ϕ.

Average faces by decade from SPIEGEL covers

For a third, not-so-politically-oriented magazine, we chose the biweekly Rolling Stone. It too has a collection of its covers (through 2013) online. The eyeline height distribution is again bimodal, with the largest peak at 1/ϕ. So Rolling Stone is a ϕaithful magazine.

Eyeline height distribution for Rolling Stone magazine

By year, the average eyeline height shows some regularities within an eight-year period.

Average eyeline height for Rolling Stone magazine

The cumulative mean of the eyeline heights is very near 1/ϕ, and the average through 2013 deviates by only 0.4% from 1/ϕ.

Cumulative mean of eyeline heights from Rolling Stone magazine

To parallel the earlier two magazines, here are the averaged faces by decade.

Average faces by decade from Rolling Stone magazine

Comic book covers—where are the eyelines of the superheroes?

Comic book covers are another fairly large source of images to analyze. The Comic Book Database has a large collection of covers. Here we restrict ourselves to Marvel Comics and DC Comics, totaling about 72,000 covers. Because comics are drawn rather than photographed, recognizing faces is now a harder job. But even so, we successfully extract about 90,000 faces.

Here are our typical characterizations (eyeline height PDF, face position heat map, average face) for Marvel Comics.

Eyeline height PDF, face position heat map, average face for Marvel

And the same for DC Comics.

Eyeline height PDF, face position heat map, average face for DC Comics

All three characteristics show remarkable consistency between the two comic publishers.

Daily newspapers, fashion magazines, …—where are the eyelines now?

Many more collections of faces can now be investigated for eyeline positions. It is straightforward to write a small crawler function that starts at a given website and extracts images as well as links to pages with more images, as in the sketch below. (This is just a straightforward implementation; many optimizations, such as parallel retrieval, could be added to improve this function.)

Extracting images from a given website together with links to pages with more images
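Here is one possible minimal version of such a crawler. It is sequential and does no politeness handling; the Import elements "Images" and "Hyperlinks" do the heavy lifting.

(* breadth-first crawl: collect images and follow hyperlinks, up to maxPages pages *)
crawlImages[start_String, maxPages_Integer] :=
 Module[{queue = {start}, visited = {}, images = {}, url},
  While[queue =!= {} && Length[visited] < maxPages,
   url = First[queue]; queue = Rest[queue];
   If[! MemberQ[visited, url],
    AppendTo[visited, url];
    images = Join[images, Replace[Quiet[Import[url, "Images"]], Except[_List] -> {}]];
    queue = Join[queue, Replace[Quiet[Import[url, "Hyperlinks"]], Except[_List] -> {}]]]];
  images]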

For example, here is the resulting average data for all images (larger than 200 pixels) from The New York Times website from February 8, 2016. The eyeline PDF maximum is between 2/3 and 1/ϕ.

Eyeline height distribution, heat map, and average face from February 8, 2016 on The New York Times website

And here, from the weekly German newspaper Die Zeit. This time, the eyeline maximum is clearly at 2/3 for larger faces.

Eyeline height distribution, heat map, and average face from Die Zeit

Here is a snapshot of 1,000 images from CNN.

Eyeline height distribution, heat map, and average face from 1,000 photos from CNN

The eyeline heights in fashion magazines show a totally different distribution. Here are the results of 1,000 images from Vogue. Because many images on the site show stylishly dressed models from head to toe, the head is small and the eyeline very high in the images. As a result, we get the strong, narrow peak of the blue curve.

Eyeline height distribution, heat map, and average face from 1,000 images from Vogue

GQ Magazine also shows a global eyeline height peak at 2/3 for large faces.

Eyeline height distribution, heat map, and average face from GQ Magazine

The maximum of the eyeline in the magazine People is again at 2/3 for large faces.

Eyeline height distribution, heat map, and average face from People

And here are the results for Ebony magazine. This time, the large face eyeline height has a peak at 1/ϕ.

Eyeline height distribution, heat map, and average face from Ebony

Using a bodybuilding magazine, as with the Vogue images, we see a very high eyeline, again because whole-body images are often shown. The average face looks different from the previous averages.

Eyeline height distribution, heat map, and average face from Flex magazine

We obtain a softer-looking face with an eyeline maximum greater than 2/3 from Allure magazine.

Eyeline height distribution, heat map, and average face from Allure magazine

And goths from the Gothic Beauty magazine are on average ROTen, but large goths are more ϕaithful.

Eyeline height distribution, heat map, and average face from Gothic Beauty

The magazine 20/20 specializes in glasses. Not unexpectedly, the average face shows pronounced sunglasses, and the eyeline height is greater than 2/3.

Eyeline height distribution, heat map, and average face from 20/20

Movie posters—the eyelines of film stars

Movie posters are a good-sized source of a wide variety of drawn and photographed images. The site Movie Posters has 35,000 posters going back to the 1920s.

Cumulative distribution for movie posters

More interesting is a plot of the mean over time. Before the 1980s, eyelines were mostly in the center of posters. Since then, the average eyeline position has been more in the interval [1/ϕ, 2/3].

Eyeline height over time in movie posters

The shift in average eyeline height in movie posters is even more clearly visible in the corresponding face heat maps.

Face heat maps from movie posters

Here is the average face from all movie posters from the last five years.

Average face from all movie posters

Movies—the eyelines in motion picture frames

In the last blog post, we ended with plots of the evolution of the average movie aspect ratio, so this time we will also end by analyzing some movies. The Internet Archive has a collection of 20,000 movies that are available for download. We will look at the face positions in two well-known classics: Buster Keaton's The General from 1926 and Fritz Lang's Metropolis from 1927. We start with The General. The average of all faces (without taking size into account) is at 2/3, and the large faces clearly appear lower.

Eyeline distribution for faces in The General

Not every frame of a movie contains faces, so it is natural to ask if the mean (windowed) eyeline height changes as the movie progresses. Here is a different kind of heat map that shows the mean eyeline height over time. The colors indicate the number of frames that contain identified faces.

Heat map of mean eyeline height over time in The General
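A sketch of how such per-frame data can be collected in current Wolfram Language versions (VideoFrameList requires version 12.1 or later); the file name is a placeholder, and eyelineHeights is the helper sketched earlier in this post.

(* sample 500 equally spaced frames and measure the eyeline heights in each *)
frames = VideoFrameList[Video["TheGeneral.mp4"], 500];
eyelines = eyelineHeights /@ frames;
(* windowed mean over the frames that contain faces *)
ListLinePlot[MovingAverage[Mean /@ DeleteCases[eyelines, {}], 10]]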

Because the main character in the film moves a lot, the heat map of the face position now has much more structure compared to the above heat maps of photographs and paintings.

Heat map for The General

Fritz Lang’s Metropolis, although made only one year after The General, was shot in quite a different style. Just by quickly zooming through the movie, one observes that the majority of faces appear at a much larger height. This impression is confirmed by the actual data about the eyeline positions.

Heat map of mean eyeline height over time in Metropolis

The PDF of all eyeline positions shows that especially large faces appear high in the frames.

Eyeline distribution in Metropolis

We compare with a modern TV series production—episode nine of season nine of The Big Bang Theory, “The Platonic Permutation”. Most faces appear above the 2/3 height.

Heat map of mean eyeline height over time in The Big Bang Theory

But the PDF of the eyeline position of larger faces peaks very near to 2/3, and the average face shows characteristic facial features of the show’s main characters.

Eyeline distribution for The Big Bang Theory

Or, for a very recent example, here is the PDF of episode one of Amazon’s recent The Man in the High Castle. The peak of the eyeline of larger faces is nearer to 1/ϕ than to 2/3.

Eyeline position for The Man in the High Castle

We end with a third TV series example, episode eight of season six of The Walking Dead. For larger faces, we see a well-pronounced bimodal eyeline height distribution, with the two maxima at 1/ϕ and 2/3.

Eyeline height distribution and average faces from The Walking Dead

Findings

In this second part of our explorations of the golden ratio in the visual arts, we looked at the height of the eyeline of human faces and at the face position. Using the function FindFaces and approximate rules for locating the eyeline within a face, we computed eyeline heights for more than a million faces and averaged the results.

The maxima of the eyeline height distribution for photographs and paintings are predominantly in the range of 0.6 to 0.67. Older paintings and modern photographs have maxima near 2/3, as the rule of thirds predicts (demands). Interestingly, modern art portraits show the eyeline height PDF peak at 1/golden ratio for large faces. (We used >1/12 of the total area to define “large” faces.) The peak eyeline position in selfies is about 0.7, higher than in paintings and many professional photographs. The magazine covers we analyzed, especially those of the past few decades, seem to have a peak of the eyeline position PDF at 1/golden ratio. Similarly, the photos from various newspaper sites show a peak at 1/golden ratio. For LinkedIn photos, clear gender differences in the position of the eyeline height were found; men turned out to be more ϕaithful. And the analyzed movies show that faces, especially smaller ones, quite often appear significantly above the 2/3 height. Modern TV series show peaks at either 1/golden ratio or 2/3, or even both simultaneously.

Download this post as a Computable Document Format (CDF) file.

Aspect Ratios in Art: What Is Better Than Being Golden? Being Plastic, Rooted, or Just Rational? Investigating Aspect Ratios of Old vs. Modern Paintings http://blog.wolfram.com/2015/11/18/aspect-ratios-in-art-what-is-better-than-being-golden-being-plastic-rooted-or-just-rational-investigating-aspect-ratios-of-old-vs-modern-paintings/ http://blog.wolfram.com/2015/11/18/aspect-ratios-in-art-what-is-better-than-being-golden-being-plastic-rooted-or-just-rational-investigating-aspect-ratios-of-old-vs-modern-paintings/#comments Wed, 18 Nov 2015 20:17:16 +0000 Michael Trott http://blog.internal.wolfram.com/?p=28454 Paintings of the great masters are among the most beautiful human artifacts ever produced. They are treasured and admired, carefully preserved, sold for hundreds of millions of dollars, and, perhaps not coincidentally, are the prime target of art thieves. Their composition, colors, details, and themes can fascinate us for hours. But what about their outer shape—the ratio of a painting’s height to its width?

In 1876, the German scientist Gustav Theodor Fechner studied human responses to rectangular shapes, concluding that rectangles with an aspect ratio equal to the golden ratio are most pleasing to the human eye. To validate his experimental observations, Fechner also analyzed the aspect ratios of more than ten thousand paintings.

We can find out more about Fechner with the following piece of code:

Using WikipediaData to learn more about Fechner

By 1876 standards, Fechner did amazing work, and we can redo some of his analysis in today’s world of big data, infographics, numerical models, and the rise of digital humanities as a scholarly discipline.

After a review of the golden ratio and Fechner’s findings, we will study the distribution of the height/width ratios of several large painting collections and the overall distribution, as well as the most common aspect ratios for paintings. We will discover that the trend over the last century or so is to become more rationalist.

Prelude: The golden ratio, a beautiful construction in mathematics

The golden ratio ϕ=(1+√5)/2≈1.618033988… is a special number in mathematics. Its base-2 and base-10 digit sequences look more or less random:

Golden ratio
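This is immediately reproducible; for instance:

N[GoldenRatio, 30]
(* 1.61803398874989484820458683437 *)
RealDigits[GoldenRatio, 2, 20]
(* {{1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1}, 1} *)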

Its continued fraction representation is as simple and beautiful as a mathematical expression can get:

Continued fraction representation
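For instance:

ContinuedFraction[GoldenRatio, 12]
(* {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1} *)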

Or, written more explicitly:

Golden ratio written explicitly

Another similar form is the following iterated square root:

Golden ratio as iterated square root

Although it is just a simple square root expression, mathematically the golden ratio is a special number. For instance, it is the maximally badly approximable irrational number:

Maximally badly approximate irrational number

Here is a graphic showing the sequence q·|qϕ−round(qϕ)|. The value of the sequence terms is always larger than 1/√5:

Graphic showing sequence
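One way to reproduce such a graphic (the number of terms and the plot range are arbitrary choices):

DiscretePlot[N[q Abs[q GoldenRatio - Round[q GoldenRatio]]], {q, 1, 200},
 GridLines -> {None, {1/Sqrt[5]}}, PlotRange -> {0, 0.6}]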

Furthermore, we can show the approximation to the golden ratio that one obtains by truncating the continued fraction expansion:

Approximation to the golden ratio by truncating the continued fraction expression

A visualization of the defining equation 1+1/ϕ=ϕ is the ratio of the length of the following line segments:

Visualization of the defining equation
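The defining equation is easily verified symbolically:

Simplify[1 + 1/GoldenRatio == GoldenRatio]
(* True *)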

Here are a wide and a tall rectangle with aspect ratios equal to the golden ratio and 1/(golden ratio):

Wide and tall rectangle with aspect ratio golden ratio and 1/(golden ratio)

Not surprisingly, this mathematically beautiful number has been used to generate aesthetically beautiful visual forms, a practice with a long history: the golden ratio was already described mathematically by Euclid, and da Vinci made famous drawings based on it.

The Wolfram Demonstrations Project has more than 90 interactive Manipulates that make use of the golden ratio. See especially Mona Lisa and the Golden Rectangle and Golden Spiral.

Mona Lisa and the Golden Rectangle

The golden ratio is also prevalent in nature. Its angle version is the so-called golden angle, which splits the circumference of a circle into two arcs whose lengths have a ratio equal to the golden ratio:

Golden angle

The golden angle in turn appears, for instance, in phyllotaxis models:

Golden angle in phyllotaxis models
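A classic minimal version of such a phyllotaxis pattern places the nth point at radius √n and angle n times the golden angle:

With[{goldenAngle = 2 Pi (2 - GoldenRatio)},  (* ≈ 137.5° *)
 Graphics[Table[Point[Sqrt[n] {Cos[n goldenAngle], Sin[n goldenAngle]}], {n, 600}]]]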

For a long list of occurrences of the golden ratio in nature and in manmade products, see M. Akhtaruzzaman and A. Shafie.

However, the universality of the golden ratio in art is often overstated. For some common myths, see Markowsky’s paper.

Later, we will also encounter the square root of the golden ratio. If we allow complex numbers, another quite simple continued fraction yields the square root of the golden ratio as a natural ingredient of its real and imaginary parts:

Square root of the golden ratio as natural ingredient of real and imaginary parts

The name golden ratio seems to go back to Martin Ohm, the younger brother of the well-known physicist Georg Ohm, who used the term for the first time in a book in 1835.

Fechner’s 1876 work on rectangle preferences and painting aspect ratios

In volume 1 of the oft-quoted work Vorschule der Aesthetik (1876), Gustav Theodor Fechner—physicist, experimental psychologist, and philosopher—discusses the relevance of the golden ratio to human perception.

Today, Fechner is probably best known for the subjective sensation law jointly named after him, the Weber–Fechner law:

Weber-Fechner law
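The law states that the perceived intensity S of a sensation grows only logarithmically with the physical stimulus intensity I: S = k log(I/I₀), where I₀ is the threshold stimulus.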

In chapter 14.3 of volume 1 of his book, Fechner discusses the aesthetics of the shape (aspect ratio) of rectangles. He carried out experiments with 347 subjects, each shown 10 rectangles of different aspect ratios; the rectangle most often judged pleasing was the one with an aspect ratio of 34/21, which deviates from the golden ratio by less than 0.1%. Here is the still-often-cited but rarely reproduced table of Fechner's results:

Fechner's results

Chapter 33 in volume 2 discusses the sizes of paintings, and Chapter 44 of volume 2 contains a forty-one-page detailed analysis of 10,558 total images from 22 European art galleries. Interestingly, Fechner found that the typical ratio of painting heights and widths clearly deviated from the “expected” golden ratio.

Fechner carried out a detailed analysis of 775 hunting and war paintings, and a coarser analysis of the remaining 9,783 paintings. Here are the results for hunting and war paintings (Genre), landscapes (Landschaft), and still life (Stillleben) paintings. In the table, h indicates the painting's height and b its width, and V.-M. is the ratio h/b or b/h:

Results for hunting and war paintings, landscapes, and still life paintings

Here in the twenty-first century, we can repeat this analysis of the aspect ratios of paintings.

For detailed discussions and modified versions of Fechner's experiments with humans, see the works of McManus (here and here), McManus et al., Konecni, Bachmann, Stieger and Swami, Friedenberg, Ohta, Russel, Green, Davis and Jahnke, Phillips et al., and Höge. Jensen recently analyzed paintings from the CGFA database, but the discretized height and width values used (obtained from the pixel counts of the images) did not allow resolution of the fine-scale structure of the aspect ratios, especially the occurrence of multiple, well-resolvable maxima. (See below for the analysis of a test set of images.)

While Fechner did a detailed analysis of quantitative invariants (e.g. mean, median) of the aspect ratios of paintings, he studied neither the overall shape of the aspect ratio distribution nor the positions of its local maxima.

An easy start: analyzing entities from the “Artwork” domain of the Wolfram Knowledgebase

One of the knowledge domains in EntityValue is “Artwork”. Here we can retrieve the names, artists, completion dates, heights, and widths of a few thousand paintings. Paintings are conveniently available as an entity class in the “Artwork” domain of the Wolfram Knowledgebase:

Paintings in the Artwork domain of the Wolfram Knowledgebase
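A minimal sketch of such a retrieval; the class and property names ("Painting", "Height", "Width") are assumptions about the knowledgebase schema:

paintings = EntityList[EntityClass["Artwork", "Painting"]];
dims = EntityValue[paintings, {"Height", "Width"}];
(* keep only paintings with both dimensions and form height/width ratios *)
ratios = Cases[dims, {h_Quantity, w_Quantity} :> QuantityMagnitude[h/w]];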

Here is a typical example of the retrieved data:

Example of retrieved data

Paintings come in a wide variety of height-to-width aspect ratios, ranging from very wide to quite tall. Here is a collage of 36 thumbnails of the images ordered by their aspect ratio. Each thumbnail of a painting is embedded into a gray square with a red border:

Images ordered by their aspect ratio
Images ordered by their aspect ratio

The majority of the paintings have aspect ratios between 1/4 and 4. Here are some examples of quite wide and quite tall paintings:

Examples of wide and tall paintings

We can get an idea about the most common topics depicted in the paintings by making a word cloud of words from the titles of the paintings:

WordCloud from titles of paintings
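A one-line sketch, assuming titles holds the painting names as strings:

WordCloud[DeleteStopwords[StringRiffle[titles, " "]]]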

Now that we have downloaded all the thumbnails, let's play with them. Considering their colors, we can embed the average of all pixel colors of each image thumbnail in a color triangle:

Embedding the average value of all pixel colors of the image thumbnails in a color triangle

Before analyzing the aspect ratios h/b in more detail, let's have a look at the product h·b, which is to say the area of the painting. (Fechner's aforementioned work devoted a lot of attention to the natural area of paintings too.)

We show all paintings in the aspect ratio area plane. Because paintings occur in greatly different sizes, we use a logarithmic scale for the areas (vertical axis). We also add a tooltip for each point to see the actual painting:

Tooltip for each point to see the painting

And here is a histogram of the distribution of the height/width aspect ratios.

From now on, following the Wolfram Language definition of AspectRatio, I will use the definition aspect ratio=height/width rather than the sometimes-used aspect ratio=width/height. As we saw above, this also matches Fechner's convention.

Histogram of the distribution of the height width aspect ratios

Now let’s analyze the histogram of the aspect ratios in more detail. Qualitatively, we see a trimodal distribution. For wide paintings (width>height) we have an aspect ratio less than 1, for square paintings we have an aspect ratio of about 1, and for tall paintings (height>width) we have an aspect ratio greater than 1. The tall and the wide paintings both have a global peak, and some smaller local peaks are also visible.

The trimodal structure for wide, square, and tall paintings was to be expected. Two natural questions arise when looking at the above distribution:
1) What are the positions of the local peaks?
2) What is the approximate overall shape of the distribution (normal, lognormal, …)?

In 1997, Shortess, Clarke, and Shannon analyzed 594 paintings and took a closer look at where the maximum of the distribution occurs. In agreement with Fechner's 1876 work, they found that 1.3 seems to be the location of the maximum of the distribution of max(h/b, b/h). Again, 1.3 is clearly different from the golden ratio, and the authors suggest either the Pythagorean number (4/3) or the so-called plastic constant as the possible exact value for the maximum.

The plastic constant is the positive real solution of x³-x-1=0:

Plastic constant as the positive real solution of x^3-x-1=0
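This root can be written down directly:

Root[#^3 - # - 1 &, 1]
N[%, 20]
(* 1.3247179572447460260 *)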

The plastic constant was introduced by Dom Hans van der Laan in 1928 as a special number with respect to human aesthetics for 3D (rather than 2D) figures. If explicitly expressed in radicals, the plastic constant ℘ has a slightly complicated form:

Plastic constant expressed in radicals

The resolution of the graphs from the 594 analyzed paintings was not enough to discriminate between ℘ and 4/3, and as a result, Shortess, Clarke, and Shannon suggest that the maximum of the painting ratios occurs at the “platinum constant,” a constant whose numerical value is approximately 1.3. Their paper also did not resolve any fine-scale structure of the height/width distribution. (Note: this “platinum constant” is unrelated to the so-called “platinum ratio” used in numerical analysis.)

(There is an interesting mathematical relation between the golden ratio and the plastic constant: the golden ratio is the smallest accumulation point of Pisot numbers, and the plastic constant is the smallest Pisot number; but we will not elaborate on this connection here.)

If we use a smaller bin size for the bins of the histogram, at least two maxima for both tall and wide paintings become visible:

Two maxima visible for tall and wide paintings in histogram

If we show the cumulative distribution function, we see that the absolute number of paintings that are square is pretty small. The square paintings correspond to the small vertical step at aspect ratio=1:

Showing cumulative distribution function

Next, let us take all tall paintings and show the inverse of their aspect ratios together with the aspect ratios of the wide paintings. The two global maxima at about 0.8 map reasonably well into each other, and so do the secondary maxima at about 0.75:

Inverse aspect ratios of tall paintings with aspect ratios of wide paintings

Graphing smoothed distributions of the aspect ratios of wide paintings and the inverse of the aspect ratios for tall paintings shows how the maxima map into each other:

Graphing smoothed distributions of the aspect ratios of wide paintings and the inverse of the aspect ratios for tall paintings

A quantile plot shows the similarity of the distributions for wide and tall paintings under inversion of the aspect ratios:

Quantile plot showing similarity of distributions

Will it be possible to resolve the maxima numerically and associate explicit numbers with them? Here are the above-mentioned constants and three further constants: the square root of the golden ratio, 5/4, and 6/5:

Square root of the golden ratio, 5/4, and 6/5

Among all possible constants, we added the square root of the golden ratio because it appears naturally in the so-called Kepler triangle, whose side lengths have the ratio 1 : √(golden ratio) : golden ratio:

Kepler triangle

The Pythagorean theorem connects the Kepler triangle to the square root of the golden ratio: applied to the triangle's sides, it reproduces the defining equation of the golden ratio, 1+ϕ=ϕ²:

Pythagorean theorem on the Kepler triangle yields the defining equation of the golden ratio
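The identity is easily checked:

(* Pythagorean theorem on the Kepler triangle with sides 1, Sqrt[GoldenRatio], GoldenRatio *)
Simplify[1^2 + Sqrt[GoldenRatio]^2 == GoldenRatio^2]
(* True *)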

Shortess et al. included 4/3 as the Pythagorean constant because this number is the ratio of the two smaller edges of the smallest Pythagorean triangle, with edge lengths 3, 4, 5 (3²+4²=5²).

And the rational 6/5 was included because, as we will see later, it often occurs as an aspect ratio of paintings in the last 200 years.

The distribution of the painting aspect ratios together with the selected constants shows that the largest peak seems to occur at √(golden ratio) and a second, smaller peak at 1.32…1.33.

Here is a list of constants that could represent the positions of the maxima. We will use this list repeatedly in the following to compare the aspect ratio distributions of various painting collections. Let's start with some visualizations of these aspect ratios:

List of constants that could represent the positions of the maxima

The next graph shows the six constants on the number line. The difference between the plastic constant and 4/3 is the smallest among all pairs of the six selected constants:

Six constants on the number line
Six constants on the number line

Here are wide rectangles with aspect ratios of the selected constants:

Wide rectangles with aspect ratios of selected constants

And for better comparison, the next graphic shows the six rectangles laid over each other:

Six rectangles laid over each other

And here is the above graphic overlaid with the positions of the constants at the horizontal axis:

Graphic overlaid with the positions of the constants at the horizontal axis

Various other fractions with small denominators will be encountered in selected painting datasets below, and various alternative rationals could be included based on aesthetically pleasing proportions of other objects, such as 55/45=11/9=1.2̅ (see here, here, here, and here) or 27/20=1.35 or the so-called “meta-golden ratio chi,” the solution of χ²−χ/ϕ=1, with value 1.35…

Because the resolution of a histogram is a bit limited, let us carefully count the number of paintings whose aspect ratio lies within a small window around a given value. To do this efficiently, we form a Nearest function (a minimal version is sketched below):

Forming a Nearest function
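A minimal sketch of this counting approach, assuming ratios is the list of all aspect ratios:

nf = Nearest[ratios];
(* number of paintings with aspect ratio within ±window of r *)
countNear[r_, window_ : 0.01] := Length[nf[r, {All, window}]]
ListLinePlot[Table[{r, countNear[r]}, {r, 1.1, 1.5, 0.002}]]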

Again, we clearly see two well-separated maxima, the larger one nearer to the square root of the golden ratio than to the plastic constant or the Pythagorean number:

Plot showing two well-separated maxima

Interlude I: Features of the probability distribution of aspect ratios

Before looking at more painters and paintings, let’s have a more detailed look at the distribution of the aspect ratios.

The most commonly used means are all larger than the position of the maximum for tall paintings:

Most commonly used means are all larger than the tallest maximum for tall images

Here are the means for the wide paintings:

Means for the wide paintings

What is the ratio of taller to wider paintings? Interestingly, we have nearly exactly as many tall paintings as wide paintings:

Ratio of taller to wider paintings

The paintings viewed as rectangles (meaning the aspect ratios max(height, width)/min(height, width)) have means that are very similar to those of the tall paintings:

Averages of paintings viewed as rectangles have means similar to tall paintings

As above in the plot of the two overlaid histograms, the distribution of tall paintings agrees nearly exactly with the distribution of wide paintings when we invert the aspect ratio. But what is the actual distribution for tall (or all) paintings (question 2 from above)? If we ignore the multiple peaks and take a more coarse-grained view, we can try to fit the distribution of the tall paintings with various named probability distributions, e.g. a normal, lognormal, or heavy-tailed distribution.

We restrict ourselves to paintings with aspect ratios less than 4 to avoid artifacts in the fitting process due to outliers:

Restricting to paintings with aspect ratios less than 4

Using SmoothKernelDistribution allows us to smooth over the multiple maxima and obtain a smooth distribution (shown on the left). A log-log plot of the hazard function f(a)/(1−F(a)) together with the function 1/a gives a first hint that a heavy-tailed distribution will be the best approximation:

Using SmoothKernelDistribution
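A sketch of this step, assuming tallRatios is the list of aspect ratios of the tall paintings:

dist = SmoothKernelDistribution[tallRatios];
LogLogPlot[{HazardFunction[dist, a], 1/a}, {a, 1, 4},
 PlotLegends -> {"hazard function", "1/a"}]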

Here are fits with a normal and a lognormal distribution:

Fits with normal and lognormal distribution

And here are some heavy-tailed distributions:

Heavy-tailed distributions
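The fits themselves can be obtained with EstimatedDistribution; the parameter symbols below are placeholders:

candidates = {NormalDistribution[mu, sigma], LogNormalDistribution[mu, sigma],
   BetaPrimeDistribution[p, q], StableDistribution[1, alpha, beta, mu, sigma]};
fits = EstimatedDistribution[tallRatios, #] & /@ candidates;
(* compare the fits by their log-likelihoods *)
LogLikelihood[#, tallRatios] & /@ fits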

As the height/width ratios have a slowly decaying tail, the normal, lognormal, and extreme value distributions are a poor fit. The range of aspect ratios between about 1.4 and 2 shows this most clearly:

Range of aspect ratios for normal, lognormal, and extreme value distribution

The four heavy-tailed distributions show a much better overall fit:

Showing heavy-tailed distributions

If we quantify the fit using a log-likelihood ratio statistic, we see that the truncated heavy-tailed distributions provide the best fits:

Quantifying the fit using a log-likelihood ratio statistic

The distribution of the aspect ratios has a curious property: we saw above that the distributions of the wide and tall paintings match after an appropriate mapping, which means their maxima agree, at least approximately. Mapping the distribution p(x) of tall paintings (1<x<∞) to the distribution p̄(x) of wide paintings (0<x<1) gives p̄(x)=p(1/x)/x². Because of the factor 1/x², the maximum x̄_max of p̄(x) does not have to lie exactly at the inverse of the maximum x_max of p(x); yet empirically we observe x̄_max≈1/x_max. Interestingly, for the parameters found in the stable distribution fit, this property is fulfilled to within two percent. Here we quantify the difference in maxima positions for the beta prime distribution. (The results for the stable distribution are nearly identical.)

Quantifying this difference in maxima position for the beta prime distribution

The aspect ratio through the ages, for movements and painters

Now, a natural question is: how reproducible is the trimodal distribution across the ages, across painting genres, and across artists?

Let’s look at time dependence by grouping all aspect ratios according to the century in which the paintings were completed. We see that at least since the fourteenth century, tall paintings have frequently had an aspect ratio of about 1.3, wide paintings an aspect ratio of about 0.76, and that square paintings became popular only relatively recently. We also see that for tall paintings the distribution is much flatter in the sixteenth, seventeenth, and eighteenth centuries as compared with the nineteenth century (we will see a similar tendency in other painting datasets later):

Time dependence: aspect ratios grouped by the century in which the paintings were completed

The median of the aspect ratios of all paintings has decreased over the last 500 years and is now slightly higher than 1.3. (Here we define “aspect ratio” as the ratio of the length of the longer side to the length of the shorter side.) The mean has also decreased and seems to stabilize slightly above 1.35:

Showing mean and median over 500 years

For comparison, here are the distributions of the paintings’ areas (in square meters) over the centuries:

Distributions of the painting's areas over the centuries

The median area of paintings has been remarkably stable at a value slightly above 2 square meters over the last 450 years:

Median area of paintings over 450 years

What about the aspect ratios across artistic movements? WikiGallery has visually appealing pages about movements. We import the page and get a listing of movements and how many paintings are covered in each movement:

Import page from WikiGallery with how many paintings are covered in each movement

But unfortunately, width and height information is available for only a fraction of the paintings. Importing all individual painting pages and extracting the height and width data from the size of the thumbnail images allows us to make at least some quantitative histograms about the distribution of the aspect ratios for each movement.

The overwhelming majority of movements again show strong bimodal distributions, with aspect ratio peaks around 1.3 and 0.76. (The movements are sorted by the total number of paintings listed on the corresponding wiki pages.)

Quantitative histograms about the distribution of the aspect ratios for each movement

Let’s use Wikipedia again to look at the distribution of aspect ratios of some famous painters’ works.

Using Wikipedia to look at the distribution of aspect ratios

Although the total number of paintings per histogram is now much smaller, the bimodal (ignoring the square case) distributions are again visible. And again we see clear maxima for tall paintings at aspect ratios of about 1.3 and for wide paintings at about 0.76:

Histograms for famous painters and their paintings

We see again relatively strongly peaked distributions. Some painters, for example Cézanne, preferred standard canvas sizes. (For a study of canvas sizes used by Francis Bacon, see here.)

Let's also have a look at a more modern painter: Thomas Kinkade, the “painter of light.” Modern paintings use standardized materials and come in a set of sizes and aspect ratios that result much more from the standardization of canvases and paper than from aesthetics. So this time we do not analyze the textual image descriptions, but rather the images themselves, extracting the pixel widths and heights. Even for thumbnails, this yields aspect ratios accurate to about a percent:

Analyzing images to extract pixel widths and heights

In addition to our typical maximum around 1.3, we see a very pronounced maximum around 3/2—very probably a standardization artifact:

Histogram for Thomas Kinkade paintings

Analyzing five old German museum catalogs

The above histograms indicate at least two maxima for tall paintings, as well as two maxima for wide paintings, with the larger peak very near to the square root of the golden ratio. As we do not know the exact selection criterion for artwork included in the “Artwork” domain of Entity, we should test our conjecture on some independent collections of paintings.

An easily accessible source for widths and heights of paintings is museum catalogs. Various older catalogs, similar to the ones used by Fechner, are available in scanned and OCR'd form; we analyze five examples below.

It is straightforward to import the OCR text versions of the catalogs directly. While the format for giving heights and widths varies from catalog to catalog, within a single catalog the description formatting is quite uniform. As a result, specifying the string patterns that extract the heights and widths is pretty straightforward after looking at a few example painting descriptions in each catalog:

Specifying the string patterns that allow you to extract heights and widths

The catalog from the Kaiser-Friedrich Museum (today the Bode Museum):

Catalog from the Kaiser-Friedrich Museum

The catalog from the Pinakothek München (today the Alte Pinakothek):

Pinakothek München catalog

The catalog from the Museum der bildenden Künste zu Stuttgart (today the Staatsgalerie Stuttgart):

Museum der bildenden Künste zu Stuttgart catalog

The catalog from the Gemäldegalerie Dresden (today the Gemäldegalerie Alte Meister Dresden):

Gemäldegalerie Dresden catalog

The catalog from the Gemäldegalerie zu Cassel (today the Neue Galerie Kassel):

Gemäldegalerie zu Cassel catalog

Qualitatively, the results for the aspect ratios are very similar for the five museums:

Results for the aspect ratios of the five museums

We join the data of the five catalogs and add grid lines at the six constants defined above:

Joined data of the five catalogs with grid lines at the six constants defined above

Again, we clearly see two global maxima in the aspect ratio distribution. For tall paintings we obtain a relatively flat maximum, without clearly resolved local minima.

(The archive.org website has various even older painting catalogs, e.g. of the Schloss Schleissheim, the catalog of the collection of Berthold Zacharias, the collection of the National Gallery of Bavaria, and more. The aspect ratio distribution of the paintings of these catalogs is very similar to the five we analyze here.)

The Kress collection: four large PDF files

A famous painting collection is the Kress Collection. The individual paintings are distributed across many museums in the US. But fortunately for our analysis, the details of the paintings in the collection are available in four detailed catalogs, published as PDF documents totaling 900 pages of detailed descriptions. (Much of the data analyzed in this blog refers nearly exclusively to Western art. For measurable aesthetic considerations of Eastern art, see, for instance, the recent paper by Zheng, Weidong, and Xuchen.)

After importing the PDF files as text and extracting the aspect ratios, we have about 700 data points. (From now on, we will not give all the code used to import the data from the various sites; the download times are sometimes too long for the analysis to be quickly repeated.)

Importing PDF files as text and extracting the aspect ratios

This time, we also see local maxima near √2 as well as near the golden ratio.

Current gallery collections: Metropolitan, Art Institute of Chicago, Hermitage, National Gallery, Rijks, and Tate

To confirm the existence of well-defined maxima in the aspect ratio distributions and their locations, let us now look at the distributions for selected famous art museums worldwide.

The Metropolitan Museum of Art has a fantastic online catalog. Searching for paintings of the type “oil on canvas,” we can extract their aspect ratios.

This time, the global maximum seems to be a bit smaller than 1.27:

Oil on canvas paintings aspect ratios

The Art Institute of Chicago has a handy search that allows you to find paintings by period—for instance, paintings made between 1600 and 1800. Accumulating all the data gives about 1,200 data points, and the global maximum seems to be very near the square root of the golden ratio:

Paintings made between 1600 and 1800

The State Hermitage Museum has an easy-to-analyze website with information about more than 3,400 paintings from its collection. Analyzing the aspect ratios again shows two distinct maxima for tall images:

State Hermitage Museum collection

As a fourth collection, we analyze the paintings from the National Gallery. The distribution is visibly different from previous graphics:

National Gallery collection

The relatively unusual distribution goes together with the following age distribution: we see many more 500-year-old paintings than in other collections:

500-year-old paintings compared to other collections

The Rijksmuseum in Amsterdam holds another extensive collection of old paintings. Here is the aspect ratio distribution of 4,600 paintings from the collection:

Aspect ratio distribution for the Rijksmuseum collection

As a sixth example of a current collection, we have a look at the paintings of the Tate collection. Many of the 8,000+ paintings from the Tate collection are relatively new. Here is a breakdown of their creation years:

Tate collection paintings

The aspect ratio distribution, when overlaid with our constants from above, shows a good (but not perfect) match:

Aspect ratio distribution overlaid with constants from above

But with an overlay of the rationals 6/5, 5/4, 9/7, 4/3, and 3/2, we see a good approximation of the local maxima for the tall paintings. (We use a slightly smaller bin size for better resolution in the following graphic.)

Overlay of the rationals

Using the better-resolving Nearest-based counts of paintings within a small range shows that the maxima of the wide as well as the tall paintings occur at the rationals 6/5, 5/4, 9/7, 4/3, 3/2, and their inverses. (We use an aspect ratio window of size 0.01.)

Using Nearest based count of paintings

There is little dependency of the peak positions on the window size used in Nearest:

Plot with gridlines at rational numbers

Note that we showed grid lines at rational numbers in the above plot. Within 1% of 9/7, we find the square root of the golden ratio and fractions such as 14/11. So the question of which of these numbers “is” the “real” position of the maxima cannot be answered with the precision and amount of data available:

Find the square root of the golden ratio and fractions such as 14/11

One thing about the Tate collection is unique and especially relevant for this project. Here are two examples of its data:

Two examples of data from the Tate collection

Note the very precise measurements of the painting dimensions, given down to millimeters. This means this dataset's detailed aspect ratio distribution curve has a lot of credibility with respect to the exact positions of the curve maxima.

An aspect ratio exception: the National Portrait Gallery collection

The National Portrait Gallery has tens of thousands of portrait paintings.

The individual web pages are easily imported and dimensions are extracted:

Importing web pages from the National Portrait Gallery

Not unexpectedly, portraits have on average a much more uniform aspect ratio than landscapes, hunting scenes, war scenes, and other types of paintings. This time, we have a much more unimodal distribution. The following histogram uses about 45,000 aspect ratios:

Histogram using roughly 45 thousand aspect ratios

Zooming into the region of the maximum shows that a large fraction of portrait paintings have an aspect ratio of about 6/5. A secondary maximum occurs at 5/4 and a third one at 4/3:

Zooming into the region of the maximum

While the golden ratio seems to be relevant for the center part of the human face (see e.g. here, here, and here), most portraits show the whole head. With an average height/width ratio of the human face (excluding ears and hair) of 1.48, the observed maximum at 1.2 seems not unexpected. For a more detailed investigation of faces in paintings, see de la Rosa and Suárez.

The Web Gallery of Art: a convenient database ready to use

So far, the datasets analyzed have not allowed us to uniquely resolve the positions of the maxima. There are two reasons: the datasets do not contain enough paintings, and the measurements of the paintings are often not precise enough. So let's take a larger collection. The Web Gallery of Art, a Hungarian website, offers a downloadable tabular dataset of paintings as a CSV file.

The file uses a semicolon as the separator, so we extract the columns manually rather than using Import:

Extracting the columns manually
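A minimal sketch of the manual parsing; the file name is a placeholder:

lines = Import["catalog.csv", {"Text", "Lines"}];
dataWGA = StringSplit[#, ";"] & /@ lines;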

The following data is available:

Available data

And here is how a typical entry looks. The dimensions are in the form height x width:

Typical entry

The majority of listings of artworks are, fortunately, paintings:

Majority of listings are paintings

Extracting the paintings with dimension data (not all paintings have dimension information), we have 18.6k data points:

Extracting the paintings with dimension data

Plotting all occurring widths and lengths that are present in the data, we obtain the following graphic:

Plotting all occurring widths and lengths that are present in the data

Averaging over a length scale of one centimeter, we obtain the following histogram of all widths and lengths. One notes the many pronounced peaks at discrete lengths:

Histogram of all widths and lengths

A plot of the actual widths and heights of the paintings shows that many paintings are less than 140 cm in height and/or width:

Plot of actual widths and heights of the paintings

A contour plot of the smoothed version of the 2D density of width-height distributions shows the two “mountain ridges” of wide and tall paintings:

Contour plot of the smoothed version of the 2D density of width-height distributions

Looking at the explicit numerical values of the most common lengths shows multiples of 5 cm and 10 cm, but also many values that do not seem to arise from rounding of measured lengths:

Explicit numerical values of the common-length values

The next graphic shows the most common length and width values cumulatively over time:

Most common length and width values cumulatively over time

Plotting the widths and heights sorted by the century shows that many of the very tall spikes come from the nineteenth century. (Note the much smaller vertical scale for paintings from the twentieth century.)

Plotting widths and heights sorted by century

For later comparison, we fit the distribution of the widths of the paintings, smoothing with a bandwidth of about 5 cm to remove most of the local spikes:

Distribution of the width of the paintings

We show a distribution of the ages of the paintings from this dataset:

Distribution of the ages of the paintings

We analyze this dataset by plotting all concrete occurring aspect ratios together with their multiplicities:

Plotting all concrete occurring aspect ratios together with their multiplicities

To better resolve the multiplicities of aspect ratios that are nearly identical, we plot a histogram with a bin width of 0.02:

Histogram with a bin width of 0.02

Let's approximate each aspect ratio by a rational number such that the error is less than 1% (one way to do this is sketched below). What is the distribution of the denominators of the fractions approximating the aspect ratios? The following log-plot shows the distribution. It is interesting to note the relatively large fraction of paintings whose ratios max(width, height)/min(width, height) and min(width, height)/max(width, height) are approximated by fractions with denominators 3, 4, and 7, and the relative absence of denominators 6 and 18:

Distribution in a log-plot
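The rational approximation within 1% can be done with Rationalize, using a relative tolerance:

rationalApprox[r_] := Rationalize[r, 0.01 r]
Denominator[rationalApprox[1.255]]
(* 4, since 1.255 is approximated by 5/4 *)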

For comparison, here are the corresponding plots for 20k uniformly (in [0,2]) distributed numbers:

Corresponding plots for 20k uniformly distributed numbers

Here are the cumulative distributions of the paintings with selected aspect ratios:

Cumulative distributions of the paintings with selected aspect ratios

If we normalize the counts to the total number of paintings, we still see the 5/4 aspect ratio increasing over time, but most of the other aspect ratios do not change significantly:

Normalize the counts to the total number of paintings

If we do not take the measurement values at face value but assume that they are precise only up to ±1%, we obtain quite a different picture. The following graphic shows the distribution of the paintings within a given aspect ratio interval around a given center value. Around 1500, all common aspect ratios were approximately equally popular. We see that the aspect ratios 5/4, 4/3, and 9/7 became much more common around 1600. And aspect ratios approximately equal to the golden ratio have become less popular since the thirteenth century. (This graphic is not sensitive to the ±1% aspect ratio width; anything from ±0.5% to ±5% gives quite similar results.)

Distribution of the paintings of a given aspect ratio interval with a given center value

So what about the denominators of the most common aspect ratios? We form all fractions with maximal denominator 16 and map each aspect ratio to the nearest of these fractions. Because of the non-uniform gaps between the selected rationals, we normalize the counts by the distances to the nearest smaller and larger rational aspect ratios. This gives a view of the occurring aspect ratios that is complementary to the histogram plot: the histogram uses equal bins, while the following plot uses non-uniform bins, in which adjacent minima and maxima of the histogram can cancel each other out. Again, the 5/4 and 4/5 aspect ratios are the global winners:

View of the occurring aspect ratios that is complementary to the histogram plot

We again use the Nearest function approach to plot a detailed map of the aspect ratio distributions. The following function windowedMaximaPlot plots the distribution either as a 3D plot or as a contour plot for paintings from a sliding time window:

using the Nearest function approach to plot a detailed map of the aspect ratio distributions

Here are the 3D plot and the contour plot:

3D plot and contour plot

The last two images show a few noteworthy features:

  • Over the last 400 years, tall pictures often have an aspect ratio of approximately 1.2
  • The most common aspect ratio of wide pictures changes around 1750, and a relatively wide distribution shows a few pronounced maxima, e.g. at 0.8
  • Square images become more popular around 1800

Labreuche discusses the process of the standardization of canvases. In France, a first standardization happened in the seventeenth century and a second in the nineteenth century. (For a recent, more mathematical discussion, see Dinh Dang.) Simon discusses the canvas standardization in Britain.

Here are the figure, marine, and landscape sizes of the standardized canvases from nineteenth-century France. The data is in the form {width, {figure height, landscape height, marine height}}:

Figure, marine, and landscape sizes of the standardized canvases from nineteenth-century France

The aspect ratios (max(height/width, width/height)) of all canvases have the following distribution:

Aspect ratios for all canvases has the following distribution

It is not easy to find large datasets of exact measurements of old paintings. On the other hand, various websites have tens of thousands of images of paintings in JPG and PNG formats. Could one not just determine the aspect ratio of a painting from the pixel width and height of its image? Above, we saw that the majority of paintings are measured with a precision of about one centimeter. With an average painting height and width of about one meter, the resulting uncertainty is on the order of 2%. Even thumbnail images are about 100 pixels, and many images of paintings are a few hundred pixels wide (and tall), so from the literal pixel dimensions one would again expect results correct to about 1…2%. But there is no guarantee that the images were not cropped, that the frame is consistently included or excluded, or that no boundary pixels were added. The Web Gallery of Art has, in addition to the actual measurements of the paintings, images of the paintings. After downloading the images and calculating their pixel aspect ratios, we can compare with the aspect ratios calculated from the actual heights and widths of the paintings. Here is the resulting distribution of the ratio of the two aspect ratios, together with a fit by CauchyDistribution[1.003, 0.019]. The mean of the ratios is 1.036 and the standard deviation is 0.38. These numbers show that the error from using images of the paintings to determine the aspect ratios is far too large to properly resolve the observed fine-scale structure of the aspect ratios:

Aspect ratio of image compared to aspect ratio of painting

In the data dataWGA, we also have information about the painters. Does the mean aspect ratio of the paintings change over the lifetimes of the painters? Here is the distribution of when during the painters’ lives the paintings were made:

Distribution of when during the painters' lives the paintings were made

Interestingly, statistically we can see a pattern in the mean aspect ratio over the lifetime of a painter. The first paintings statistically have a more extreme aspect ratio. At the end of the first third of the lifetime, the aspect ratio is minimal, and at the end of the second third the aspect ratio is maximal (left graphic). The cumulative average aspect ratio shows a minimum at about 40% of the lifespan of the painters (right graphic). Both graphics show max(height/width, width/height) divided by the mean of all aspect ratios. (A general discussion of creativity vs. age can be found here.)

Aspect ratio during the lifetime of the painters
Graphics showing max divided by the mean of all aspect ratios

In case the reader wants to visit some of the paintings in person and perform some more precise width and height measurements, let us calculate one more statistic using the Web Gallery of Art dataset. Let's also calculate and visualize where the paintings are in the world. We take the (current) city locations of the paintings that have width and height parameters, aggregate them by city, and display the median of max(height/width, width/height) as a function of the city. Not unexpectedly, most larger collections don't deviate much from the median of 1.333. We use Interpreter to find the cities and derive their locations:

Using Interpreter to find the cities and locations of paintings
Using Interpreter to find the cities and locations of paintings
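Schematically, with cityMedians a placeholder list of {cityName, medianAspectRatio} pairs, the lookup and display could work like this:

(* resolve city names to entities once and cache them *)
cityEntity[name_String] := cityEntity[name] = Interpreter["City"][name];
GeoGraphics[{Red, PointSize[Medium],
  Tooltip[Point[GeoPosition[cityEntity[#1]]], #2] & @@@ cityMedians},
 GeoRange -> "World"]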

Interlude II: The importance of measuring precisely

Now let us look at the detailed width and height values. If we plot the counts of the fractional centimeters, we clearly see that the vast majority of paintings are measured with a precision of no better than 1 cm. Only about 10% of all paintings have dimensions specified to the millimeter (and some of the ones specified to 5 millimeters are probably also rounded):

Plot with detailed width and height values

As the majority of the paintings were made before the centimeter was even in use as a unit of measurement, the popular painting sizes are probably not lengths that are integer multiples of a centimeter. This means that the measured widths and heights are not the precise widths and heights of the actual paintings. The nearly homogeneous distribution of the millimeter values among the paintings that were measured to the millimeter is comforting.

In many of the datasets analyzed, the widths and heights of the paintings are given as integers when measured in centimeters. (A notable exception was the Tate dataset, in which virtually every painting dimension is given to millimeter accuracy.) As most paintings are in the order of 100 cm width or height (give or take a factor of 2), for an accurate determination of the aspect ratio the rounding to integer-centimeter length will matter. How many of the observed maxima at various fractions with small denominators can be traced back to imprecise width and height values?

Let’s model this effect now. The function aspectRatioModelValue models the aspect ratio of a painting. We assume a stable distribution for the width of the paintings and assume the height to be normally distributed with a mean of 1.3xwidth. And we model only tall paintings by restricting the height to be at least as large as the width:

Using aspectRatioModelValue to model the aspect ratio of a painting
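Here is a minimal sketch of such a model function. The stable-distribution parameters are illustrative assumptions (not the post's actual values); widths are in meters, and the argument prec controls the rounding of the simulated measurements:

aspectRatioModelValue[prec_] := Module[{w = -1., h = -2.},
  (* width from a stable distribution, truncated to widths above 10 cm *)
  While[w <= 0.1, w = RandomVariate[StableDistribution[1.7, 0, 1., 0.3]]];
  (* height normally distributed around 1.3 width; keep only tall paintings *)
  While[h < w, h = RandomVariate[NormalDistribution[1.3 w, 0.25 w]]];
  Round[h, prec]/Round[w, prec]]

Histogram[Table[aspectRatioModelValue[0.001], {10^5}], {0.005}] (* millimeter rounding *)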

Now we “cut canvases” for tall paintings and look at the distribution of the aspect ratios. We do this twice, each time for 100,000 canvases. The top graphic shows the resulting distribution in the case of millimeter-resolution of the canvas measurements. The bottom graphic assumes that in 65% of all cases we measure up to a centimeter precision, in 25% up to half a centimeter precision, and in the resulting 10% up to millimeter precision. For each of the three computational experiments, we overlay the resulting distribution histograms:

Overlay the resulting distribution histograms from the three computational experiments

Comparing the upper with the lower graphic shows that the aspect ratio distribution is quite smooth if all measurements are precise to the millimeter. The lower distribution shows that painting dimension measurements up to the centimeter do indeed introduce artifacts into the resulting histograms.

Looking at the pretty smooth histogram for the millimeter-precise model and the above aspect ratio histogram for the Tate collection shows that the more common occurrence of aspect ratios that are equal to simple fractions is a real effect. Yet at the same time, as the above experiment with the weights {0.65, 0.25, 0.10} shows, the mostly centimeter-precise widths and heights do artificially amplify some simple fractions, such as 6/5, 5/4, and 3/2.

An even simpler method to demonstrate the influence of rounding errors in the width/height measurements in the Web Gallery of Art dataset is to modify the width and height values. To each integer centimeter measurement, we add between -5 millimeters and 5 millimeters to mimic a more precise measurement. We again use the ratio of the longest side to the shortest side of the painting:

Influence of rounding errors
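This jittering can be sketched as follows; wgaDimensions is a placeholder for the list of {width, height} pairs in centimeters, and only integer entries receive the ±5 mm noise:

jitter[x_] := If[IntegerQ[x], x + RandomReal[{-0.5, 0.5}], x] (* ±5 mm, in cm units *)
modifiedRatios = Max[#]/Min[#] & /@ Map[jitter, wgaDimensions, {2}];
Histogram[modifiedRatios, {1., 2., 0.005}]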

We overlay the original aspect ratio distribution with the one obtained from the modified width and height values. We see that the maxima at some rational ratios do get suppressed, but the global maximum keeps its position around 5/4, the second maximum around 4/3 stays, and so does the smaller, first maximum around 6/5. At the same time, we see the peaks at 3/2 and 2 get smoothed out:

Overlay original aspect ratio distribution with the one obtained from the modified width and height values

We now do the reverse with the Tate dataset: we round each width and height measurement to the nearest centimeter. Again, we plot the original aspect ratio distribution together with the modified one:

Using the Tate dataset

While the height of the local peaks changes, the peaks are still present, even quite pronounced.

WikiArt: another large web resource

Let us have a look at yet another large web resource, namely WikiArt. For computational purposes, it is a conveniently structured website. We have a list of more than nine hundred artists, with hyperlinks to pages of the artists’ works. Each individual artwork (painting) in turn has a page that has conveniently structured information. For example, here is the factual information about the Mona Lisa:

Factual information about the Mona Lisa
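Extracting such fields can be sketched as below; note that WikiArt's page layout may well have changed since this post was written, so the string patterns here are assumptions:

page = Import["http://www.wikiart.org/en/leonardo-da-vinci/mona-lisa", "Plaintext"];
StringCases[page,
 label : ("Style:" | "Genre:" | "Dimensions:") ~~ Shortest[value__] ~~ "\n" :>
  {label, StringTrim[value]}]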

We note that the above data contains style and genre. This suggests using the WikiArt dataset to look for a possible dependence of the aspect ratio on genre especially (we already quickly looked at the movements above).

There are about seven thousand paintings with width-height information in the dataset. For brevity, we encoded all data into a grayscale image:

Encoded data into a grayscale image

The paintings with dimension information have the following age distribution. We see a dominance of paintings from the eighteenth and nineteenth centuries:

Age distribution of paintings with dimension information

Based on the results obtained earlier, we expect this dataset that is mostly dominated by paintings from the last 150 years to show pronounced peaks in the aspect ratio distribution at rationals. The following distribution with grid lines at 6/5, 5/4, 4/3, and 3/2 confirms this conjecture:

Paintings from the last 150 years

The genre obviously influences whether paintings are predominantly wide, square, or tall. Here are the wide vs. square vs. tall distributions for some of the popular genres:

Wide vs. square vs. tall distributions for some of the popular genres

Now let us have a look at the distribution of the aspect ratio as a function of the genre:

Distribution of the aspect ratio as a function of the genre

Hijacking the function TimelinePlot, we show the range of the second and third quartiles of the aspect ratios:

TimelinePlot to show the range of the second and third quartiles of aspect ratios
TimelinePlot to show the range of the second and third quartiles of aspect ratios

Tall landscape paintings are much scarcer than wide landscape paintings. But even if we define the aspect ratio as longest side/shortest side, we still see a clear dependence of the aspect ratio on the genre.

The genre frequently also influences the actual painting size. Here are the second and third quartiles in aspect ratio and area for the various genres (mouse over the opaque rectangles in the notebook to see the genre):

Second and third quartiles in aspect ratio and area for various genres

If we slice up each genre by the style, we get a more fine-grained resolution of the distribution of aspect ratios. We find the top genres and styles, requiring each relevant genre and style to be represented with at least 50 paintings:

Top genres and styles represented with at least 50 paintings

The Neoclassical nude paintings stand out with the largest median aspect ratio of about 1.85:

Neoclassical nude paintings median aspect ratio

And here is a more detailed graphic showing the median aspect ratios for all the style-genre pairs with at least five paintings. (Mouse over the vertical columns to see the genre and the aspect ratios.)

Detailed graphic showing the median aspect ratios for all the style-genre pairs with at least five paintings

France national museums’ collections

As we saw above, painting collections with a few thousand paintings allow us to resolve multiple maxima in the aspect ratio distribution in the range 1.24…1.33. Now let’s look at a second large dataset.

The Joconde catalog of the French national museums covers more than half a million artifacts. A search for paintings gives about sixty-seven thousand results. Not all of them are paintings made for hanging on a wall; the collection also includes paintings on porcelain figures and other media. But one finds about thirty-one thousand paintings with explicit dimensions. As the information about the paintings comes from multiple museums, the dimensions occur in a variety of formats. The extraction of the dimensions is a bit time-consuming.

Extraction of dimensions

Interestingly, this time yet another maximum occurs at about 1.23.

Mapping the distribution for wide images into the one for tall images by exchanging height and width, we see that the two maxima match up very well. This makes the ratio 5/4 (or 4/5) the most common ratio:

Mapping the distribution for wide images into the one for tall images by exchanging height and width

The collection contains about 11% more tall paintings than wide paintings.

Paintings in Italian churches: tall is all

A very large database of paintings in Italy's Catholic churches can be found here. Searching again for oil paintings gives 130k results, about 124,000 of which have width and height measurements.

The collection contains many relatively new paintings (sixteenth century ≈4%, seventeenth century ≈23%, eighteenth century ≈36%, nineteenth century ≈24%, twentieth century ≈13%).

Here is the resulting distribution. We show grid lines at 1, 6/5, 5/4, 4/3, 7/5, 3/2, 5/3, and 2. The grid lines at these rational numbers agree remarkably well with the positions of the maxima:

Oil paintings from the Catholic church database

The graphic immediately shows that inside churches we have a larger fraction of tall paintings than wide paintings. And the maxima all visibly occur at rational values with small denominators. Part of the pronounced rationality stems from the fact that only about 8% of the paintings have dimension measurements that are accurate to below one centimeter.

The Smithsonian’s collection

The Smithsonian American Art Museum has a search site allowing one to inspect many paintings. About 286,000 paintings have dimension information. Here is the resulting distribution of aspect ratios:

Aspect ratios from Smithsonian American Art Museum search

As already noticed, the pronounced peaks at rational aspect ratios correlate with paintings from the last 200 years. A plot of the age distribution of the paintings from this collection confirms this:

Plot of age distribution for collection of paintings in the last 200 years

A large collection of paintings in the UK

A third large dataset is the Your Paintings website from the UK. It features 200k+ paintings, 56,000 of which have width and height measurements.

In contrast to earlier datasets, many of the paintings are from within the last 150 years. So, will this larger fraction of newer paintings result in a different distribution of aspect ratios?

Dataset for Your Paintings website

We again see clearly pronounced maxima. The five most pronounced maxima for tall paintings are at rational numbers with small denominators. We show grid lines at 6/5, 5/4, 9/7, 4/3, and 3/2 and their inverses. For wide paintings, we see the same (meaning inverted) maxima positions as for tall paintings:

Maxima for tall and wide paintings

Fortunately, 52% of all measurements are precise to below a centimeter. This means that the visible maxima are not just artifacts of rounding: paintings really do more often have aspect ratios that are approximately rational numbers with small denominators.

And here again is a higher-resolution plot of the number of paintings with a maximal distance of 0.01 from a given aspect ratio:

Higher-resolution plot of the number of paintings with a maximal distance of 0.01 from a given aspect ratio

The current art market: more rational than ever

The last section of paintings from the UK from the last 150 years showed a clear tendency toward aspect ratios that are rational numbers with small denominators. This raises the question: what aspect ratios are “in” today?

There is no museum that has thousands of paintings from recent years (at least not one that I could find). So let’s look at some dealers of recently made paintings (in the last few decades). After some searching, one is led to Saatchi Art. Searching for oil paintings yields 96,000 paintings. So, what’s their aspect ratio? Here is a plot of the PDF of the aspect ratios. The grid lines are at 1, 6/5, 5/4, 4/3, 3/2, 2, and the corresponding inverses. Note that this time we use a logarithmic vertical scale:

Paintings from dealers of recently made paintings

Indeed, all trends that were already visible in the Your Paintings dataset are even more pronounced:

  • An even larger fraction of exactly square paintings
  • Pronounced maxima at aspect ratios that are rational numbers with small denominators, for wide as well as for tall paintings
  • A nearly equal amount of wide vs. tall paintings

The maxima at certain aspect ratios are reflected in the distribution of the areas of the paintings: a few tens of pronounced painting sizes are observed:

Maxima at certain aspect ratios

We can assume that these sizes come from industrially made canvases. To test this assumption, we analyze the canvases sold commercially, e.g. from the art supply store Dick Blick:

Analyzing canvases sold commercially from Dick Blick

Plotting the distribution of the roughly 1,600 canvases found shows an area distribution that shares key features with the above distribution:

Plotting the distribution of about 1600 canvases
Plotting the distribution of about 1600 canvases

Plotting again the aspectRatioCDFPlot used above, the most common aspect ratios are easily visible as the positions of the vertical segments:

aspectRatioCDFPlot

While one can’t buy the paintings from museums, one can buy the paintings from Saatchi. So for this dataset we can have a look at a possible relation between the price and the aspect ratio. (For various statistics on painting prices and the relation to qualitative painting properties, see Reneboog and Van Houtte, Higgs and Forster, and Bayer and Page.)

The data shows no obvious dependence of the painting price on the aspect ratio:

Data showing no obvious dependence of price on aspect ratio

At the same time, a weak correlation between the area and the price can be observed, with an average law of the form price ∼ area^(2/3). (For a detailed study of the price-area relation for Korean paintings, see Nahm.)

Weak correlation of the area and the price

Sold in the past: mostly made recently, and having a long tail

Earlier we looked at the aspect ratios of paintings from various museum collections. In the last section we looked at the aspect ratios of paintings that are waiting to be sold. So, what about the aspect ratios of paintings that have been sold recently? The Artnet website is a fantastic source of information about paintings sold at auctions. The site features about 590,000 paintings with dimension information.

While the paintings auctioned do include medieval paintings, the majority of the paintings listed were done recently. Here is the cumulative distribution of the paintings over the last millennium. Note the log-log nature of the plot. We see a Pareto principle-like distribution, with 90% of all auction-sold paintings made after 1855:

Cumulative distribution of paintings of the last millennium

Based on our earlier analysis, we expect a dataset with such a large number of relatively new paintings to have strongly pronounced peaks at small rationals, as well as many square paintings. And this is indeed the case, as the following plot shows. We show grid lines at 5/6, 4/5, 3/4, 2/3, 5/7, and 7/10, and their inverses:

Dataset with large amounts of new paintings

Even on a logarithmic scale, the peaks at rational values are still clearly visible:

Peaks as rationals on a logarithmic scale

The relative number of paintings with aspect ratios near certain simple fractions has been increasing over time. For aspect ratios from the interval [1.1, 1.4], we plot the absolute value of the difference between the empirical CDF and a smoothed kernel CDF (smoothed with width 0.01). The relative increase in the size of the maxima at 6/5, 5/4, and 4/3 is clearly visible:

Number of paintings with aspect ratios near certain simple fractions has increased over time

The majority of paintings in this dataset are oil paintings, and the above histograms are dominated by oil paintings. But it is interesting to compare the aspect ratio distribution of oil paintings, watercolor paintings, and acrylic paintings. With acrylic paintings being made only since the 1970s, the peaks at small rationals are even more pronounced than in the overall distribution. The distribution of the aspect ratios of ink drawings has a distinctly different shape, potentially arising from standard paper formats:

Comparing various types of paintings

The large number of paintings makes it much more probable to find paintings with extreme aspect ratios. Even aspect ratios less than 1/10 and over 10 occur. Examples of very wide paintings are the The Hussainbad Imambara Complex, the Makimono scroll of river scenes, or the Sennenstreifen. Examples of very tall paintings are La salive de dieu, Pilaster, and Exotic rain.

If we look at the cumulative fraction of all paintings that are either wide, tall, or square, we see that since 1825 wide paintings have become more popular. And we also see the dramatic rise of square paintings after 1950:

Popularity of wide paintings since 1825

The large number of paintings in this catalog, together with the occurrence of extreme aspect ratios in this dataset, suggests we should redo an overall fit of the distribution for all aspect ratios max(height/width, width/height). Using the (much smaller) data from the “Artwork” entity domain above in Interlude I, we conjectured that the distribution of aspect ratios is well approximated by a stable distribution. Fitting again a stable distribution results in a good overall fit. The blue curve representing the empirical distribution was obtained with a smoothing window width of 0.1:

Data from Artwork entity

The website of the famous auction house Sotheby’s features a searchable database of more than 100,000 paintings sold over the last fifteen years. While one does not expect the hammer prices to depend on the aspect ratios, let us check this. Here are the hammer prices for the sold paintings as a function of the aspect ratio:

Hammer prices for sold paintings as a function of the aspect ratio

Similarly, no direct relation exists between the hammer price and the areas of the paintings:

Correlation between hammer price and areas of paintings

The distribution of the hammer prices is interesting on its own, but discussing it in more detail would lead us astray, and we will continue focusing on aspect ratios:

Hammer prices and aspect ratios

Going East: all ratios will be different

While we have so far looked at many painting collections, virtually all paintings analyzed come from the Western world. What about the East? It is much harder to find a database of Eastern paintings. The most extensive one I was able to locate was the catalog of Chinese paintings at the University of Tokyo.

The web pages are nicely structured and we can easily import them. For example:

Importing webpages

Here is a typical data entry that includes the dimensions:

Data entry with dimensions

The database contains about 10,500 dimension values. Here is a plot of the aspect ratios:

Plot of aspect ratios

The distribution is markedly different from Western paintings. The most pronounced maxima are now at 1/3 and 2. For a more detailed study of Chinese paintings, see Zheng, Weidong, and Xuchen. (Another, smaller online collection of Chinese paintings can be found here.)

Aspect ratios of packages, cars, labels, logos, emblems, paper, bank notes, stamps, and movies

If artists prefer certain aspect ratios for their paintings because they are more “beautiful,” then maybe one finds some similar patterns in many objects of the modern world.

Supermarket products

Let’s start with supermarket products. After all, they should appeal to potential customers. The itemMaster site has a listing of tens of thousands of products (registration is required).

Here again is the histogram of the height/width ratios. Many packages of products are square (many more than the number of square paintings we saw). And by far the most common height/width ratios are very near to 3/2:

Histogram of height/width ratios

(See Raghubir and Greenleaf, Salahshoor and Mojarrad, Ordabayeva and Chandon, and Koh for some discussions about optimal package shapes from an aesthetic, not production, point of view.)

Wine labels

After this quick look at the sizes of products, the next natural objects to look at are labels of products. It is difficult to find explicit dimensions of such labels, but images are relatively easy to locate. We found in the above discussion of the Web Gallery of Art that analyzing the images will introduce a certain error. This means we will not be able to make detailed statements about the most common aspect ratios of these labels, but analyzing the images will allow us to get an overall impression of the distribution. We will quickly look at red wine labels and at labels of German beers. The website wine.com has about 5,000 red wine labels:

Red wine labels from wine.com

Interestingly, the distribution of the wine label aspect ratios is not so different from the distribution of the paintings. We have wide, tall, and square labels:

Distribution of wine label aspect ratios

German beer labels

The Catawiki website has about 2,700 labels of German beers. It again takes just a few minutes to get the widths and heights of all the beer labels:

German beer labels from Catawiki website

The distribution of the beer label aspect ratios is markedly different from the wine labels. Most beer labels are nearly square:

Distribution of beer label aspect ratios

Food and drink logos

We slightly generalized the last two datasets to food and drink. The website brandsoftheworld.com has about 9k food-and-drink-related logos. Here is their aspect ratio distribution. We clearly see that most logos are either wide or square. Tall logos exist, but they are far rarer than wide ones:

Aspect ratios for logos from brandsoftheworld.com

Banknotes

What about the paper we use to pay for the products that we buy, banknotes? As banknotes are available within the Entity framework, we can quickly analyze the aspect ratios of about 800 banknotes currently in use around the world:

Analyzing the aspect ratios of 800 bank notes

Virtually all modern banknotes are wider than they are high, so we see only aspect ratios less than 1. And most banknotes are exactly twice as wide as they are tall:

Histogram of aspect ratios for bank notes

Car sizes

With enough banknotes, one can buy a nice car. So what are the height/length and height/width distributions for cars? Using about 3,600 car models from 2015, we get the following distribution:

Height/length and height/width distribution of 3600 car models from 2015

Here are some of the car models with small and large height/length aspect ratios:

Car models and aspect ratios

The strongly visible bimodality arises from the height distribution of cars. While the lengths and widths of cars are unimodal, the height shows two clear maxima. The cars with heights above 65 inches are mostly SUVs and crossovers. Also, very small cars of average height but well-below-average length contribute to the height/width peak near 1/3:

Height distribution of cars

Paper sheets

Banknotes are made from paper-like material. So, what are the aspect ratios of commonly used paper sheets? The Wikipedia page on paper sizes has 13 tables of common paper sizes. It is easy to import the tables and to extract the columns of the tables that have the widths and heights (in millimeters):

Aspect ratios of commonly used paper sheets
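The import step can be sketched like this; the string pattern assumes the “width × height” notation used in the Wikipedia tables:

text = Import["https://en.wikipedia.org/wiki/Paper_size", "Plaintext"];
pairs = StringCases[text,
   w : DigitCharacter .. ~~ (" × " | " x ") ~~ h : DigitCharacter .. :>
    ToExpression[{w, h}]];
ratios = Max[#]/Min[#] & /@ Select[pairs, Min[#] > 0 &];
Histogram[N[ratios], {1., 2., 0.01}]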

Here is the resulting distribution of aspect ratios. Not unexpectedly, we see a clear clustering of aspect ratios near 1.41, which is approximately the value of square root of 2, the ratio on which most ISO-standardized paper is based. And the single most common aspect ratio is 4/3:

Distribution of aspect ratios

Stamps

What are other painting-like (in a general sense) rectangular objects that come in a wide variety? Of course, stamps are a kind of mini-painting. The Colnect website has data on more than half a million stamps. If we restrict ourselves to French stamps, from 1849 to 2015 we have nearly 6,000 stamps to analyze. Reading in the data takes just a few minutes:

Aspect ratios for stamps

Here is the cumulative distribution of aspect ratios:

Cumulative distribution of aspect ratios for stamps

Finally, we have found a product whose most common aspect ratios are at least near the golden ratio. Here are the most commonly observed aspect ratios:

Most commonly observed aspect ratios

The five-year moving average of the aspect ratio (max(width, height)/min(width, height)) shows the changing style of French stamps over time. We also show the area of the stamps over time (in cm²). And quite obviously, stamps became larger over the years:

Changing style of French stamps over time

NCAA team logos

Many people like to watch sports, especially team sports. The team logos are often prominently displayed. Let us have a look at two sport domains: NCAA teams and German soccer clubs. The former logos can be found here, and the latter here.

Here is the height/width distribution of the NCAA teams. Interestingly, we see a maximum at around 0.8, similar to some painting distributions:

Height/width distribution of NCAA team logos

German soccer club emblems

And this is the height/width distribution of 1,348 German soccer club emblems. We see a very large maximum for square emblems and a local maximum for tall emblems with an aspect ratio of about 1.15:

Height/width distribution of German soccer club emblems

Movie formats

We will end our penultimate section on aspect ratios of rectangular objects with a quick look at the evolution of movie formats. The website filmportal lists 85,000 German movies made over the last 100 years. About 27,000 of these have aspect ratio and runtime information, totaling more than three years of movie runtime. The following graphic shows the staggered cumulative distribution of aspect ratios over time. It shows that about two thirds of all movies ever released have an aspect ratio of approximately 4/3. And only in the 1960s did the trend toward wider screen formats really take off:

Evolution of movie formats

We plot the time evolution of the yearly averages of aspect ratios of the movies of major US studios (Warner Bros., Paramount Pictures, Twentieth Century Fox, Universal Pictures, and Metro-Goldwyn-Mayer) made over the last 100 years. Until about 1955, an aspect ratio near 4/3 was dominant, and today the average aspect ratio is about 2.18:

Yearly averages of aspect ratios of major US movie studios

Postlude: So what is the “best” ratio?

To summarize: we analyzed the height-to-width ratios of many painting collections, totaling well over a million paintings and spanning the last millennium in time.

Using a combination of built-in and web data sources, certain qualitative features could be established:

  • The number of tall and wide paintings seems to be approximately equal in many collections.
  • Since the nineteenth century, the total number of wide paintings is larger than the total number of tall paintings.
  • The distribution of wide paintings can be accurately mapped into the distribution of tall paintings, meaning that the aspect ratio ar₁ is approximately as common as the aspect ratio 1/ar₁.
  • The aspect ratio distributions of many collections show, for both tall and wide paintings, at least two clearly visible maxima: one around 1.3 and one around 1.27 (and the reciprocal values for wide paintings).
  • Starting in the eighteenth century, aspect ratios that are rational numbers with small denominators become more and more popular; this trend is still ongoing—the timing coincides with the French standardization of canvas sizes.
  • Nineteenth- and twentieth-century paintings show pronounced maxima in their aspect ratio distributions at the aspect ratios 6/5, 5/4, 9/7, 4/3, and 3/2.
  • The overall distribution of the aspect ratios of large collections of paintings is well described by a Lévy alpha-stable distribution, meaning a distribution that has heavy tails.
  • The golden ratio is not an aspect ratio that occurs prominently in paintings (for its occurrence in architecture, see for instance Shekhawat, Huylebrouck and Labarque, Birkett and Jurgenson, and Foutakis).
  • The distribution of paintings is unique and quite distinct from the distribution of rectangular objects from the modern world (such as labels, stamps, logos, and so on).

The causes of the transition to aspect ratios with small denominators in the seventeenth century remain an open question. Was the transition initiated and fueled by aesthetic principles, or by more mundane industrial production and standardization of materials? We leave this question to art historians.

To more clearly resolve the question of whether the maxima correspond to certain well-known constants (square root of the golden ratio, plastic constant, 4/3, or 5/4), more accurate data for the dimensions of pre-eighteenth-century paintings are needed. Many catalogs give dimensions without discussing the precision of the measurement or whether the frame is included in the reported dimensions. The precision of the width and height measurements is often one centimeter. With typical painting dimensions on the order of 100 centimeters, rounding to full-centimeter measurements introduces a certain amount of artifacts into the distribution. On the other hand, using digital images to analyze aspect ratios is also not feasible: the errors due to cropping and perspective are far too large. We intentionally did not join the data from various collections. In addition to the issue of identifying duplicates, one would have to carefully analyze whether the measurements are with or without frame, as well as look in even more detail into the reliability and accuracy of the stated dimension measurements. The expertise of an art historian is needed to carry out such an agglomeration properly.

One larger collection that we did not analyze here, and that might be helpful in pinning down the precise value of the pre-1750 maxima of the aspect ratio distributions, is the 178,000 older paintings in an online catalog of 645 museums in Germany, Austria, and Switzerland published online by De Gruyter. At the time of writing this blog, I had not succeeded in getting permission to access the data of this catalog. (There are also various smaller databases of paintings, including lost ones, that could be analyzed, but they will probably give results similar to those of the catalogs shown above.)

Interestingly, recent studies show that not just humans but other mammals seem to prefer aspect ratios around 1.2 (see the recent research of Winne et al.).

Many more quantitative investigations can be carried out on the actual images of paintings—for example, analyzing the spectral power distribution of the spatial frequencies that are in the Fourier components of the colors and brightnesses, left-right lighting analysis, structure and composition (here, here, and here), psychological basis of color structures, and automatic classification. Time permitting, we will carry out such analyses in the future. A very nice analysis of many aspects of the 2,229 paintings at MoMA was recently carried out by Roder.

And, of course, more manmade objects could be analyzed to see if the golden ratio was used in their designs, for instance cars. Modern extensions of paintings, such as graffiti, could also be analyzed for their aspect ratios. And the actual content of paintings could be analyzed to look for the appearance or non-appearance of the golden ratio (here and here). We leave these subjects for the reader to explore.

Download this post as a Computable Document Format (CDF) file.

Dates Everywhere in Pi(e)! Some Statistical and Numerological Musings about the Occurrences of Dates in the Digits of Pi http://blog.wolfram.com/2015/06/23/dates-everywhere-in-pie-some-statistical-and-numerological-musings-about-the-occurrences-of-dates-in-the-digits-of-pi/ http://blog.wolfram.com/2015/06/23/dates-everywhere-in-pie-some-statistical-and-numerological-musings-about-the-occurrences-of-dates-in-the-digits-of-pi/#comments Tue, 23 Jun 2015 18:03:09 +0000 Michael Trott http://blog.internal.wolfram.com/?p=26537 In a recent blog post, Stephen Wolfram discussed the unique position of this year’s Pi Day of the Century and gave various examples of the occurrences of dates in the (decimal) digits of pi. In this post, I’ll look at the statistics of the distribution of all possible dates/birthdays from the last 100 years within the (first ten million decimal) digits of pi. We will find that 99.98% of all digits occur in a date, and that one finds millions of dates within the first ten million digits of pi.

Here I will concentrate on dates that can be described with a maximum of six digits. This means I’ll be able to uniquely encode all dates between Saturday, March 14, 2015, and Monday, March 15, 1915, a time range of 36,525 days.

We start with a graphical visualization of the topic at hand to set the mood.

Graphic visualization of pi

Get all dates for the last 100 years

This year’s Pi Day was, like every year, on March 14.

This year's pi day

Since the centurial Pi Day of the twentieth century, 36,525 days had passed.

Number of days between centurial pi days

We generate a list of all the 36,525 dates under consideration.

List of dates under consideration

For later use, I define a function dateNumber that for a given date returns the sequential number of the date, with the first date, Mar 15 1915, numbered 1.

Defining function dateNumber

I allow the months January to September to be written without a leading zero (9 instead of 09 for September), and similarly for days. So multiple digit sequences can represent the same date. The function makeDateTuples generates all tuples of single-digit integers that represent a date. With slightly different conventions and minimal changes to the code, one could always enforce the short form or always enforce leading zeros. With the optional inclusion of zeros for days and months, I get more possible matches and a richer result, so I will use this convention in the following. (And if you prefer a European day-month-year date format, some larger adjustments have to be made to the definition of makeDateTuples.)

Using makeDateTuples to generate tuples
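A compact sketch of what makeDateTuples could look like under these conventions (month-day-year order, two-digit year, optional leading zeros for single-digit months and days):

makeDateTuples[{y_, m_, d_}] := Module[{ms, ds, ys},
  ms = If[m <= 9, {{m}, {0, m}}, {IntegerDigits[m]}];
  ds = If[d <= 9, {{d}, {0, d}}, {IntegerDigits[d]}];
  ys = {IntegerDigits[Mod[y, 100], 10, 2]}; (* always two year digits *)
  Flatten[Outer[Join, ms, ds, ys, 1], 2]]

makeDateTuples[{1936, 1, 9}] (* four possible representations *)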

Some examples with four, two, and one representation:

Examples of tuples with four, two, and one representation

The next plot shows which days from the last year are representable with four, five, and six digits. The first nine days of the months January to September just need four or five digits to be represented, and the last days of October, November, and December need six.

Which days from last year are representable with four, five, and six digits

For a fast (constant time), repeated recognition of a tuple as a date, I define two functions dateQ and dateOf. dateOf gives a normalized form of a date digit sequence. We start with generating pairs of tuples and their date interpretations.

Generating pairs of tuples and their data interpretations

Here are some examples.

RandomSample of tuplesAndDates

Most (77,350) tuples can be uniquely interpreted as dates; some (2,700) have two possible date interpretations.

Tuples interpreted as dates

Here are some of the digit sequences with two date interpretations.

Digit sequences with two date interpretations

Here are the two date interpretations of the sequence {1,2,1,5,4}, namely Jan 21 1954 and Dec 1 1954, recovered by using the function datesOf.

Two date interpretations of the sequence 1,2,1,5,4

These are the counts for the four-, five-, and six-digit representations of dates.

Counts for the four-, five-, and six-digit representations of dates

And these are the counts for the number of definitions set up for the function datesOf.

Counts for the number of definitions set up for the function datesOf

Find all dates in the digits of pi

For all further calculations, I will use the first ten million decimal digits of pi (as we will see later, ten million digits are enough to find every date). We allow for an easy substitution of pi by another constant.

Allowing for an easy substitution of pi by another constant

Instead of using the full digit sequence as a string, I will use the digit sequence split into (overlapping) tuples. Then I can independently and quickly operate onto each tuple. And I index each tuple with the index representing the digit number. For example:

Using the digit sequence split into overlapping tuples
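Schematically, the indexed tuples can be built as follows (shown for 4-tuples; the 5- and 6-tuples work the same way, and computing ten million digits takes a little while):

piDigits = First[RealDigits[Pi, 10, 10^7]];
(* overlapping 4-tuples, each paired with the position of its first digit *)
indexedTuples4 = Transpose[{Range[Length[piDigits] - 3], Partition[piDigits, 4, 1]}];
indexedTuples4[[1 ;; 3]] (* {{1, {3,1,4,1}}, {2, {1,4,1,5}}, {3, {4,1,5,9}}} *)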

Using the above-defined dateQ and dateOf functions, I can now quickly find all digit sequences that have a date interpretation.

Finding all digit sequences that have a date interpretation

Here are some of the date interpretations found. Each sublist is of the form {date, startingDigit, digitSequenceRepresentingTheDate}.

Sublist with the form date, startingDigit, digitSequenceRepresentingTheDate

We have found about 8.1 million dates represented as four digits, about 3.8 million dates as five digits, and about 365,000 dates represented as six digits, for more than 12 million dates in total.

Dates represented at four, five, and six digits

Note that I could have used string-processing functions (especially StringPosition) to find the positions of the date sequences. And, of course, I would have obtained the same result.

Using string-processing functions to find the positions of the date sequences

While the use of StringPosition is a good approach to deal with a single date, dealing with all 35,000 sequences would have taken much longer.

Time to deal with 35,000 sequences

We pause a moment and have a look at the counts found for the 4-tuples. Out of the 10,000 possible 4-tuples, each of the 8,100 that represent dates appears on average (1/10)⁴·10⁷ = 10³ times, based on the randomness of the digits of pi. And approximately, I expect a standard deviation of about 1000^(1/2) ≈ 31.6. Some quick calculations and a plot confirm these numbers.

Counts for the 4-tuples
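The back-of-the-envelope numbers, assuming independent, uniformly distributed digits:

expectedCount = (1/10)^4*10^7 (* 1000 occurrences per date 4-tuple *)
expectedSigma = N[Sqrt[expectedCount]] (* ≈ 31.6 *)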

The histogram of the counts shows the expected bell curve.

Histogram showing the expected bell curve

And the following graphic shows how often each of the 4-tuples that represent dates was found in the ten million decimal digits. We enumerate the 4-tuples by concatenating the digits; as a result, I see “empty” vertical stripes in the regions where no 4-tuples represent dates.

4-tuples that represent dates were found in the ten million decimal digits

Now I continue to process the found date positions. We group the results into sublists of identical dates.

Grouping the results into sublists of identical dates

Every date does indeed occur in the first 10 million digits, meaning all 36,525 different dates are found. (We will see later that I did not calculate many more digits than needed.)

36,525 different dates found in the first 10 million digits

Here is what a typical member of dateGroups looks like.

What a typical member of dateGroups looks like

Statistics of all dates

Now let’s do some statistics on the found dates. Here is the number of occurrences of each date in the first ten million digits of pi. Interestingly, and in the first moment maybe unexpectedly, many dates appear hundreds of times. The periodically occurring vertical stripes result from the October-November-December month quarters.

Number of occurrences of each date in the first ten million digits of pi

The mean spacing between the occurrences also clearly shows the early occurrence of four-digit dates with average spacings below 10,000, the five-digit dates with spacings around 100,000, and the six-digit dates with spacings around one million.

Mean spacing between the occurrences

For easier readability, I format the triples {date, startingPosition, dateDigitSequence} in a customized manner.

Formatting triples for easier readability

The most frequent date in the first ten million digits of pi is Aug 6 1939—it occurs 1,362 times.

Most frequent date in the first ten million digits

Now let’s find the least occurring dates in the first ten million digits of pi. These three dates occur only once in the first ten million digits.

Least occurring dates in the first ten million digits

And all of these dates occur only twice in the first ten million digits.

Dates that occur only twice in the first ten million digits

Here is the distribution of the number of the date occurrences. The three peaks corresponding to the six-, five-, and four-digit date representations (from left to right) are clearly distinct. The dates that are represented by 6-tuples each occur only a very few times, while the dates that can be represented by 4-tuples, as already seen above, appear on average about 1,200 times.

Distribution of the number of the date occurrences

We can also accumulate by year and display the date interpretations per year (the smaller values at the beginning and end come from the truncation of the date range to ensure uniqueness). The distribution is nearly uniform.

Display the date interpretations per year

Let’s have a look at the dates with some “neat” date sequences and how often they occur. As the results in dateGroups are sorted by date, I can easily access a given date. When does the date 11-11-11 occur?

Dates with date sequences and how often they occur

And where does the date 1-23-45 occur?

Where does the date 1-23-45 occur

No date starts on its “own” position (meaning there is no example such as January 1, 1945 [1-1-4-5] in position 1145).

No date starts on its "own" position

But one palindromic case exists: March 3, 1985 (3.3.8.5), which occurs at palindromic position 5833.

One palindromic case exists

A very special date is January 9, 1936: 1.9.3.6 appears at the position of the 1,936th prime, 16,747.

1.9.3.6 appears at the position of the 1,936th prime

Let’s see what anniversaries happened on this day in history.

Anniversaries on January 9, 1936

While no date appeared at its “own” position, if I slightly relax this condition and search for all dates that overlap with their own digits’ positions, I do find some dates.

All dates that overlap with its digits' positions

And at more than 100 positions within the first ten million digits of pi, I find the famous pi starting sequence 3,1,4,1,5 again.

Finding pi again within the first ten million digits

Within the digits of pi I do not just find birthday dates, but also physical constant days, such as the ħ-day (the reduced Planck constant day), which was celebrated as the centurial instance on October 5, 1945.

Finding physical constant days within pi

Here are the positions of the matching date sequences.

Positions of the matching date sequences using ListLogLinearPlot

And here is an attempt to visualize the appearance of all dates. In the date-digit plane, I place a point at the beginning of each date interpretation. We use a logarithmic scale for the digit position, and as a result, the number of points is much larger in the upper part of the graphic.

 Visualizing the appearance of all dates

For the dates that appear early on in the digit sequence, the finite extension of the date over the digits can be visualized too. A date extends over four to six digits in the digit sequence. The next graphic shows all digits of all dates that start within the first 10,000 digits.

All digits of all dates that start within the first 10,000 digits

After coarse-graining, the distribution is quite uniform.

Distribution is uniform using coarse-graining

So far I have taken a date and looked at where this date starts in the digit sequence of pi. Now let’s look from the reverse direction: how many dates intersect at a given digit of pi? To find the total counts of dates for each digit, I loop over the dates and accumulate the counts for each digit.

Finding the total counts of dates for each digit
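This accumulation can be sketched as follows; dateInterpretations stands for the list of {date, startingDigit, digitSequence} triples found above:

digitUseCounts = ConstantArray[0, 10^7];
Scan[With[{start = #[[2]], len = Length[#[[3]]]},
   digitUseCounts[[start ;; start + len - 1]] += 1] &, dateInterpretations];
{Max[digitUseCounts], Count[digitUseCounts, 0]} (* busiest digit, unused digits *)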

A maximum of 20 dates occur at a given digit.

A maximum of 20 dates occur at a given digit.

Here are two intervals of 200 digits each. We see that most digits are used in a date interpretation.

Two intervals of 200 digits each

Above, I noted that I have about 12 million dates in the digit sequence under consideration. The digit sequence that I used was only ten million digits long, and each date needs about five digits. This means the dates use about 60 million digits. It follows that many of the ten million digits must be shared, each used on average about six times. Only 2,005 out of the first ten million digits are not used in any of the date interpretations, meaning that 99.98% of all digits are used for date interpretations (not all as starting positions).

2,005 out of the first ten million digits are not used in any of the date interpretations

And here is the histogram of the distribution of the number of dates present at a certain digit. The back-of-the-envelope number of an average of six dates per digit is clearly visible.

Histogram of the distribution of the number of dates present at a certain digit

The 2,005 positions that are not used are approximately uniformly distributed among the first ten million digits.

The 2,005 positions that are not used are approximately uniformly distributed

If I display the concrete positions of the non-used digits versus their expected average position, I obtain a random walk–like graph.

Random walk-like graph

So, what are the neighboring digits around the unused digits? One hundred sixty-two different five-neighborhoods exist. Looking at them immediately shows why the center digits cannot be part of a date: there are too many zeros before, at, or after them.

Neighboring digits around the unused digits

And the largest unused block of digits that appears is the six digits between positions 8,127,088 and 8,127,093.

Largest unused block of digits are the six digits between position 8,127,088 and 8,127,093

At a given digit, dates from various years overlap. The next graphic shows the range from the earliest to the latest year as a function of the digit position.

These are the unused digits together with three left- and three right-neighboring digits.

Unused digits together with three left- and three right-neighboring digits

Because the high coverage might at first seem unexpected, I select a random digit position and select all dates that use this digit.

Random digit position and select all dates that use this digit

And here is a visualization of the overlap of the dates.

Code for visualization of the overlap of the dates
Visualization of the overlap of the dates

The most-used digit is the 1 at position 2,645,274: 20 possible date interpretations meet at it.

Most-used digit is the 1 at position 2,645,274: 20 possible date interpretations meet at it

Here are the digits in its neighborhood and the possible date interpretations.

Digits in its neighborhood and the possible date interpretations

If I plot the years starting at a given digit for a larger number of digits (say, the first 10,000), then I see the relatively dense coverage of date interpretations in the digit-date plane.

Plot of years starting at a given digit for a larger amount of digits

Let’s now build a graph of dates that are “connected”. We’ll consider two dates connected if the two dates share a certain digit of the digit sequence (not necessarily the starting digit of a date).

Graph of dates that are connected

Here is the same graph for the first 600 digits with communities emphasized.

Graph for the first 600 digits with communities emphasized

We continue with calculating the mean distance between two occurrences of the same date.

Calculating the mean distance between two occurrences of the same date

The first occurrences of dates

The first occurrences of dates are the most interesting, so let’s extract these. We will work with two versions, one sorted by the date (the list firstOccurrences) they represent, and one sorted by the starting position (the list firstOccurrencesSortedByOccurrence) in the digits of pi.

Using firstOccurrences and firstOccurrencesSortedByOccurrence
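Sketched in code, with dateGroups as above (one sublist of {date, startingPosition, digitSequence} triples per date):

(* each date's occurrence with the smallest starting position *)
firstOccurrences = First[MinimalBy[#, #[[2]] &]] & /@ dateGroups;
firstOccurrencesSortedByOccurrence = SortBy[firstOccurrences, #[[2]] &];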

Here are the possible date interpretations that start within the first 10 digits of pi.

Possible date interpretations that start within the first 10 digits of pi

And here are the other extremes: the dates that appear deepest into the digit expansion.

Dates that appear deepest into the digit expansion

We see that Wed Nov 23 1960 starts only at position 9,982,546 (= 2·7·713,039), so by starting with the first ten million digits, I was a bit lucky to catch it. Here is a quick direct check of this record-setting date.

Direct check of this record-setting date

So, who are the lucky (well-known) people associated with this number through their birthday?

People associated with November 23 1960 as their birthday

And what were the Moon phases on the top dozen out-pi-landish dates?

Moon phases on the top dozen out-pi-landish dates

And while Wed Nov 23 1960 is furthest out in the decimal digit sequence, the last prime date in the list is Oct 22 1995.

The last prime date

In general, fewer than 10% of all first-appearance positions are prime.

Percentage of first date appearances being prime

Often one maps the digits of pi to directions in the plane and forms a random walk. We do the same based on the date differences between consecutive first appearances of dates. We obtain typical-looking 2D random walk images.

Date differences between consecutive first appearances of dates

Here are the first-occurring date positions for the last few years. The bursts in October, November, and December of each year are caused by the need for five or six consecutive digits, while January to September can be encoded with fewer digits if I skip the optional zeros.

First-occurring date positions for the last few years

If I include all dates, I get, of course, a much denser filled graphic.

All date positions for the last few years

A logarithmic vertical axis shows that most dates occur between the thousandth and millionth digits.

Logarithmic vertical axis shows that most dates occur between the thousandth and millionth digits

To get a more intuitive understanding of overall uniformity and local randomness in the digit sequence (and as a result in the dates), I make a Voronoi tessellation of the day-digit plane based on points at the first occurrence of a date. The decreasing density for increasing digits results from the fact that I only take first-date occurrences into account.

Voronoi tessellation of the day-digit plane based on points at the first occurrence of a date

Easter Sunday is a good date to visualize, as its date varies from year to year.

Visualizing Easter Sunday dates

The mean first-occurrence position depends, of course, on the number of digits needed to encode a date.

Finding mean first occurrence

The mean occurrence is at 239,083, but due to the outliers at a few million digits, the standard deviation is much larger.

The mean occurrence is at 239,083

Here are the first occurrences of the “nice” dates that are formed by repetition of a single digit.

First occurrences of the nice dates that are formed by repetition of a single digit

The detailed distribution of the number of occurrences of first dates has the largest density within the first few tens of thousands of digits.

Detailed distribution of the number of occurrences of first dates

A logarithmic axis shows the distribution much better, but because of the increasing bin sizes, the maximum has to be interpreted with care.

Logarithmic axis showing the distribution

The last distribution is mostly a weighted superposition of the first occurrences of four-, five-, and six-digit sequences.

The last distribution is mostly a weighted superposition of the first occurrences of four-, five-, and six-digit sequences

And here is the cumulative distribution of the dates as a function of the digits’ positions. We see that the first 1% of the ten million digits already covers 60% of all dates.

Cumulative distribution of the dates as a function of the digits' positions

Slightly more dates start at even positions than at odd positions.

More dates start at even positions than at odd positions
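A quick tally of the starting positions modulo 2 (a sketch using the firstOccurrences list from above):

Counts[Mod[firstOccurrences[[All, 2]], 2]] (* 0 → even positions, 1 → odd positions *)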

We could do the same with mod 3, mod 4, … . The left image shows the deviation of each congruence class from its average value, and the right image shows the higher congruences, all considered again mod 2.

Deviation from congruences from average value and higher congruances

The actual number of first occurrences per year fluctuates around the mean value.

The number of first occurrences per year fluctuates around the mean value

The mean number of first-date interpretations sorted by month clearly shows the difference between the one-digit months and the two-digit months.

The mean number of first-date interpretations sorted by month

The mean number by day of the month (ranging from 1 to 31) is, on average, a slowly increasing function.

The mean number by day of the month

Finally, here are the mean occurrences by weekday. Most first date occurrences happen for dates that are Wednesdays.

The mean occurrences by weekday

Above I observed that most digits participate in a possible date interpretation. Only relatively few digits (121,470) participate in a first-occurring date interpretation.

Few numbers participate in a first-occurring date interpretation

Some of the position sequences overlap anyway, and I can form network chains of the dates with overlapping digit sequences.

Network chains of the dates with overlapping digit sequences

The next graphic shows the increasing gap sizes between consecutive dates.

Increasing gap sizes between consecutive dates

Distribution of the gap sizes:

Distribution of the gap sizes

Here are pairs of consecutively occurring date-interpretations that have the largest gap between them. The larger gaps were clearly visible in the penultimate graphic.

Pairs of consecutively occurring date-interpretations that have the largest gap between them

Dates in other expansions and in other constants

Now, the very special dates are the ones where the concatenated continued fraction (cf) expansion position agrees with the decimal expansion position. By concatenated continued fraction expansion, I mean the digits on the left at each level of the following continued fraction.

Concatenated continued fraction expansion

This gives the following cf-pi string:

Cf-pi string

And, interestingly, there is just one such date.

One date in cf-pi string
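For readers who want to experiment, here is a minimal sketch of the concatenate-and-search step; the number of continued fraction terms and the date string searched for are illustrative choices:

cfString = StringJoin[ToString /@ ContinuedFraction[Pi, 10^4]]; (* concatenate the cf terms into one digit string *)
StringPosition[cfString, "62315", 1] (* first occurrence of June 23, 2015 as "62315" *)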

None of the calculations carried out so far were special to the digits of pi. The digits of any other irrational number (or even of sufficiently long rational numbers) contain date interpretations. With some overnight searches, it is straightforward to find many numeric expressions that contain the dates of this year (2015). Here they are, put together in an interactive demonstration.

We now come to the end of our musings. As a last example, let's interpret digit positions as seconds after this year's pi-time on March 14, 9:26:53. How long would I have to wait to see the digit sequence 3·1·4·1·5 in the decimal expansion of other constants? Can one find a (small) expression such that 3·1·4·1·5 does not occur in the first million digits? (The majority of the elements of the following list ξs are just directly written down random expressions; the last elements were found in a search for expressions that have the digit sequence 3·1·4·1·5 as far out as possible.)

Digit positions as seconds after this year's pi-time
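As a rough sketch of such a search (the constants below are illustrative, not the post's actual list ξs):

(* position of the first occurrence of "31415" in the first 10^6 decimal digits of x *)
firstPiPosition[x_] := StringPosition[StringJoin[ToString /@ First[RealDigits[x, 10, 10^6]]], "31415", 1]
firstPiPosition /@ {Sqrt[2], E, Log[2]}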

Here are two rational numbers whose decimal expansions contain the digit sequence:

Two rational numbers whose decimal expansions contain the digit sequence

And here are two integers with the starting digit sequence of pi.

Two integers with the starting digit sequence of pi

Using the neat new function TimelinePlot that Brett Champion described in his last blog post, I can easily show how long I would have to wait.

Using TimelinePlot with pi
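Schematically, with a single made-up waiting time (the actual positions come from the searches above):

piTime = DateObject[{2015, 3, 14, 9, 26, 53}]; (* this year's pi-time *)
TimelinePlot[<|"example constant" -> piTime + Quantity[50422, "Seconds"]|>]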

We encourage readers to explore the dates in the digits of pi further, or to replace pi with another constant (for instance, Euler's number E, to justify the title of this post), and maybe even to replace 10 by another base. The overall, qualitative structures will be the same for almost all irrational numbers. (For a change, try ChampernowneNumber[10].) Will ten million digits be enough to find every date in, say, E (where is October 21, 2014)? Which special dates are hidden in other constants? These and many more things are left to explore.

Download this post as a Computable Document Format (CDF) file.

Which Is Closer: Local Beer or Local Whiskey? http://blog.wolfram.com/2014/08/19/which-is-closer-local-beer-or-local-whiskey/ http://blog.wolfram.com/2014/08/19/which-is-closer-local-beer-or-local-whiskey/#comments Tue, 19 Aug 2014 19:14:42 +0000 Michael Trott http://blog.internal.wolfram.com/?p=20875 In today’s blog post, we will use some of the new features of the Wolfram Language, such as language processing, geometric regions, map-making capabilities, and deploying forms to analyze and visualize the distribution of beer breweries and whiskey distilleries in the US. In particular, we want to answer the core question: for which fraction of the US is the nearest brewery further away than the nearest distillery?

Disclaimer: you may read, carry out, and modify the inputs in this blog post independent of your age. Hands-on taste tests might require a certain minimum legal age (check your country's and state's laws).

We start by importing two images from Wikipedia to set the theme; later we will use them on maps.

Image of beer vs. image of whiskey

We will restrict our analysis to the lower 48 states. We get the polygon of the US and its latitude/longitude boundaries for repeated use in the following.

Polygon of the US and its latitude/longitude boundaries

And we define a function that tests if a point lies within the continental US.

We define a function that tests if a point lies within the continental US
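A minimal sketch of such a test (using the whole United States entity; the post restricts this to the lower-48 polygon):

(* True if a {lat, long} pair lies within the US *)
inUSQ[latLong : {_?NumericQ, _?NumericQ}] := GeoWithinQ[Entity["Country", "UnitedStates"], GeoPosition[latLong]]
inUSQ[{40.1, -88.2}] (* Champaign, IL *)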

We start with beer. Let’s have a look at the yearly US beer production and consumption over the last few decades.

Yearly US beer production and consumption

This production puts the US in second place, after China, on the world list of beer producers. (More details about the international beer economy can be found here.)

This production puts the US in second place, after China, on the world list of beer producers

And here is a quick look at the worldwide per capita beer consumption.

Worldwide per capita beer consumption

The consumption of the leading 30 countries in natural units, kegs of beer:

Consumption of the leading 30 countries in natural units, kegs of beer

Some countries prefer drinking wine (see here for a detailed discussion of this subject). The following graphic shows (on a logarithmic base 2 scale) the ratio of beer consumption to wine consumption. Negative logarithmic ratios mean a higher wine consumption compared to beer consumption. (See the American Association of Wine Economists’ working paper no. 79 for a detailed study of the correlation between wine and beer consumption with GDP, mean temperature, etc.)

Ratio of beer consumption to wine consumption

We start with the beer breweries. To plot and analyze, we need a list of breweries. The Wolfram Knowledgebase contains data about a lot of companies, organizations, food, geographic regions, and global beer production and consumption. But breweries are not yet part of the Wolfram Knowledgebase. With some web searching, we can more or less straightforwardly find a web page with a listing of all US breweries. We then import the data about 2600+ beer breweries in the US as a structured dataset. This count is an all-time high for the last 125 years. (For a complete list of historical breweries in the US, you can become a member of the American Breweriana Association and download their full database, which also covers long-closed breweries.)

Beer breweries

Here are a few randomly selected entries from the dataset.

Random selections from dataset

We see that for each brewery, we have their name, the city where they are located, their website URL, and their phone number (the BC, BP, and similar abbreviations indicate whether and what you can eat with your beer, which is irrelevant for today's blog post).

Next, we process the data, remove breweries no longer in operation, and extract brewery names, addresses, and ZIP codes.

Processing the data

We now have data for 2600+ breweries.

Data for over 2600+ breweries

For a geographic analysis, we resolve the ZIP codes to actual lat/long coordinates using the EntityValue function.

Resolve ZIP codes to actual lat/long coordinates using EntityValue function
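In its simplest form, the lookup for a single ZIP code could read as follows (assuming the "Coordinates" property of ZIP code entities):

zipCoordinates[zip_String] := EntityValue[Entity["ZIPCode", zip], "Coordinates"] (* {lat, long} of the ZIP code region *)
zipCoordinates["53704"]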

Unfortunately, not all ZIP codes were resolved to actual latitudes and longitudes. These are the ones where we did not successfully find a geographic location.

Unsuccessful geographic location resolving

Why did we not find coordinates for these ZIP codes? As frequently happens with non-programmatically curated data, there are mistakes in the data, and so we will have to clean it up. The easiest way would be to simply ignore these breweries, but we can do better. These are the actual entries of the breweries with missing coordinates.

Actual entries of the breweries with missing coordinates

A quick check at the USPS website shows that, for instance, the first of the above ZIP codes, 54704, is not a ZIP code that the USPS recognizes and/or delivers mail to.

So no wonder the Wolfram Knowledgebase was not able to find a coordinate for this “ZIP code”. Fortunately, we can make progress in fixing the incorrect ZIP codes programmatically. Assume the nonexistent ZIP code was just a typo. Let’s find a ZIP code in Madison, WI that has a small string distance to the ZIP code 54704.

Find a ZIP code in Madison, WI that has a small string distance to the ZIP code 54704

The ZIP code 53704 is as near as possible to 54704 in both string and Euclidean distance.

ZIP code 53704 is in string (and Euclidean) distance as near as possible to 54704

And taking a quick look at the company’s website confirms that 53704 is the correct ZIP code. This observation, together with the programmatic ZIP code lookups, allows us to define a function to programmatically correct the ZIP codes in case they are just simple typos.

Define function to programmatically correct the ZIP codes in case they are just simple typos
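Here is a hedged sketch of the repair idea; the candidate count of 20 and the use of GeoNearest are illustrative choices, not the post's exact fixDataRules:

(* among ZIP codes near the city center, pick the one with the smallest edit distance to the suspicious ZIP code *)
repairZIP[badZip_String, cityCenter_] := First[MinimalBy[GeoNearest["ZIPCode", cityCenter, 20], EditDistance[badZip, CanonicalName[#]] &]]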

For instance, for Black Market Brewing in Temecula, we find that the corrected ZIP code is 92590.

Corrected ZIP code example

So, to clean the data, we perform some string replacements to get a dataset that has ZIP codes that exist.

Cleaning data to get dataset with existing ZIP codes

We now acquire coordinates again for the corrected dataset.

We now acquire coordinates again for the corrected dataset

Now we have coordinates for all breweries.

Coordinates for all breweries

And all ZIP codes are now associated with a geographic position. (At least they were when I wrote this blog post; because the website used gets updated regularly, new typos could appear at a later point in time, and the fixDataRules would have to be updated accordingly.)

All ZIP codes are now associated with a geographic position

Now that we have coordinates, we can make a map with all the breweries indicated.

Map with all the breweries indicated

Let’s pause for a moment and think about what goes into beer. According to the Reinheitsgebot from November 1487, it’s just malted barley, hops, and water (plus yeast). The detailed composition of water has an important influence on a beer’s taste. The water composition in turn relates to hydrogeology. (See this paper for a detailed discussion of the relation.) Carrying out a quick web search lets us find a site showing important natural springs in the US. We import the coordinates of the springs and plot them together with the breweries.

Import the coordinates of the springs and plot them together with the breweries

We redraw the last map, but this time add the natural springs in blue. Without trying to quantify the correlation here between breweries and springs, a visual correlation is clearly visible.

Visual correlation is clearly visible

We quickly calculate a plot of the distribution of the distances of a brewery to the nearest spring from the list springPositions.

Calculate a plot of the distribution of the distances of a brewery to the nearest spring

And if we connect each brewery to the nearest spring, we obtain the following graphic.

Connect each brewery to the nearest spring

We can also have a quick look at which regions of the US can use their local barley and hops, as the Wolfram Knowledgebase knows in which US states these two plants can be grown.

US regions that use local barley and hops

(For the importance of spring water for whiskey, see this paper.) Most important for a beer's taste is the hops (see this paper and this paper for more details). The α-acids of hops give the beer its bitter taste. The most commonly occurring α-acid in hops is humulone. (To refresh your chemistry knowledge, see the step-by-step derivation for where to place the dots in the diagram below.)

Humulone

But let’s not be sidetracked by chemistry and instead focus in this blog post on geographic aspects relating to beer.

Historically, a relationship has existed between beer production and the church (in the form of monasteries; see “A Comprehensive History of Beer Brewing” for details). Today we don't see a correlation (other than through population densities) between religion and beer production. Just to confirm, let's draw a map of major churches in the US together with the breweries. At the website of the Hartford Institute, we find a listing of major churches. (Yes, it would have been fun to really draw all 110,000+ churches of the US on a map, but the blog team did not want me to spend $80–$100 to buy a US church database and support spam-encouraging companies, e.g. from here or here.)

Beers vs. churches

Back to the breweries. Instead of a cloud of points of individual breweries, we can construct a continuous brewery probability field and plot it. This shows the hotspots of breweries in the US more prominently. To do so, we calculate a smooth kernel distribution for the brewery density in projected coordinates. We use the Sheather–Jones bandwidth estimator, which relieves us of needing to specify an explicit bandwidth. Determining the optimal bandwidth is a nontrivial calculation and will take a few minutes.

Sheather–Jones bandwidth estimator
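The core call is a one-liner (projectedPts standing in for the list of projected brewery coordinates):

(* kernel density estimate with Sheather-Jones bandwidth selection *)
breweryDensity = SmoothKernelDistribution[projectedPts, "SheatherJones"];
PDF[breweryDensity, {x, y}] (* density at a point {x, y} *)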

We plot the resulting distribution and map the resulting image onto a map of the US. Blue denotes a low brewery density and red a high one. Denver, Oregon, and Southern California clearly stand out as local hotspots.

We plot the resulting distribution and map the resulting image onto a map of the US

The black points on top of the brewery density map are the actual brewery locations.

Brewery density map

Using the brewery density as an elevation, we can plot the beer topography of the US. Previously unknown (beer-density) mountain ranges and peaks become visible in topographically flat areas.

Beer topography of the US

The next graphic shows a map where we accumulate the brewery counts by latitude and longitude. Similar to the classic wheat belt, we see two beer belts running East to West and two beer belts running North to South.

Brewery longitude-latitude

Let’s determine the elevations of the breweries and make a histogram to see whether there is more interest in a locally grown beer at low or high elevations.

Elevations of breweries

It seems that elevations between 500 and 1500 ft are most popular for places making a fresh cold barley pop (with an additional peak at around 5000 ft caused by the many breweries in the Denver region).

Brewer elevation histogram

For further use, we summarize all relevant information about the breweries in breweryData.

Summarize relevant information about breweries

We define some functions to find the nearest brewery and the distance to the nearest brewery.

Define functions to find the nearest brewery
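A minimal version of such functions, with breweryPositions standing in for the list of brewery GeoPosition objects built above:

nearestBrewery[p_GeoPosition] := First[GeoNearest[breweryPositions, p]]
nearestBreweryDistance[p_GeoPosition] := GeoDistance[p, nearestBrewery[p]]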

Here are the nearest breweries to the Wolfram headquarters in Champaign, IL.

Breweries close to Champaign, IL

And here is a plot of the distances from Champaign to all breweries, sorted by size. After accounting for the breweries in the immediate neighborhood of Champaign, for the first nearly 1000 miles we see a nearly linear increase in the number of breweries with a slope of approximately 2.1 breweries/mile.

Plot of the distances from Champaign to all breweries

Now that we know where to find a freshly brewed beer, let's switch focus and concentrate on whiskey distilleries. Again, after some web searching we find a web page with a listing of all distilleries in the continental US. Again, we read in the data, this time in unstructured form, extract the distillery names and cities, and carry out some data cleanup as we go.

Whiskey data extraction

This time, we have the name of the distillery, their website, and the city as available data. Here are some example distilleries.

Example distilleries

A quick check shows that we did a proper job in cleaning the data and now have locations for all distilleries.

Example distilleries

We now have a list of about 500 distilleries.

502 distilleries

We retrieve the elevations of the cities with distilleries.

Elevations of cities with distilleries

The average elevation of a distillery does not deviate much from that of the breweries.

Little deviation between elevation of distilleries and breweries

We summarize all relevant information about the distilleries in distilleryData.

We summarize all relevant information about the distilleries in distilleryData

We define functions to find the nearest distillery and the distance to the nearest distillery.

Define functions to find the nearest distillery and the distance to the nearest distillery

We now use the function nearestDistilleries to locate the nearest distillery and make a map of the bearings to take to go to the nearest distillery.

nearestDistilleries

Let’s come back to breweries. What’s the distribution by state? Here are the states with the most breweries.

Brewery distribution by state

If we normalize by state population, we get the following ranking.

Normalizing for state population

And which city has the most breweries? We accumulate the ZIP codes by city. Here are the top dozen cities by brewery count.

Cities with most breweries

And here is a more visual representation of the top 25 brewery cities. Over each of the top brewery cities, we show a beer glass whose size is proportional to the number of breweries.

Visual representation of top 25 brewery cities
Visual representation of top 25 brewery cities

Oregon isn’t a very large state, and it includes beer capital Portland, so let’s plan a trip to visit all breweries. To minimize driving, we calculate the shortest tour that visits all of the state’s breweries. (All distances are along geodesics, not driving distances on roads.)

Calculate the shortest tour that visits all of Oregon's breweries

A visit to all Oregon breweries will be a 1,720-mile drive.

A visit to all Oregon breweries will be a 1720-mile drive
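The heart of such a tour calculation is FindShortestTour with a geodesic distance function (oregonPositions is a placeholder for the list of Oregon brewery coordinates):

{tourLength, order} = FindShortestTour[oregonPositions, DistanceFunction -> (GeoDistance[#1, #2] &)];
GeoGraphics[{Thick, Red, GeoPath[oregonPositions[[order]]]}] (* draw the tour *)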

And here is a sketch of the shortest trips that hit all breweries for each of the lower 48 states.

Sketch of the shortest trips that hit all breweries for each of the lower 48 states

Let's quickly make a website that lets you plan a short beer tour through your state (and maybe some neighboring states). The function makeShortestTourDisplay calculates and visualizes the shortest path. For comparison, the length of a tour with the breweries visited in random order is also shown. The shortest path often saves a factor of 5 to 15 in driving distance.

Map the breweries

Tour through the state

Shortest tour display

Drive responsibly on brewery tours!

We deploy the function makeShortestTourDisplay to let you easily plan your favorite beer state tours.

Deploy the function makeShortestTourDisplay
Making beer tour plan

And if the reader has time to take a year off work, a visit to all breweries in the continental US is just a 41,000-mile trip.

Brief 41,000-mile trip

The collected caps from such a trip could make beautiful artwork! Here is a graphic showing one of the possible tours. The color along the tour changes continuously with the spectrum, and we start in the Northeast.

Possible tour

On average, we would have to drive just 15 miles between two breweries.

Fifteen-mile drive between two breweries

Here is a distribution of the distances.

Distribution of the distances

Such a trip covering all breweries would involve driving nearly 300 miles up and down.

Driving distance of 300 miles up and down

Here is a plot of the height profile along the trip.

Height profile along the trip

We compare the all-brewery trip with the all-distillery trip, which is still about 21,000 miles.

All brewery vs. all distillery

To calculate the distribution function for the average distance from a US citizen to the nearest brewery and similar facts, we build a list of coordinates and the population of all ZIP code regions. We will only consider the part of the population that is older than 21 years. We retrieve this data for the ~30,000 ZIP codes.

List of coordinates and the population of all ZIP code regions

We exclude the ZIP codes that are in Alaska, Hawaii, and Guam and concentrate on the 48 states of the continental US.

Exclude ZIP codes in Alaska, Hawaii, and Guam

We will take into account adults from the ~29,000 ZIP code areas with a nonvanishing number of adults, totaling about 214 million people.

Adults from the ~29,000 populated ZIP code areas with a non-vanishing number of adults totaling about 214 million people

Now that we have a function to calculate the distance to the nearest brewery at hand and a list of positions and populations for all ZIP codes, let’s do some elementary statistics using this data.

Elementary statistics using this data

Here is a plot of the distribution of distances from all ZIP codes to the nearest brewery.

Distribution of distances from all ZIP codes to nearest brewery

More than 32 million Americans have a local brewery within their own ZIP code region.

Over 32 million Americans have local brewery within their ZIP code region

While ~15% of the above-drinking-age population is located in the same ZIP code as a brewery, this does not imply zero distance to the nearest brewery. As a rough estimate, we will model the within-ZIP-code situation as the distance between two random points. In the spirit of the famous spherical cow, we will approximate the shape of a ZIP code as a disk. Thus, we need the size distribution of the ZIP code areas.

The average distance between two randomly selected points in a disk is approximately the radius of the disk itself (exactly 128/(45π) ≈ 0.905 times the radius).

Average distance between two randomly selected points from a disk is approximately the radius of the disk itself
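We can quickly check this claim by simulation:

(* Monte Carlo estimate of the mean distance between two random points in a unit disk *)
Mean[Table[EuclideanDistance[RandomPoint[Disk[]], RandomPoint[Disk[]]], {10^5}]]
N[128/(45 Pi)] (* exact value, about 0.9054 *)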

Within our crude model, we take the areas of the cities and calculate the radius of the corresponding disk. We could do a much more refined Monte Carlo model using the actual polygons of the ZIP code regions, but for the qualitative results that we are interested in, this would be overkill.

Calculate areas of cities and radius of corresponding disk

Now, with a more refined treatment of the same ZIP code data, on average, for a US citizen in the lower 48 states, the nearest brewery is still only about 13.5 miles away.

Nearest brewery 13.5 miles away for most US citizens

And, modulo a scale factor, the distribution of distances to the nearest brewery is the same as the distribution above.

Same distribution as above

Let’s redo the same calculation for the distilleries.

Same calculation for distilleries

The weighted average distance to the nearest distillery is about 30 miles for the above-drinking-age customers of the lower 48 states.

Weighted average distance to the nearest distillery is about 30 miles for the above-drinking-age customers of the lower 48 states

And for about 1 in 7 Americans, the nearest distillery is closer than the nearest brewery.

~16% of Americans live closer to distillery than brewery

We define a function that, for a given geographic position, calculates the distance to the nearest brewery and the nearest distillery.

Calculate the distance to nearest brewery and nearest distillery

For example, if you are at Mt. Rushmore, the nearest brewery is just 18 miles away, while the nearest distillery is nearly 160 miles away.

Mt. Rushmore example

For some visualizations to be made below, we find the distance to the nearest brewery and the nearest distillery for a dense grid of points in the US. It will take about 20 minutes to calculate these 320,000 distances, so we have time to visit the nearest espresso machine in the meantime.

Find distance to nearest brewery and nearest distillery
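Schematically, with a much coarser grid than the post's 320,000 points and the nearest-distance functions sketched earlier (nearestDistilleryDistance assumed defined analogously):

grid = Flatten[Table[GeoPosition[{lat, long}], {lat, 25., 49., 0.5}, {long, -125., -67., 0.5}], 1];
gridDistances = Map[{nearestBreweryDistance[#], nearestDistilleryDistance[#]} &, grid];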

So, how far away can the nearest brewery be from an adult US citizen (within the lower 48 states)? We calculate the maximal distance to a brewery.

Calculate the maximal distance to a brewery

We find that the city farthest away from a freshly brewed beer is Ely, Nevada, about 170 miles away.

Furthest away from freshly brewed beer is Ely

And here is the maximal distance to a distillery. From Redford, Texas it is about 335 miles to the nearest distillery.

Maximal distance to a distillery

Of the inhabitants of these two cities, the people of Ely have “only” a 188-mile distance to a distillery, and the people of Redford are 54 miles from the nearest brewery.

Ely vs. Redford

After having found the extremal-distance cities, the next natural question is which city has the maximal distance to either a brewery or a distillery.

Maximal distance to brewery or distillery

Let’s have a look at the situation in the middle of Kansas. The ~100 adult citizens of Manter, Kansas are quite far away from a local alcoholic drink.

Alcohol situation in Manter, Kansas

And here is a detailed look at the breweries/distilleries situation near Manter.

Breweries/distilleries situation near Manter

Now that we have the detailed distances for a dense grid of points over the continental US, let’s visualize this data. First, we make plots showing the distance, where blue indicates small distances and red dangerously large distances.

Visualizing alcohol data

Using these distance plots properly projected into the US yields a more natural-looking image.

Natural-looking image of distance plots

And here is the corresponding image for distilleries. Note the clearly visible great Distillery Ridge mountain range between Eastern US distilleries and Western US distilleries.

Corresponding image for distilleries

For completeness, here is the maximum of either the distance to the nearest brewery or the distance to the nearest distillery.

Maximum of either the distance to the nearest brewery or the distance to the nearest distillery

And here is the equivalent 3D image with the distance to the next brewery or distillery shown as vertical elevation. We also use a typical elevation plot coloring scheme for this graphic.

Distance to the next brewery or distillery shown as vertical elevation

We can also zoom into the Big Dry Badlands mountain range to the East of Denver as an equal-distance-to-freshly-made-alcoholic-drink contour plot. The regions with a distance larger than 100 miles to the nearest brewery or distillery are emphasized with a purple background.

Zoom into the Big Dry Badlands mountain range to the East of Denver as an equal-distance-to-freshly-made-alcoholic-drink contour plot

Or, more explicit graphically, we can use the beer and whiskey images from earlier to show the regions that are closer to a brewery than to a distillery and vice versa. In the first image, the grayed-out regions are the ones where the nearest distillery is at a smaller distance than the nearest brewery. The second image shows regions where the nearest brewery is at a smaller distance than the nearest distillery in gray.

Use the beer and whiskey images from earlier to show the regions that are closer to a brewery than to a distillery

There are many more bells and whistles that we could add to these types of graphics. For instance, we could add some interactive elements to the above graphic that show details when hovering over the graphic.

Add interactive elements
Adding interactive features

Earlier in this blog post, we constructed an infographic about beer production and consumption in the US over the last few decades. After having analyzed distillery locations, a natural question is what role whiskey plays among all spirits. This paper analyzes the average alcohol content of spirits consumed in the US over a 50+ year time span at the level of US states. If you have a subscription, you can easily import the main findings of the study, which are in Table 1.

Imported findings from study

Here is a snippet of the data. The average alcohol content of the spirits consumed decreased substantially from 1950 to 2000, mainly due to a decrease in whiskey consumption.

Here is a graphical representation of the data from 1950 to 2000.

Graphical representation of the data from 1950 to 2000

So far we have concentrated on beer- and whiskey-related issues on a geographic scale. Let’s finish with some stats and infographics on the kinds of beer produced in the breweries mapped above. Again, after some web searching, we find a page that lists the many types of beer, 160+ different styles to be precise. (See also the Handbook of Brewing and the “Brewers Association 2014 Beer Style Guidelines” for a detailed discussion of beer styles.)

Stats and infographics on kinds of beer produced

We again import the data. The web page is perfectly maintained and checked, so this time we do not have to carry out any data cleanup.

Importing data

How much beer one can drink depends on the alcohol content. Here is the distribution of beer styles by alcohol content. Hover over the graph to see the beer styles in the individual bins.

Distribution of beer styles by alcohol content

Beer colors are defined on a special scale called Standard Reference Method (SRM). Here is a translation of the SRM values to RGB colors.

Translation of the SRM values to RGB colors

How do beer colors correlate with alcohol content and bitterness? The following graphic shows the parameter ranges for the 160+ beer styles. Again, hover over the graph to see the beer style categories highlighted.

Parameter ranges for the over 160 beer styles

In an interactive 3D version, we can easily restrict the color values.

3D version

After visualizing breweries in the US and analyzing the alcohol content of beer types, what about the distribution of the actual brewed beers within the US? After doing some web searching again, we can find a website that lists breweries and the beers they brew.

So, let’s read in the beer data from the site for 2,600 breweries. We start with preparing a list of the relevant web pages.

Preparing a list of the relevant web pages

Next, we prepare for processing the individual pages.

Prepare for processing the individual pages

As this will take a while, we can display the breweries, their beers, and a link to the brewery website to entertain us in the meantime. Here is an example of what to display while waiting.

Breweries, beers, and a link to the brewery website

Now we process the data for all breweries. Time for another cup of coffee. To have some entertainment while processing the beers of 2,000+ breweries, we again use Monitor to display the last-analyzed brewery and their beers. We also show a clickable link to the brewery website so that the reader can choose a beer of their liking.

Process data for breweries

Here is a typical data entry. We have the brewery name, its location, and, if available, the actual beers, their classification as Lager, Bock, Doppelbock, Stout, etc., together with their alcohol content.

Typical data entry

Here is the distribution of the number of different beers made by the breweries. To get a feeling, we will quickly import some example images.

Distribution of the number of different beers made by the breweries

Concretely, more than 24,400 US-made beers were listed in the just-imported web pages.

24,400 US-made beers were listed in the just-imported web pages

Accumulating all beers gives the following cumulative distribution of the alcohol content.

Accumulating all beers gives cumulative distribution of the alcohol content

On average, a US beer has an alcohol content (by volume) of (6.7 ± 2.1)%.

US beer alcohol content average of 6.7%

If we tally up by beer type, we get the following distribution of types. India Pale Ale is the winner, followed by American Pale Ale.

Distribution of beer types

Now let’s put the places where a Hefeweizen is freshly brewed on a map.

Where to find freshest Hefeweizen brew

And here are some healthy breakfast beers with oatmeal or coffee (in the name).

Breakfast beers

For the carnivorous beer drinkers, there are plenty of options. Here are samples of beers with various mammals and fish in their name. (Using Select[# & @@@ Flatten[Last /@ Take[brewerBeerDataUS, All], 2], DeleteCases[Interpreter["Species"][StringSplit[#]], _Failure] =!= {} &], we could get a complete list of all animal beers.)

Beers with various mammals and fish in their name

What about the names of the individual beers? Here is the distribution of their (string) lengths. Hover over the columns to see the actual names.

Beer name string lengths

Suppose you plan a day trip of up to 125 miles in radius (meaning not longer than about a two-hour drive in each direction). How many different beers and beer types would you encounter as a function of your starting location? Building a fast lookup for the breweries up to distance d, we can calculate these numbers for a dense set of points across the US and visualize the resulting data geographically. (For simplicity, we assume a spherical Earth for this calculation.)

Calculate for a dense set of points across US
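A sketch of the counting step, assuming breweryBeerTypes is a list of {position, beerTypes} pairs assembled above:

beerTypesWithin[p_, d_] := Length[DeleteDuplicates[Flatten[Cases[breweryBeerTypes, {q_, types_} /; GeoDistance[p, q] < d :> types]]]]
beerTypesWithin[GeoPosition[{40.1, -88.2}], Quantity[125, "Miles"]]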

In the best-case scenario, you can try about 80 different beer types realized through more than 2000 different individual beers within a 125-mile radius.

80 different beer types realized through more than 2,000 different individual beers within a 125-mile radius

After so much work doing statistics on breweries, beer colors, beer names, etc., let’s have some real fun: let’s make some fun visualizations using the beers and logos of breweries.

Many of the brewery homepages show images of the beers that they make. Let’s import some of these and make a delicious beer (bottle, can, glass) collage.

Beer bottle collage

We continue by making a reduced version of brewerBeerDataUS that contains the breweries and URLs by state.

brewerBeerDataUS

Fortunately, many of the brewery websites have their logo on the front page, and in many cases the image has "logo" in its filename. This means a possible automated way to get logo images is to read in the front pages of the breweries' web presences.

Find logo via web presence of brewery

We will restrict our logo searches to logos that are not too wide or too tall, because we want to use them inside graphics.

Restrict logo search

We also define a small list of special-case lookups, especially for states that have only a few breweries.

Define a small list of special-case lookups

Now we are ready to carry out an automated search for brewery logos. To get some variety into the visualizations, we try to get about six different logos per state.

Automated search for brewery logos

After removing duplicates (from breweries that brew in more than one state), we have about 240 images at hand.

247 images

A simple collage of brewery logos does not look too interesting.

Simple brewery collage

So instead, let’s make some random and also symmetrized kaleidoscopic images of brewery logos. To do so, we will map the brewery logos into the polygons of a radial-symmetric arrangement of polygons. The function kaleidoscopePolygons generates such sets of polygons.

Random and symmetrized kaleidoscopic images of brewery logos

The next result shows two example sets of polygons with threefold and fourfold symmetry.

Two example sets of polygons with threefold and fourfold symmetry

And here are two random beer logo kaleidoscopes.

Two random beer logo kaleidoscopes

Here are four symmetric beer logo kaleidoscopes of different rotational symmetry orders.

Four symmetric beer logo kaleidoscopes of different rotational symmetry orders

Or we could add brewery stickers onto the faces of the Wolfram|Alpha Spikey, the rhombic hexecontahedron. As the faces of a rhombic hexecontahedron are quadrilaterals, the images don’t have to be distorted very much.

Add brewery stickers onto the faces of the Wolfram|Alpha spikey

Let’s end with randomly selecting a brewery logo for each state and mapping it onto the polygons of the state.

Randomly selecting a brewery logo for each state and mapping it onto the polygons of the state

The next graphic shows some randomly selected logos from states in the Northeast.

Randomly selected logos from Northeast states

And we finish with a brewery logo mapped onto each state of the continental US.

Brewery logo mapped onto each state of the continental US

We will now end and leave the analysis of wineries for a future blog post. For a more detailed account of the distribution of breweries throughout the US over the last few hundred years, and a variety of other beer-related geographical topics, I recommend reading the recent book The Geography of Beer, especially the chapter “Mapping United States Breweries 1612 to 2011”. For deciding if a bottle of beer, a glass of wine, or a shot of whiskey is right for you, follow this flowchart.


Download this post as a Computable Document Format (CDF) file.

To comment, please visit the original post at the Wolfram|Alpha Blog »

Musing about Rectangular Bar Magnets http://blog.wolfram.com/2013/08/27/musing-about-rectangular-bar-magnets/ http://blog.wolfram.com/2013/08/27/musing-about-rectangular-bar-magnets/#comments Tue, 27 Aug 2013 13:56:30 +0000 Michael Trott http://blog.internal.wolfram.com/?p=16495 (This is the third post in a three-part series about electrostatic and magnetostatic problems involving sharp edges.)

In the first blog post of this series, we looked at magnetic field configurations of piecewise straight wires. In the second post, we discussed charged cubes and orbits of test particles in their electric field. Today we will look at magnetic systems, concretely, mainly at a rectangular bar magnet with uniform magnetization.

As a warm-up, let’s use Wolfram|Alpha for the visualization of the magnetic induction of a magnetizable ball in a constant magnetic field. While this is a standard exercise for calculations involving spherical harmonic expansions, it is very convenient to have the magnetic field inside and outside the ball ready at one’s fingertips.

MagnetizableBallData

We define the magnetic induction in a cross section through the center of the sphere parallel to the outer field.

MagnetizableBallInCrossSection

Using the StreamPlot function, we can quickly and conveniently show the magnetic field direction in a cross section of the sphere.

StreamPlot
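For readers without Wolfram|Alpha access, here is a self-contained sketch using the standard textbook solution for a sphere of relative permeability μ in a uniform field along z (μ = 5 and a unit radius are arbitrary choices):

mu = 5; h0 = 1;
hField[x_?NumericQ, z_?NumericQ] := Module[{r = Sqrt[x^2 + z^2], m = (mu - 1)/(mu + 2) h0},
  If[r < 1, {0, 3 h0/(mu + 2)}, (* uniform, reduced field inside *)
   {3 m x z/r^5, h0 + m (3 z^2 - r^2)/r^5}]] (* uniform plus dipole field outside *)
StreamPlot[hField[x, z], {x, -2, 2}, {z, -2, 2}]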

And here is a plot of the magnitude of the field strength.

ContourPlot

What is the energy stored in the magnetic induction outside the sphere? We take the total magnetic field, subtract the external field, and integrate the magnetic energy density in spherical coordinates.

BOutsideGeneral

Now let's come to the rectangular bar magnet. To visualize the field of a bar magnet, we first need a closed-form expression for the magnetic field strength. Again we use Wolfram|Alpha through the WolframAlpha[] function to obtain the needed formulas. Already the magnetic scalar potential is a pretty large expression; we display a scaled-down version.

ImageResize

As a side note, the last expression is intimately related to the above-used potential of a cube. The magnetic scalar potential equals the derivative of the electric potential with respect to z. We also now use the more general form for arbitrary edge lengths a, b, and c to be able to model a bar magnet of realistic edge-length ratios. Here is the corresponding expression for the magnetic field H(x, y, z). (While elementary, a closed-form expression for the magnetic field around a rectangular bar magnet was only published in 2004 by Engel-Herbert and Hesjedal.)

BarMagnetHeld

For the following examples, we choose a bar magnet of dimensions 1 x 1 x 2. Plotting the magnetic field strength in the vicinity of an edge (in the x = 0 plane) shows a pronounced singularity of the magnetic field strength along the edge.

Plot3D

Calculating a series expansion around the center of an edge shows that the type of the singularity of the field strength is logarithmic.

SeriesHBarMagnet

Magnetic materials with smooth boundaries do not exhibit arbitrarily large field strength. The existence of the edges on the cube boundary causes the logarithmic singularities in the field strengths. Practically, one cannot measure arbitrarily high field strength at the edge of a bar magnet because of the discrete atomic nature of the magnet. But the fields near an edge of a magnet are used to produce larger magnetic fields; see for instance the two papers by Samofalov ([1] [2]).

Compared with the magnetic field value at the face center, at a distance from the corner of one millionth of the size of the magnet, the field strength is about four times larger.

NormHBarMagnet

A look at the magnetic field in the cross section of the magnet shows the upper and lower faces as the sources of the field. (These are quantitatively correct versions of the typical qualitative textbook sketches of the field lines of a bar magnet.)

Cross section of the magnet

Here is a slightly different view—a 3D plot of the surfaces of constant magnitude of the H-field.

ContourPlot3D

While the magnetic field H has sources, the magnetic induction B is source-free (remember the Maxwell equation div B = 0). We obtain the magnetic induction B by adding the magnetization M to the magnetic field H. (Here we use all dimensions set to 1; of course, in SI units the magnetic field strength and the magnetic induction have different units.) The right plot shows the field lines, which now are closed. The field lines of H and B coincide outside the magnet.

BBarMagnet

Using the function LineIntegralConvolutionPlot, we can make a visualization of the field lines quite similar to the images one obtains using a real magnet and iron filings.

LineIntegralConvolutionPlot

We know that the normal components of B and the tangential components of H should be continuous across the boundary of a magnet. We can quickly verify this for the fields along the right vertical face and the top face of the magnet in the equatorial plane.

tangentialNormalPlotSide

tangentialNormalPlotTop

The magnetic induction within the magnet is approximately constant in direction and magnitude. Here is a closed form for the value at the center.

BCenter

The next two plots show the magnetic field and the magnetic induction over the magnet.

GraphicsRow

GraphicsRowVisual

We can even calculate the series expansion at the center of the magnet to get a quantitative estimation of how constant the magnetic induction is at the center for symbolic edge lengths. The first nonvanishing contributions for the deviation from a constant field value are of second order. (Meaning the constancy is less than that of a Helmholtz coil discussed in the first blog post of this series.)

Series-D

We will end this blog post with an interactive example of two rectangular bar magnets. We assume both magnets are in a plane, and we can move them around freely. What does the resulting magnetic field look like? And, more important for many applications (such as magnetic drug delivery), what is the force on a small magnetizable particle in the field of two magnets? Because the force will be proportional to ∇|B|², an interesting and, at first, counterintuitive situation can arise:

The force always points toward the magnet, either to the north pole or to the south pole, but never away from it. By properly aligning two magnets, we can have a point outside of the two magnets where the magnetic induction B vanishes due to the two superimposed fields. On the side of this point toward the magnets, the force on a test particle is directed toward one of the two magnets, but in the other half-space, the force is directed away from the magnets. This effect, that the combined magnetic field of two simple magnets can be used to push a magnetic object away from the magnets, is of great relevance for controlled magnetic drug delivery in the human body. The following interactive demonstration lets us study the position of these points of vanishing field strength by changing the positions, orientations, and pole strengths of the two magnets.

ManipulateColumn

We end here and let the reader continue, for instance, by calculating the trajectories of a magnetic point dipole m in the field of a rectangular bar magnet. The force on the dipole is given either by F ∝ ∇(m·B) or by F ∝ (m·∇)B.

Many more interesting and novel calculations can be carried out for charged cubes and bar magnets. For example, what is the equivalent of Coulomb's (or Newton's) law of attraction between two cubes? This leads to some interesting and challenging six-dimensional integrals. We will continue our cube investigations in a later blog post.

Download this post as a Computable Document Format (CDF) file.

Even More Formulas… for Everything—From Filled Algebraic Curves to the Twitter Bird, the American Flag, Chocolate Easter Bunnies, and the Superman Solid http://blog.wolfram.com/2013/08/15/even-more-formulas-for-everything-from-filled-algebraic-curves-to-the-twitter-bird-the-american-flag-chocolate-easter-bunnies-and-the-superman-solid/ http://blog.wolfram.com/2013/08/15/even-more-formulas-for-everything-from-filled-algebraic-curves-to-the-twitter-bird-the-american-flag-chocolate-easter-bunnies-and-the-superman-solid/#comments Thu, 15 Aug 2013 15:21:13 +0000 Michael Trott http://blog.internal.wolfram.com/?p=16019 This blog post is the continuation of my last two posts (1, 2) about formulas for curves. So far, we have discussed how to make plane curves that are sketches of animals, faces, fictional characters, and more. In this post, we will discuss the constructions of some filled curves (laminae).

Here are some of the non-mathematical laminae that Wolfram|Alpha knows closed-form equations for:

shape lamina

Assume we want a filled curve instead of just the curve itself. For closed curves, say the James Bond logo, we could just take the curve parametrizations and fill the curves. As a graphics object, filling a curve is easy to realize by using the FilledCurve function.

James Bond curve

007

For the original curves, we had constructed closed-form Fourier series-based parametrizations. While the FilledCurve function yields a visually filled curve, it does not give us a closed-form mathematical formula for the region enclosed by the curves. We could write down contour integrals along the segment boundaries in the spirit of Cauchy’s theorem to differentiate the inside from the outside, but this also does not result in “nice” closed forms. So, for filled curves, we will use another approach, which brings us to the construction of laminae for various shapes.

The method we will use is based on constructive solid geometry. We will build the laminae from simple shaped regions that we first connect with operations such as AND or OR. In a second step, we will convert the logical operations by mathematical functions to obtain formulas of the form f(x, y) > 0 for the region that we want to describe. The method of conversion from the logical formula to an arithmetic function is based on Rvachev’s R-function theory.

Let’s now construct a geometrically simple shape using the just-described method: a Mitsubishi logo-like lamina, here shown as a reminder of how it looks.

Mitsubishi logo-like lamina

As this sign is obviously made from three rhombi, we define a function polygonToInequality that describes the interior of a single convex polygon. A point is an interior point if it lies on the inner side of all the line segments that are the boundaries of the polygon. We test the property of being inside by forming the scalar product of the normals of the line segments with the vector from a line segment’s end point to the given point.

polygonToInequality
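A minimal sketch of such a function for convex polygons with counterclockwise-oriented vertices (a simplified stand-in for the post's polygonToInequality):

(* the point {x, y} must lie to the left of every (cyclic) edge of the polygon *)
convexPolygonInequality[vs_List, {x_, y_}] := And @@ (((#2[[1]] - #1[[1]]) (y - #1[[2]]) - (#2[[2]] - #1[[2]]) (x - #1[[1]]) > 0 &) @@@ Partition[vs, 2, 1, 1])
convexPolygonInequality[{{0, 0}, {1, 0}, {0, 1}}, {x, y}] (* a triangle: y > 0 && 1 - x - y > 0 && x > 0 *)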

It is simple to write down the vertices of the three rhombi, and thus a logical formula for the whole logo.

threeRhombi

RegionPlot

The last equation can be considerably simplified.

Simplify[threeRhombi]

The translation of the logical formula into a single inequality is quite simple: we first write all inequalities with a right-hand side of zero and then translate the Or function to the Max function and the And function to the Min function. This is the central point of the Rvachev R-function theory. By using more complicated translations, we could build functions of a higher degree of smoothness, but for our purposes Min and Max are sufficient. We consider the points where the left-hand side of the resulting inequality is greater than zero to be part of the lamina; all other points are outside. In addition to just looking nicer and more compact, the single expression, as compared to the logical formula, evaluates to a real number everywhere. This means that, in addition to a yes/no membership test for a point {x,y}, we have actual function values f(x, y) available. This is an advantage, as it allows for plotting f(x, y) over an extended region. It also allows for more efficient plotting than the logical formula, because function values around f(x, y) = 0 can be interpolated.

toRvachevRForm
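In its simplest form, the translation can be sketched like this (strict inequalities only; a simplified stand-in for the toRvachevRForm used here):

toRForm[expr_] := (expr /. {Greater[a_, b_] :> a - b, Less[a_, b_] :> b - a}) /. {And -> Min, Or -> Max}
toRForm[x > 0 && y > 0 && x + y < 1] (* Min[x, y, 1 - x - y] *)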

So we obtain the following quite simple right-hand side for the inequality that characterizes the Mitsubishi logo.

(threeRhombiiImplicit =     toRvachevRForm[Simplify[threeRhombi]]) // TraditionalForm

And the resulting image looks identical to the one from the logical formula.

RegionPlot

Plotting the left-hand side of the inequality as a bivariate function in 3D shows how the parts where the function is positive emerge from the overall function values.

Show[{Plot3D[Evaluate[N@threeRhombiiImplicit]

Now, this type of construction of a region of the plane through logical formulas of elementary regions can be applied to more regions and to regions of different shapes, not necessarily polygonal ones. In general, if we have n elementary building-block regions, we can construct as many compound regions as there are logical functions in n variables. The function BooleanFunction enumerates all these 2^(2^n) possibilities. The following interactive demonstration allows us to view all 65,536 configurations for the case of four ellipses. We display the logical formula (and some equivalent forms), the 2D regions described by the formulas, the corresponding Rvachev functions, and the 3D plot of the Rvachev R-function. The selected region is colored yellow.

Manipulate
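The enumeration itself is a one-liner; for n = 2 regions there are already 2^(2^2) = 16 combinations:

(* all 16 Boolean functions of two regions a and b *)
Table[BooleanConvert[BooleanFunction[k, 2][a, b]], {k, 0, 15}]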

Cutting out a region from not just four circles, but seven, we can obtain the Twitter bird. Here is the Wolfram|Alpha formula for the Twitter bird. (Worth a tweet?)

twitterBirdInequality

twitterRegionPlot

By drawing the zero curves of all of the bivariate quadratic polynomials that appear in the Twitter bird inequality as arguments of max and min, the disks of various radii that were used in the construction become obvious. The total bird consists of points from seven different disks. Some more disks are needed to restrict the parts used from these seven disks.

Show[{twitterRegionPlot, ContourPlot

Here are two 3D versions of the Twitter bird as 3D plots. As the left-hand side of the Rvachev R-equation evaluates to a number, we use this number as the z value (possibly modified) in the plots.

Plot3D[Evaluate[ArcTan[twitterBirdInequality

We can also use the closed-form equation of the Twitter bird to mint a Twitter coin.

RegionPlot3D

The boundary of the laminae described by Rvachev R-functions has the form f(x, y) = 0. Generalizing this to f(x, y) = g(z) naturally extrudes the 2D shape into 3D, and by using a function g that increases with |z|, we obtain closed 3D surfaces. Here this is done with g(z) ~ z² for the Twitter bird (we also add some color and a cage to confine the bird). The use of g(z) ~ z² means z ~ ±f(x, y)^(1/2) at the boundaries, and the infinite slope of the square root function gives a smooth bird surface near the z = 0 plane.

SeedRandom["twitter"]; ContourPlot3D
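The extrusion idea in its simplest form: taking the disk lamina f(x, y) = 1 - x^2 - y^2 and solving f(x, y) = z^2 yields a closed surface (here, a sphere):

ContourPlot3D[1 - x^2 - y^2 == z^2, {x, -6/5, 6/5}, {y, -6/5, 6/5}, {z, -6/5, 6/5}]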

Now we will apply the above-outlined construction idea to a slightly more complicated example: we will construct an equation for the United States’ flag. The most complicated-looking part of the construction is a single copy of the star. Using the above function polygonToInequality for the five triangle parts of the pentagram and the central pentagon, we obtain after some simplification the following form for a logical function describing a pentagram.

inPentagram

Here is the pentagram, shown together with the five lines that occur implicitly in the defining expression of the pentagram.

GraphicsRow[{RegionPlot[Evaluate[N@inPentagram

The detailed relative sizes of the stars and stripes are specified in Executive Order 10834 (“Proportions and Sizes of Flags and Position of Stars”) of the United States government. Using the data from this document and assuming a flag of height 1, it is straightforward to encode the non-white parts of the US flag in the following manner. For the parallel horizontal stripes, we use a sin(α y) construction (with appropriately chosen α). The grid of stars in the upper-left corner of the flag is made from two square grids, one shifted against the other (a 2D version of an fcc lattice). The Mod function allows us to easily model the lattice arrays.

inUSFlagInequalities
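Two of these building blocks in isolation (illustrative parameter values, not the executive-order proportions):

stripes = Sin[13 Pi y] > 0; (* 13 alternating stripes on a height-1 flag *)
starGrid = Norm[Mod[{x, y}, 1/5] - 1/10] < 1/25 || Norm[Mod[{x, y} + 1/10, 1/5] - 1/10] < 1/25; (* two shifted square lattices of small disks *)
RegionPlot[stripes, {x, 0, 19/10}, {y, 0, 1}, AspectRatio -> 10/19]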

This gives the following closed-form formula for the US flag. Taking the visual complexity of the flag into account, this is quite a compact description.

inUSFlagImplicit = toRvachevRForm[inUSFlagInequalities[{x, y}]]

American flag formula

Making a plot of this formula gives—by construction—the American flag.

flagUS = RegionPlot

We can apply a nonlinear coordinate transformation to the inequality to let the flag flow in the wind.

SeedRandom[100]; RegionPlot

And using a more quickly varying map, we can construct a visual equivalent of Jimi Hendrix‘s “Star-Spangled Banner” from the Rainbow Bridge album.

SeedRandom[140]; RegionPlot

As laminae describe regions in 2D, we can identify the plane with the complex plane and carry out conformal maps on the complex plane, such as for the square root function or the square.

{WolframAlpha["sqrt(z)", {{"ComplexMap", 1}, "Content"}],   WolframAlpha["z^2", {{"ComplexMap", 1}, "Content"}]}

Here are the four maps that we will apply to the flag.

Column[transformations
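The mapping step itself is tiny: identify {x, y} with x + i y, apply the map, and read off the real and imaginary parts (Sqrt as an example map):

conformalMap[f_][{x_, y_}] := ReIm[f[x + I y]]
conformalMap[Sqrt][{1., 1.}] (* {1.09868, 0.455090} *)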

And these are the conformally mapped flags.

GraphicsGrid[Partition[Function[{map, mapC},    Show[flagUS /.

The next interactive demonstration applies a general power function z -> (shift + scale z)α to the plane containing the flag. (For some parameter values, the branch cut of the power function can lead to folded over polygons.)

Manipulate[Show[flagUS /.

So far we have used circles and polygons as the elementary building blocks for our lamina. It is straightforward to use more complicated shapes. Let’s model a region of the plane that approximates the logo of the largest US company—the apple from Apple. As this is a more complicated shape, calculating an equation that describes it will need a bit more effort (and code). Here is an image of the shape to be approximated.

appleImage

So, how could we describe a shape like an apple? For instance, one could use osculating circles and splines (see this blog entry). Here we will go another route. Algebraic curves can take a large variety of shapes. Here are some examples:

(This flexibility is reminiscent of the Stone–Weierstrass theorem, which guarantees that any continuous function can be approximated by polynomials.)

Style[GraphicsGrid[Partition[Tooltip[ImageCrop[Rasterize[WolframAlpha

We look for an algebraic curve that will approximate the central apple shape. To do so, we first extract again the points that form the boundary of the apple. (To do this, we reuse the function pointListToLines from the first blog post of this series, mentioned previously.)

pointListToLines

{dx, dy} = ImageDimensions[appleImage]

We assume that the core apple shape is left-right symmetric and select points from the left side (meaning the side that does not contain the bite). The following Manipulate allows us to quickly locate all points on the left side of the apple.

Manipulate appleLikeLogoLines

To find a polynomial p(x, y) = 0 that describes the core apple, we first use polar coordinates (with the origin at the apple's center) and find a Fourier series approximation of the apple's boundary in the form r(φ) = c0 + c1 cos(φ) + … + c8 cos(8φ). The use of only the cosine terms guarantees the left-right symmetry of the resulting apple.

\[CurlyPhi]rData = 1.

fit = Fit[\[CurlyPhi]rData, {1, Cos[\[CurlyPhi]],     Cos[2 \[CurlyPhi]],   Cos[3 \[CurlyPhi]], Cos[4 \[CurlyPhi]],     Cos[5 \[CurlyPhi]], Cos[6 \[CurlyPhi]], Cos[7 \[CurlyPhi]],     Cos[8 \[CurlyPhi]]}, \[CurlyPhi]]

We rationalize the resulting approximation and find the corresponding bivariate polynomial in Cartesian coordinates using GroebnerBasis. After expressing the cos(kφ) terms in terms of just cos(φ) and sin(φ), we use the identity cos(φ)^2+sin(φ)^2 = 1 to eliminate cos(φ) and sin(φ) to obtain a single polynomial equation in x and y.

{X, Y} = Rationalize[fit {Sin[\[CurlyPhi]], Cos[\[CurlyPhi]]}, 10^-3]

gb = GroebnerBasis[
   Append[TrigExpand[{x, y} - {X, Y}],
    Cos[\[CurlyPhi]]^2 + Sin[\[CurlyPhi]]^2 - 1],
   {}, {Cos[\[CurlyPhi]], Sin[\[CurlyPhi]]}] // Factor
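To see the elimination at work on the simplest possible case: for the unit circle x = cos(t), y = sin(t), eliminating cos(t) and sin(t) recovers the implicit equation x^2 + y^2 - 1 = 0.

(* eliminate Cos[t] and Sin[t] from the parametrization of the unit circle *)
GroebnerBasis[{x - Cos[t], y - Sin[t], Cos[t]^2 + Sin[t]^2 - 1},
 {}, {Cos[t], Sin[t]}]
(* -> {-1 + x^2 + y^2} *)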

Because we rounded the coefficients earlier, we can safely ignore the last digits of the integer coefficients in the resulting polynomial and so shorten the result.

Slightly simplified version

Here is the resulting apple as an algebraic curve.

RegionPlot[Evaluate[N[appleCore < 0 /.

Now we need to take a bite out of the right-hand side and add the leaf on top. For both of these shapes, we will just use circles. The following interactive Manipulate allows the positioning and sizing of the circles so that they agree with the Apple logo. The initial values are chosen so that the circles match the original image boundaries. (We see that the imported image is not exactly left-right symmetric.)

Manipulate with circles

So, we finally arrive at the following inequality describing the Apple logo.

(appleImplicit =  toRvachevRForm[
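The helper toRvachevRForm is not spelled out in full here; the idea behind Rvachev R-functions can be sketched in a few lines (a minimal version, assuming each region is encoded as f ≥ 0):

(* R-conjunction and R-disjunction: algebraic stand-ins for && and || on f >= 0 regions *)
rAnd[f_, g_] := f + g - Sqrt[f^2 + g^2]; (* >= 0 exactly where f >= 0 && g >= 0 *)
rOr[f_, g_] := f + g + Sqrt[f^2 + g^2];  (* >= 0 exactly where f >= 0 || g >= 0 *)
(* e.g., a unit disk with a circular bite taken out of its right side *)
RegionPlot[rAnd[1 - x^2 - y^2, (x - 1)^2 + y^2 - 1/2] >= 0,
 {x, -1.2, 1.2}, {y, -1.2, 1.2}]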

Let’s move on from corporate logos to a holiday shape: Wolfram|Alpha can supply an equation for an Easter bunny lamina directly.

bunnyEquation = WolframAlpha

Easter bunny lamina ComputableData

bunnyRegionPlot

We can easily extract the polygons from this 2D graphic and construct a twisted bunny in 3D.

Module

Twisted bunny in 3D

The Rvachev R-form allows us to immediately make a 3D Easter bunny from milk chocolate and strawberry-flavored chocolate. Applying the logarithm to the defining function hides the parts where that function is negative: there the logarithm is complex-valued, and such points are simply omitted from the 3D plot.

Easter bunny made from milk chocolate and strawberry-flavored chocolate
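The logarithm trick in miniature: Plot3D silently drops points where the function value is not real, so Log[f] is rendered only where f > 0.

(* plotted only inside the unit disk, where 1 - x^2 - y^2 > 0 *)
Plot3D[Log[1 - x^2 - y^2], {x, -1.2, 1.2}, {y, -1.2, 1.2}]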

We can also make the Easter bunny age within seconds, meaning its skin gets more and more wrinkled as it ages. We carry out this aging process by taking the polygons that form the lamina and letting them undergo a Brownian motion in the plane.

Aging Easter bunny
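A sketch of the aging step (assuming a Graphics object such as bunnyRegionPlot from above, with real-number coordinate pairs): every vertex receives an independent Gaussian displacement, i.e. one Brownian step of size σ.

(* jiggle every coordinate pair of a Graphics object by a Gaussian step of size \[Sigma] *)
age[g_Graphics, \[Sigma]_] := g /. p : {_Real, _Real} :>
    (p + RandomVariate[NormalDistribution[0, \[Sigma]], 2]);
GraphicsRow[Table[age[bunnyRegionPlot, \[Sigma]], {\[Sigma], {0.002, 0.01, 0.03}}]]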

Let’s now play with some car logo-like laminae. We take a Yamaha-like shape; here are the corresponding region and 3D plot.

yamahaEquation = WolframAlpha

GraphicsRow[{yamahaRegionPlot = RegionPlot

We could, for instance, take the Yamaha lamina and place 3D cones in it.

makeCones

The same approach works for a Volkswagen-like logo.

volkswagenEquation = WolframAlpha

volkswagenRegionPlot = RegionPlot

Module

By forming a weighted mixture between the Yamaha equation and the Volkswagen equation, we can form the shapes of Yamawagen and Volksmaha.

yamawagen
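The mixing idea, sketched with two toy shapes in place of the logo equations (a disk and a rounded square; the actual defining functions are the left-hand sides of the equations above): linearly interpolate the two defining functions and plot the zero sublevel set.

(* interpolate between the defining functions of a disk and a rounded square *)
diskLHS = x^2 + y^2 - 1;
squareLHS = x^4 + y^4 - 1;
Manipulate[
 RegionPlot[(1 - \[Lambda]) diskLHS + \[Lambda] squareLHS < 0,
  {x, -1.3, 1.3}, {y, -1.3, 1.3}], {\[Lambda], 0, 1}]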

Next we want to construct another, more complicated, 3D object from a 2D lamina. We take the Superman insignia.

Superman lamina

The function superman[{x, y}] returns True if a point in the x, y plane is inside the insignia.

superman[{x_, y_}] = WolframAlpha

RegionPlot superman

And here is the object we could call the Superman solid. (Or, if made from the conjectured new supersolid state of matter, a super(man)solid.) It is straightforwardly defined through the function superman. The Superman solid engraves the shape of the Superman logo in the x, y plane as well as in the x, z plane into the resulting solid.

supermanSolid[{x_, y_, z_}] := superman[{x, y}] && superman[{x, z}]

supermanSolidGraphics3D

Viewed from the front as well as from the side, the projection is the Superman insignia.

Superman GraphicsRow

Superman 3
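The construction generalizes to any lamina: intersect two perpendicular generalized cylinders over the shape. A toy version, with a notched disk standing in for the insignia:

(* a toy lamina and its "solid": the shape in the x,y plane intersected with the shape in the x,z plane *)
shape[{u_, v_}] := u^2 + v^2 < 1 && ! (Abs[v] < 1/5 && u > 1/4);
RegionPlot3D[shape[{x, y}] && shape[{x, z}],
 {x, -1, 1}, {y, -1, 1}, {z, -1, 1}, PlotPoints -> 40]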

To stay with the topic of Superman, we could take the Bizarro curve and roll it around to form a Bizarro-Superman cake where Superman and Bizarro face each other as cake cross sections.

bizarro[{x_, y_}] = WolframAlpha

bizarroCake =   Module

We can then refine this cake by adding some kryptonite crystals, here realized as elongated triangular dipyramid polyhedra.

kryptoniteCrystal = Map

Show[{bizarroCake, kryptoniteCrystals}]

Next, let’s use a Batman insignia-shaped lamina and make a quantum Batman out of it.

Batman lamina

We will solve the time-dependent Schrödinger equation for a quantum particle in a 2D box with the Batman insignia as the initial condition. More concretely, assume the initial wave function is 1 within the Batman insignia and 0 outside. So, the first step is the calculation of the 2D Fourier coefficients.
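For the unit box with infinitely high walls, and in units where ħ = m = 1 (an assumption for this sketch; the scaling in the actual code may differ), the box eigenfunctions are sin(kπx) sin(mπy), and the solution of the time-dependent Schrödinger equation is ψ(x, y, t) = Σ_{k,m} c_{k,m} sin(kπx) sin(mπy) exp(−i (k² + m²) π² t/2), with coefficients c_{k,m} = 4 ∫₀¹∫₀¹ ψ(x, y, 0) sin(kπx) sin(mπy) dx dy. These double integrals are exactly the Fourier coefficients we now compute.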

batman[{x_, y_}] = WolframAlpha

Numerically integrating a highly oscillatory function over a domain with sharp boundaries can be challenging. The shape of the Batman insignia suggests that we first integrate with respect to y and then with respect to x. The lamina can be conveniently broken up into the following subdomains.

gcd = GenericCylindricalDecomposition[     batman[28 {x, y} - {14, 7}], {x, y}][[1]];

Show[RegionPlot

integralsWRTy = Monitor

All of the integrals over y can be calculated in closed form. Here is the first of them.

integralsWRTy[[1]]

To calculate the integrals over x, we need to multiply the integralsWRTy by sin(k π x) and then integrate. Because k is the only parameter that changes, we use the (new in Version 9) function ParametricNDSolveValue.

Do[{x1, x2} = {integralsWRTy[[j]][[1, 1, 2, 1]]
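To see the technique in isolation, here is a stand-in example (with g(x) = e^(−x) as a hypothetical integrand in place of the actual y integrals): the integral ∫ g(x) sin(kπx) dx is written as an ODE and solved once, parametrically in k.

(* the integral over x as a parametric ODE: i'[x] == g[x] Sin[k Pi x], i[x1] == 0 *)
pInt = ParametricNDSolveValue[
   {i'[x] == Exp[-x] Sin[k Pi x], i[0] == 0}, i[1], {x, 0, 1}, {k}];
pInt[3] (* numerical value of Integrate[Exp[-x] Sin[3 Pi x], {x, 0, 1}] *)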

We calculate 200^2 Fourier coefficients. This relatively large number is needed to obtain a good solution of the Schrödinger equation. (Due to the discontinuous nature of the initial conditions, a truly accurate solution would need even more modes.)

With[{kmMax = 200},  Monitor
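With the coefficients in hand, the time-evolved wave function is the truncated double sum from above (cs is a hypothetical name for the coefficient array; the elided code may store it differently):

(* truncated eigenfunction expansion; cs[[k, m]] is a hypothetical name for the coefficient array computed above *)
\[Psi][x_, y_, t_, kMax_: 200] :=
  Sum[cs[[k, m]] Sin[k Pi x] Sin[m Pi y] Exp[-I (k^2 + m^2) Pi^2 t/2],
   {k, kMax}, {m, kMax}]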

Using again the function xyArray from above, here is how the Batman logo would look if it were to quantum-mechanically evolve.

quantumBatmanArray = xyArray

ReliefPlot

We will now wind down our brief overview of how to equationalize shapes through laminae. As a final example, we unite the Fourier series approach for curves discussed in the first blog post of this series with the Rvachev R-function approach and build an apple whose bite has the form of the silhouette of Steve Jobs, the Apple founder who suggested the name Mathematica. The last terms of the following inequality result from the Fourier series of Jobs’ facial profile.

appleWithSteveJobsSilhouetteInequality = WolframAlpha

Apple with Steve Jobs bite equation ComputableData

RegionPlot

Download this post as a Computable Document Format (CDF) file.
