Hidden Figures: Modern Approaches to Orbit and Reentry Calculations

Jeffrey Bryant, February 24, 2017

The movie Hidden Figures was released in theaters recently and has been getting good reviews. It also deals with an important time in US history, touching on a number of topics, including civil rights and the Space Race. The movie details the hidden story of Katherine Johnson and her coworkers (Dorothy Vaughan and Mary Jackson) at NASA during the Mercury missions and the United States' early explorations into manned space flight. The movie focuses heavily on the dramatic civil rights struggle of African American women at NASA at the time, and these struggles are set against the number-crunching ability of Johnson and her coworkers. Electronic computers were in their early days at this time, so the ability of Johnson and her team to solve complicated navigational orbital mechanics problems without the use of a computer provided an important sanity check against the early computer results.

Row[{Show[    Entity["Movie", "HiddenFigures::k39bj"][     EntityProperty["Movie", "Image"]], ImageSize -> 101], "  ",    Show[Entity["PopularCurve", "KatherineJohnsonCurve"][     EntityProperty["PopularCurve", "Image"]], Axes -> False,     Background -> LightBlue, ImageSize -> 120]}]

I will touch on two aspects of her scientific work that were mentioned in the film: orbit calculations and reentry calculations. For the orbit calculation, I will first exactly follow what Johnson did and then compare with a more modern, direct approach utilizing an array of tools made available with the Wolfram Language. Where the movie mentions the solving of differential equations using Euler’s method, I will compare this method with more modern ones in an important problem of rocketry: computing a reentry trajectory from the rocket equation and drag terms (derived using atmospheric model data obtained directly from within the Wolfram Language).

The movie doesn’t focus much on the math details of the types of problems Johnson and her team dealt with, but for the purposes of this blog, I hope to provide at least a flavor of the approaches one might have used in Johnson’s day compared to the present.

Placing a Satellite over a Selected Position

One of the earliest papers that Johnson coauthored, “Determination of Azimuth Angle at Burnout for Placing a Satellite over a Selected Earth Position,” deals with the problem of making sure that a satellite can be placed over a specific Earth location after a specified number of orbits, given a certain starting position (e.g. Cape Canaveral, Florida) and orbital trajectory. The approach that Johnson’s team used was to determine the azimuthal angle (the angle formed by the spacecraft’s velocity vector at the time of engine shutoff with a fixed reference direction, say north) to fire the rocket in, based on other orbital parameters. This is an important step in making sure that an astronaut is in the correct location for reentry to Earth.

NASA Technical Note

Constants and Initial Processing

In the paper, Johnson defines a number of constants and input parameters needed to solve the problem at hand. One detail to explain is the term “burnout,” which refers to the shutoff of the rocket engine. After burnout, orbital parameters are essentially “frozen,” and the spacecraft moves solely under the Earth’s gravity (as determined, of course, through Newton’s laws). In this section, I follow the paper’s unit conventions as closely as possible.

v1 = 25761.345 60; (* satellite velocity in ft/min *)
r1 = 21637933.; (* circular orbit radius in feet *)
\[Gamma]1 = 0.5; (* elevation angle between local horizon and velocity vector, in degrees *)
vc = 25506.28 60; (* circular orbit velocity in ft/min *)
poverr1 = 1.020022269; (* p is the semilatus rectum of the orbit ellipse *)
a = 22081775.57; (* semimajor axis of orbit in feet *)
t\[Theta]1 = 5.842; (* t[\[Theta]1] is the time from perigee for \[Theta]1 in min, where \[Theta]1 is the angle in the orbit plane between perigee and burnout *)
g0 = 115991.595; (* acceleration due to gravity in ft/min^2 *)
R = 2.090226 10^7; (* Earth radius in feet *)
(* launch coordinates *)
\[Phi]1 = 28.50; (* launch latitude *)
\[Lambda]1 = 279.45; (* launch longitude *)
\[Phi]2 = 34.00; (* intended pass-over latitude *)
\[Lambda]2 = 241.00; (* intended pass-over longitude *)
n = 3; (* number of orbits *)
\[Omega]Capitale = 0.25068; (* angular velocity of Earth in degrees/min *)

For convenience, some functions are defined to deal with angles in degrees instead of radians. This allows for smoothly handling time in angle calculations:

SinDegree[x_] := Sin[x Degree]
CosDegree[x_] := Cos[x Degree]
TanDegree[x_] := Tan[x Degree]
SecDegree[x_] := Sec[x Degree]

ArcTanDegree[x_, y_] := ArcTan[x, y]/Degree
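The iterative computation below also uses inverse sine and cosine in degrees; those helpers are not shown in the post, but minimal definitions analogous to the ones above would be:

ArcSinDegree[x_] := ArcSin[x]/Degree
ArcCosDegree[x_] := ArcCos[x]/Degree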

Johnson goes on to describe several other derived parameters, though it’s interesting to note that she sometimes adopted values for these rather than using the values returned by her formulas. Her adopted values were often close to the values obtained by the formulas. For simplicity, the values from the formulas are used here.

Semilatus rectum of the orbit ellipse:

p = r1*(v1/vc)^2 CosDegree[\[Gamma]1];

Angle in orbit plane between perigee and burnout point:

\[Theta]1 = ArcTanDegree[TanDegree[\[Gamma]1] ((p/r1)/(p/r1 - 1))];

Orbit eccentricity:

e = (1/CosDegree[\[Theta]1]) (p/r1 - 1);

Orbit period:

T = 2 Pi Sqrt[R/g0] Sqrt[(a/R)^3];
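With the constants defined above, this evaluates to an orbital period of roughly 91.6 minutes, a reasonable value for this low-Earth orbit (a quick numeric check added here):

T
(* ≈ 91.6, in minutes *)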

Eccentric anomaly:

Eanomaly[\[Theta]_] := 2 ArcTan[Tan[\[Theta]/2] Sqrt[(1 - e)/(1 + e)]]

To describe the next parameter, it’s easiest to quote the original paper: “The requirement that a satellite with burnout position φ1, λ1 pass over a selected position φ2, λ2 after the completion of n orbits is equivalent to the requirement that, during the first orbit, the satellite pass over an equivalent position with latitude φ2 the same as that of the selected position but with longitude λ2e displaced eastward from λ2 by an amount sufficient to compensate for the rotation of the Earth during the n complete orbits, that is, by the polar hour angle n ωE T. The longitude of this equivalent position is thus given by the relation”:

\[Lambda]2e = \[Lambda]2 + n \[Omega]Capitale T;

Time from perigee for angle θ:

t[\[Theta]_] :=   T/(2 Pi) (Eanomaly[\[Theta] Degree] -      e Sin[Eanomaly[\[Theta] Degree]])

Computation

Part of the final solution is to determine values for intermediate parameters δλ1-2e and θ2e. This can be done in a couple of ways. First, I can use ContourPlot to obtain a graphical solution via equations 19 and 20 from the paper:

Clear[\[CapitalDelta]\[Lambda]1minus2e, \[Theta]2e]; ContourPlot[  Evaluate[{\[CapitalDelta]\[Lambda]1minus2e  == \[Lambda]2e - \ \[Lambda]1 + \[Omega]Capitale (t[\[Theta]2e ] - t[\[Theta]1]),     CosDegree[\[Theta]2e - \[Theta]1] ==      SinDegree[\[Phi]2] SinDegree[\[Phi]1] +       CosDegree[\[Phi]2] CosDegree[\[Phi]1] CosDegree[\[CapitalDelta]\ \[Lambda]1minus2e ] }],                                               {\[CapitalDelta]\ \[Lambda]1minus2e, 0, 60 }  , {\[Theta]2e, 0, 60 },   Prolog -> {Red, PointSize[Large],     Point[{31.948028324349036`, 51.7127788640602`}]}]

FindRoot can be used to find the solutions numerically:

Clear[\[CapitalDelta]\[Lambda]1minus2e, \[Theta]2e]; FindRoot[Evaluate[{\[CapitalDelta]\[Lambda]1minus2e  == \[Lambda]2e - \ \[Lambda]1 + \[Omega]Capitale (t[\[Theta]2e ] - t[\[Theta]1]),     CosDegree[\[Theta]2e - \[Theta]1] ==      SinDegree[\[Phi]2] SinDegree[\[Phi]1] +       CosDegree[\[Phi]2] CosDegree[\[Phi]1] CosDegree[\[CapitalDelta]\ \[Lambda]1minus2e ] }],                                          { \ {\[CapitalDelta]\[Lambda]1minus2e, 30 }, {\[Theta]2e, 55 }}]

{\[CapitalDelta]\[Lambda]1minus2e -> 32.1445, \[Theta]2e -> 51.8351}

Of course, Johnson didn’t have access to ContourPlot or FindRoot, so her paper describes an iterative technique. I translated the technique described in the paper into the Wolfram Language, and also solved for a number of other parameters via her iterative method. Because the base computations are for a spherical Earth, corrections for oblateness are included in her method:

\[CapitalDelta]\[Omega] = 0; \[CapitalDelta]\[Phi]2 = 0; \[CapitalDelta]\[CapitalOmega] = 0; \[CapitalDelta]\[Lambda]2 = 0; iO = 2;  Table[   \[Lambda]2e = \[Lambda]2 - \[CapitalDelta]\[Lambda]2 +      n \[Omega]Capitale T;   \[CapitalDelta]\[Lambda]1minus2e = \[Lambda]2e - \[Lambda]1 + \ \[Omega]Capitale*      If[iter == 1,        T/360 (\[Lambda]2e - \[Lambda]1) +         t[\[Theta]1], (tOf\[Theta]2e - t[\[Theta]1])];   \[Theta]2e =     ArcCosDegree[      SinDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2] SinDegree[\[Phi]1] +       CosDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2] CosDegree[\[Phi]1] \ CosDegree[\[CapitalDelta]\[Lambda]1minus2e]] + \[Theta]1;    \[Psi]1 =     ArcSinDegree[(SinDegree[\[CapitalDelta]\[Lambda]1minus2e] \ CosDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2])/      SinDegree[\[Theta]2e - \[Theta]1]];   i = ArcCosDegree[     Piecewise[{{CosDegree[\[Phi]1] SinDegree[\[Psi]1],         0 < \[Psi]1 < 180}, {-CosDegree[\[Phi]1] SinDegree[\[Psi]1],         180 <= \[Psi]1 < 360}}]];   \[Omega] = ArcSinDegree[SinDegree[\[Phi]1]/SinDegree[i]] - \[Theta]1;   (* after the first iteration,         correct \[Lambda]2 and \[Phi]2 for oblateness and     then keep this correction| page 18, 19 *)   If[iter == iO, \[CapitalDelta]\[Omega] =       3.4722*^-3 (R/p)^2 (R/a)^(3/2) (5 CosDegree[i]^2 - 1) (n T +         t[\[Theta]2e] - t[\[Theta]1])];   If[iter ==      iO, \[CapitalDelta]\[Phi]2 = \[CapitalDelta]\[Omega] SinDegree[       i] CosDegree[\[Omega] + \[Theta]2e]/CosDegree[\[Phi]2]];    If[iter ==      iO, \[CapitalDelta]\[CapitalOmega] = -6.9444*^-3 (R/p)^2 (R/a)^(3/         2) CosDegree[i] (n T + t[\[Theta]2e] - t[\[Theta]1])];   If[iter ==      iO, \[CapitalDelta]\[Lambda]2 = \[CapitalDelta]\[Omega] CosDegree[        i] SecDegree[\[Omega] + \[Theta]2e]^2/        (1 +           CosDegree[             i]^2 TanDegree[\[Omega] + \[Theta]2e]^2) + \ \[CapitalDelta]\[CapitalOmega]];   \[CapitalDelta]\[Lambda]N1 =     ArcTanDegree[SinDegree[\[Phi]1] TanDegree[\[Psi]1]];   \[Lambda]Nref = \[Lambda]1 - \[CapitalDelta]\[Lambda]N1;   tOf\[Theta]2e = t[\[Theta]2e];   {\[Theta]2e, tOf\[Theta]2e}, {iter, 1, 8, 1}];

Graphing the value of θ2e for the various iterations shows a quick convergence:

ListPlot[%[[All, 2]], PlotRange -> {10, 15}, Filling -> Axis,                    AxesLabel -> {"iteration", "\[Theta]2e"},   ImageSize -> 360]

ksol = {   "\[CapitalDelta]\[Lambda]1minus2e" -> \ \[CapitalDelta]\[Lambda]1minus2e,    "\[CapitalDelta]\[Lambda]2" -> \[CapitalDelta]\[Lambda]2,    "\[Theta]2e" -> \[Theta]2e , "\[Psi]1" -> \[Psi]1,    "\[Omega]" -> \[Omega],    "\[CapitalDelta]\[Omega]" -> \[CapitalDelta]\[Omega],    "\[CapitalDelta]\[CapitalOmega]" -> \[CapitalDelta]\[CapitalOmega],    "i" -> N@i, "\[CapitalDelta]\[Phi]2" -> \[CapitalDelta]\[Phi]2 }

{"\[CapitalDelta]\[Lambda]1minus2e" -> 31.0713,   "\[CapitalDelta]\[Lambda]2" -> 1.01789, "\[Theta]2e" -> 50.9419,   "\[Psi]1" -> 70.556, "\[Omega]" -> 34.559,   "\[CapitalDelta]\[Omega]" -> 1.96276,   "\[CapitalDelta]\[CapitalOmega]" -> -1.33792, "i" -> 34.035,   "\[CapitalDelta]\[Phi]2" -> 0.0841712}

I can convert the method into a FindRoot command as follows (this takes the oblateness effects into account in a fully self-consistent manner and calculates values for all nine variables involved in the equations):

fr := FindRoot[Evaluate[    {\[CapitalDelta]\[Lambda]1minus2e  == (\[Lambda]2 - \ \[CapitalDelta]\[Lambda]2 +          n \[Omega]Capitale T) - \[Lambda]1 + \[Omega]Capitale*(t[\ \[Theta]2e] - t[\[Theta]1]),      CosDegree[\[Theta]2e - \[Theta]1] ==       SinDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2] SinDegree[\[Phi]1] +        CosDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2] CosDegree[\[Phi]1] \ CosDegree[\[CapitalDelta]\[Lambda]1minus2e ],     SinDegree[\[Psi]1] == \ (SinDegree[\[CapitalDelta]\[Lambda]1minus2e] CosDegree[\[Phi]2 - \ \[CapitalDelta]\[Phi]2])/SinDegree[\[Theta]2e - \[Theta]1],     CosDegree[i] == CosDegree[\[Phi]1] SinDegree[\[Psi]1],     SinDegree[\[Omega] + \[Theta]1] == SinDegree[\[Phi]1]/SinDegree[i],     \[CapitalDelta]\[Omega] ==        3.4722*^-3 (R/p)^2 (R/a)^(3/2) (5 CosDegree[i]^2 - 1) (n T +          t[\[Theta]2e] - t[\[Theta]1]),     \[CapitalDelta]\[Phi]2 == \[CapitalDelta]\[Omega] SinDegree[        i] CosDegree[\[Omega] + \[Theta]2e]/        CosDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2],     \[CapitalDelta]\[CapitalOmega] == -6.9444*^-3 (R/p)^2 (R/a)^(3/          2) CosDegree[i] (n T + t[\[Theta]2e] - t[\[Theta]1]),     \[CapitalDelta]\[Lambda]2 == \[CapitalDelta]\[Omega] CosDegree[         i] SecDegree[\[Omega] + \[Theta]2e]^2/(1 +            CosDegree[              i]^2 TanDegree[\[Omega] + \[Theta]2e]^2) + \ \[CapitalDelta]\[CapitalOmega]}],   {{\[CapitalDelta]\[Lambda]1minus2e, 31}, {\[CapitalDelta]\[Lambda]2,      1}, {\[Theta]2e, 51}, {\[Psi]1, 70}, {\[Omega],      34}, {\[CapitalDelta]\[Omega],      2}, {\[CapitalDelta]\[CapitalOmega], -1.3}, {i,      34}, {\[CapitalDelta]\[Phi]2, 0.1}}]

Clear[\[CapitalDelta]\[Lambda]1minus2e, \[CapitalDelta]\[Lambda]2, \ \[Theta]2e, \[Psi]1, \[Omega], \[CapitalDelta]\[Omega], \ \[CapitalDelta]\[CapitalOmega], i, \[CapitalDelta]\[Phi]2]; wlsol = fr

{\[CapitalDelta]\[Lambda]1minus2e ->    31.0769, \[CapitalDelta]\[Lambda]2 -> 1.01251, \[Theta]2e ->    50.9455, \[Psi]1 -> 70.5971, \[Omega] ->    34.6109, \[CapitalDelta]\[Omega] ->    1.96531, \[CapitalDelta]\[CapitalOmega] -> -1.33781,   i -> 34.0135, \[CapitalDelta]\[Phi]2 -> 0.102616}

Interestingly, even the iterative root-finding steps of this more complicated system converge quite quickly:

(OwnValues[fr] /.      HoldPattern[FindRoot[args__]] :>       Reap[FindRoot[args, StepMonitor :> Sow[          {\[CapitalDelta]\[Lambda]1minus2e, \ \[CapitalDelta]\[Lambda]2, \[Theta]2e, \[Psi]1, \[Omega], \ \[CapitalDelta]\[Omega], \[CapitalDelta]\[CapitalOmega],            i, \[CapitalDelta]\[Phi]2}]]])[[1, 2]][[2]]

{{{31.0765, 1.01284, 50.9452, 70.5874, 34.6042, 1.96527, -1.3378,     34.0142, 0.102565}, {31.0769, 1.01251, 50.9455, 70.5971, 34.6109,     1.96531, -1.33781, 34.0135, 0.102616}, {31.0769, 1.01251, 50.9455,     70.5971, 34.6109, 1.96531, -1.33781, 34.0135, 0.102616}, {31.0769,     1.01251, 50.9455, 70.5971, 34.6109, 1.96531, -1.33781, 34.0135,     0.102616}}}

Plotting

With the orbital parameters determined, it is desirable to visualize the solution. First, some critical parameters from the previous solutions need to be extracted:

{\[Theta]2e, \[Omega], i} = {"\[Theta]2e", "\[Omega]", "i"} /. ksol;

Next, the latitude and longitude of the satellite need to be derived as functions of the in-plane angle θs:

\[CapitalDelta]\[Lambda]Ns[\[Theta]s_] :=   ArcTanDegree[CosDegree[\[Omega] + \[Theta]s],    CosDegree[-i] SinDegree[\[Omega] + \[Theta]s]]

\[Lambda]N[\[Theta]s_] := \[Lambda]Nref - \[Omega]Capitale \ (t[\[Theta]s] - t[\[Theta]1]) + \[CapitalDelta]\[Lambda]Ns[\[Theta]s]

φs and λs are the latitudes and longitudes as a function of θs:

\[Phi]s[\[Theta]s_] :=   ArcSinDegree[SinDegree[i] SinDegree[\[Omega] + \[Theta]s]]
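The companion function λs, the longitude as a function of θs and the orbit number n, is used by the next inputs but is not shown in the post. A reconstruction consistent with the westward ground-track shift of ωE T per completed orbit would be:

\[Lambda]s[\[Theta]s_, n_] := Mod[\[Lambda]N[\[Theta]s] - n \[Omega]Capitale T, 360]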

satpts[n_] :=    Table[{\[Phi]s[\[Theta]s], \[Lambda]s[\[Theta]s, n]}, {\[Theta]s, 0,      360, .01}];

The satellite ground track can be constructed by creating a table of points:

points = {Table[{\[Lambda]s[\[Theta]s,        0], \[Phi]s[\[Theta]s]}, {\[Theta]s, \[Theta]1, 180, .1}],       Table[{\[Lambda]s[\[Theta]s,        1], \[Phi]s[\[Theta]s]}, {\[Theta]s, 0, 360, .1}],       Table[{\[Lambda]s[\[Theta]s,        2], \[Phi]s[\[Theta]s]}, {\[Theta]s, 0, 360, .1}],       Table[{\[Lambda]s[\[Theta]s,        3], \[Phi]s[\[Theta]s]}, {\[Theta]s, -180, \[Theta]2e, .1}]};

Johnson’s paper presents a sketch of the orbital solution including markers showing the burnout, selected and equivalent positions. It’s easy to reproduce a similar plain diagram here:

ListPlot[points, PlotStyle -> Gray, PlotRange -> {{0, 360}, {-60, 60}},
 Epilog -> {Black, Text[Style[#1, 32, Bold], #2] & @@@
    {{"\[EmptySmallCircle]", {\[Lambda]1, \[Phi]1}},
     {"\[EmptyDiamond]", {\[Lambda]2, \[Phi]2}},
     {"\[EmptySmallSquare]", {\[Lambda]2e, \[Phi]2}}}},
 Prolog -> {Arrow[{{333, 33}, {342, 31}}], Arrow[{{212, 29}, {215, 30}}],
   Text[Style["Burnout\n\!\(\*SubscriptBox[\(\[Phi]\), \(1\)]\),\!\(\*SubscriptBox[\(\[Lambda]\), \(1\)]\)", 10], {280, 18}],
   Text[Style["Selected\nposition\n\!\(\*SubscriptBox[\(\[Phi]\), \(2\)]\),\!\(\*SubscriptBox[\(\[Lambda]\), \(2\)]\)", 10], {240, 48}],
   Text[Style["Equivalent\nposition\n\!\(\*SubscriptBox[\(\[Phi]\), \(2\)]\),\!\(\*SubscriptBox[\(\[Lambda]\), \(2  e\)]\)", 10], {312, 46}]},
 GridLines -> {Table[\[Lambda], {\[Lambda], 0, 360, 40}], Table[\[Phi], {\[Phi], -60, 60, 20}]},
 Axes -> False, Frame -> True, ImageSize -> 600, AspectRatio -> 0.7,
 FrameLabel -> {"Longitude, \[Lambda], deg", "Latitude, \[Phi], deg"},
 FrameTicks -> {{Table[{\[Phi], \[Phi]}, {\[Phi], -60, 60, 20}], None},
   {Table[{\[Lambda], \[Lambda]}, {\[Lambda], 0, 360, 40}], None}}]

For comparison, here is her original diagram:

Original diagram

Katherine Johnson's Diagram

A more visually useful version can be constructed using GeoGraphics, taking care to convert the geocentric coordinates into geodetic coordinates:

path = {#1,  QuantityMagnitude[GeodesyData["Krassovsky",        {"FromGeocentricLatitude", #2}]]} & @@@ Flatten[points, 1];

GeoGraphics[{Blue, PointSize[0.015],    Point[GeoPosition[{\[Phi]1, \[Lambda]1}]], Red,    Point[GeoPosition[{\[Phi]2, \[Lambda]2}]],   Black, PointSize[0.002], Point[GeoPosition[Reverse /@ path]]},   GeoRange -> "World", GeoZoomLevel -> 3]

How to Calculate Orbits Today

Today, virtually every one of us has, within immediate reach, access to computational resources far more powerful than those available to the entirety of NASA in the 1960s. Now, using only a desktop computer and the Wolfram Language, you can easily find direct numerical solutions to problems of orbital mechanics such as those posed to Katherine Johnson and her team. While perhaps less taxing of our ingenuity than older methods, the results one can get from these explorations are no less interesting or useful.

To solve for the azimuthal angle ψ using more modern methods, let’s set up parameters for a simple circular orbit beginning after burnout over Florida, assuming a spherically symmetric Earth (I’ll not bother trying to match the orbit of the Johnson paper precisely, and I’ll redefine certain quantities from above using the modern SI system of units). Starting from the same low-Earth orbit altitude used by Johnson, and using a little spherical trigonometry, it is straightforward to derive the initial conditions for our orbit:

\[Phi]1 = 28.50 Degree (* latitude of burnout point*); \[Lambda]1 = (279.45 - 360) Degree (* longitude of burnout point *);

(* assuming a circular orbit, and leaving azimuth angle \[Psi] \ undetermined *)  r0 = QuantityMagnitude[Quantity[21637933., "Feet"],     "Meters"] (* radius of the orbit *); v0 = Sqrt[G M/r0] (* initial velocity of the circular orbit *); \[ScriptCapitalR] =    RotationTransform[\[Lambda]1, {0, 0,      1}] (* used to rotate initial conditions *); {x0, y0, z0} = \[ScriptCapitalR][{r0 Cos[\[Phi]1], 0,      r0 Sin[\[Phi]1]}] (* initial x,y,z at burnout point *); {vx0, vy0, vz0} = \[ScriptCapitalR][    v0 {-Cos[\[Phi]1] Sin[\[Gamma]] -        Cos[\[Gamma]] Cos[\[Psi]] Sin[\[Phi]1],      Cos[\[Gamma]] Sin[\[Psi]],       Cos[\[Gamma]] Cos[\[Phi]1] Cos[\[Psi]] -        Sin[\[Gamma]] Sin[\[Phi]1]}] (* initial velocity at the burnout \ point *);

initCond[\[Psi]_, \[Gamma]_] = {x[0] == x0, y[0] == y0,     z[0] == z0,                                                           x'[0] == vx0, y'[0] == vy0,     z'[0] == vz0};

The relevant physical parameters can be obtained directly from within the Wolfram Language:

{M, R, G} = QuantityMagnitude[
   Flatten[{Entity["Planet", "Earth"][{"Mass", "Radius"}],
     Quantity[1, "GravitationalConstant"]}], "SIBase"];
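As a quick added sanity check, the circular-orbit speed v0 = Sqrt[G M/r0] defined above comes out at roughly 7.8 km/s, as expected for a low-Earth orbit at this radius:

Sqrt[G M/r0]
(* ≈ 7774. m/s *)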

Next, I obtain a differential equation for the motion of our spacecraft, given the gravitational field of the Earth. There are several ways you can model the gravitational potential near the Earth. Assuming a spherically symmetric planet and utilizing a Cartesian coordinate system throughout, the potential is merely:

pot[{x_, y_, z_}] := -G M /Sqrt[x^2 + y^2 + z^2];

Alternatively, you can use a more realistic model of Earth’s gravity, where the planet’s shape is taken to be an oblate ellipsoid of revolution. The exact form of the potential from such an ellipsoid (assuming constant mass-density over ellipsoidal shells), though complicated (containing multiple elliptic integrals), is available through EntityValue:

evg = EntityValue[    Entity["PhysicalSystem", "MassiveTriaxialEllipsoid"],     "GravitationalPotential"];

For a general homogeneous triaxial ellipsoid, the potential contains piecewise functions:

(pw = Cases[evg, _ _Piecewise, \[Infinity]][[1]]) // TraditionalForm //   Style[#, 6] &

Traditional form output

Here, κ is the largest root of x^2/(a^2 + κ) + y^2/(b^2 + κ) + z^2/(c^2 + κ) = 1. In the case of an oblate ellipsoid, the previous formula can be simplified to contain only elementary functions…

Limit[pw[[1, -1]],    QuantityVariable["b","Radius"] -> QuantityVariable[    "a","Radius"]] // FullSimplify

Traditional form output

… where κ = (Sqrt[2 z^2 (a^2 - c^2 + x^2 + y^2) + (-a^2 + c^2 + x^2 + y^2)^2 + z^4] - a^2 - c^2 + x^2 + y^2 + z^2)/2.

A simpler form that is widely used in the geographic and space science community, and that I will use here, is given by the so-called International Gravity Formula (IGF). The IGF takes into account differences from a spherically symmetric potential up to second order in spherical harmonics, and gives numerically indistinguishable results from the exact potential referenced previously. In terms of four measured geodetic parameters, the IGF potential can be defined as follows:

{a, b} = GeodesyData["ITRF00", #] & /@ {"SemimajorAxis",      "SemiminorAxis"} ;  gPole = 9.8321849378(* g at Earth's pole in m/s^2*); gEquator = 9.78903267715(* g at Earth's equator in m/s^2*); With[{k = b gPole/(a gEquator) - 1, e = Sqrt[1 - b^2/a^2]},  potIGF[{x_, y_, z_}] :=   With[{(* latitude *) \[Phi] = ArcTan[Sqrt[x^2 + y^2], z]},    -G M/Sqrt[x^2 + y^2 + z^2] (1 + k Sin[\[Phi]]^2)/      Sqrt[1 - e^2 Sin[\[Phi]]^2]]]

I could easily use even better values for the gravitational force through GeogravityModelData. For the starting position, the IGF potential deviates only 0.06% from a high-order approximation:

{potIGF[{x0, y0, z0}],  GeogravityModelData[GeoPosition[GeoPositionXYZ[{x0, y0, z0}]],     "Potential"] // QuantityMagnitude[#, "SIBase"] &}

{-6.04963*10^7, -6.05363*10^7}

With these functional forms for the potential, finding the orbital path amounts to taking a gradient of the potential to get the gravitational field vector and then applying Newton's second law. Doing so, I obtain the orbital equations of motion for the two gravity models:

grad = -Grad[pot[{x[t], y[t], z[t]}], {x[t], y[t], z[t]}]; gradIGF = -Grad[potIGF[{x[t], y[t], z[t]}], {x[t], y[t], z[t]}];

Fma = {{x''[t], y''[t], z''[t]} == grad}; FmaIGF = {{x''[t], y''[t], z''[t]} == gradIGF};

I am now ready to use the power of NDSolve to compute orbital trajectories. Before doing this, however, it will be nice to display the orbital path as a curve in three-dimensional space. To give these curves context, I will plot them over a texture map of the Earth’s surface, projected onto a sphere. Here I construct the desired graphics objects:

(*define orbit burnoutDot and burnoutArrow graphics*) burnoutCoords = {x0, y0, z0}; burnoutDirection = {vx0, vy0, vz0}/v0; burnoutDot = {Red, Sphere[burnoutCoords, R/30]}; burnoutArrow = {Red, Arrow[{burnoutCoords,       burnoutCoords + (R/3) burnoutDirection}]}; burnoutLabels[\[Psi]_, \[Gamma]_] =    Graphics3D[{burnoutDot, burnoutArrow}];

(*define globe texture and markings*) earthTexture = Lighter[#, 0.75] &@ ImageReflect[PlanetData["Earth",       "CylindricalEquidistantTexture"], Bottom]; globe = ParametricPlot3D[   R {-Cos[p] Sin[t], -Sin[p] Sin[t], Cos[t]}, {p, 0, 2 Pi}, {t, 0,     Pi}, Mesh -> None, PlotStyle -> Texture[earthTexture],    TextureCoordinateFunction -> Automatic]

pole = Graphics3D[{Thick, Red, Line[{{0, 0, -1.3 R}, {0, 0, 1.3 R}}]}]; equator =    ParametricPlot3D[R {Cos[t], Sin[t], 0}, {t, 0, 2 Pi},     PlotStyle -> Yellow]; globeMarkings = {pole, equator};

While the orbital path computed in an inertial frame forms a closed, periodic curve, the rotation of the Earth causes the spacecraft to pass over different points on the Earth's surface during each subsequent revolution. I can visualize this effect by adding an additional rotation term to the solutions I obtain from NDSolve. Taking the number of orbital periods to be three (similar to John Glenn's flight) for visualization purposes, I construct the following Manipulate to see how the orbital path is affected by the azimuthal launch angle ψ, similar to the study in Johnson's paper. I'll plot both a path assuming a spherical Earth (in white) and another path using the IGF (in green) to get a sense of the size of the oblateness effect (note that the divergence of the two paths increases with each orbit):

With[{\[Omega] = \[Omega]Capitale/(360/(2 Pi ) 60), T = 3 2 Pi r0/v0,             \[Rho] = Sqrt[x[t]^2 + y[t]^2], \[Phi] =     ArcTan[x[t], y[t]],   \[Gamma] = 0.5 Degree},  Manipulate[   Block[{\[Psi] = \[Psi]s},     sol = NDSolve[      Join[Fma, initCond[\[Psi]s, \[Gamma]]], {x, y, z}, {t,        0, \[Tau] T}];    solIGF =      NDSolve[Join[FmaIGF, initCond[\[Psi]s, \[Gamma]]], {x, y, z}, {t,        0, \[Tau] T}];    Show[ParametricPlot3D[       Evaluate[{\[Rho] Cos[\[Phi] - \[Omega] t], \[Rho] Sin[\[Phi] - \ \[Omega] t],          z[t]} /. {sol[[1]], solIGF[[1]]}], {t, 0, \[Tau] T},        PlotStyle -> {White, Darker[Green]}] /.       l_Line :> Tube[l, 35000],      globe, globeMarkings, burnoutLabels[\[Psi]s, \[Gamma]],     Frame -> None, Ticks -> None,      PlotRange -> {All, All, {-1.2 R, 1.2 R}},      RotationAction -> "Clip", ViewPoint -> {0, -2, 0.4}]],   {{\[Psi]s, 70 Degree, "start angle"}, -Pi, Pi},   {{\[Tau], 1.025, "flight time"}, 0.001, 3},    SaveDefinitions -> True]]

globe manipulate

In the notebook attached to this blog, you can see this Manipulate in action, and note the speed at which each new solution is obtained. You would hope that Katherine Johnson and her colleagues at NASA would be impressed!

Now, varying the angle ψ at burnout time, it is straightforward to calculate the position of the spacecraft after, say, three revolutions:

posis = Block[{\[Gamma] =       0.5 Degree, \[Omega] = \[Omega]Capitale/(360/(2 Pi ) 60),      T = 3.05 2 Pi r0/v0},    Table[Map[(Most[#] - {0, \[Omega] T 360/(2 Pi)}) &,       GeoPosition[GeoPositionXYZ[{x[T], y[T], z[T]} /.         NDSolve[           Join[FmaIGF, initCond[\[Psi], \[Gamma]]], {x, y, z}, {t, 0,             T}][[1]]]]],     {\[Psi], 50 Degree, 90 Degree, 1 Degree}]];

GeoGraphics[{Gray, GeoPath[{"Parallel", 28, {-135, -120}}],   Yellow, Point[posis]}, GeoRange -> Quantity[450, "Miles"],  GeoBackground -> "StreetMap", ImageSize -> 400]

Modeling the Reentry of a Satellite

The movie also mentions Euler’s method in connection with the reentry phase. After the initial problem of finding the azimuthal angle has been solved, as done in the previous sections, it’s time to come back to Earth. Rockets are fired to slow down the orbiting body, and a complex set of events happens as the craft transitions from the vacuum of space to an atmospheric environment. Changing atmospheric density, rapid deceleration and frictional heating all become important factors that must be taken into account in order to safely return the astronaut to Earth. Height, speed and acceleration as a function of time are all problems that need to be solved. This set of problems can be solved with Euler’s method, as done by Katherine Johnson, or by using the differential equation-solving functionality in the Wolfram Language.

For simple differential equations, one can get a detailed step-by-step solution with a specified quadrature method. An equivalent of Newton’s famous F = m a for a time-dependent mass m(t) is the so-called ideal rocket equation (in one dimension)…

DSolve[{(m0 - \[Beta] t) v'[t] == Subscript[\[ScriptV], ex] \[Beta],    v[0] == vi}, v[t], t]

… where m0 − βt is the time-dependent rocket mass, vex the engine exhaust velocity and β the constant rate at which propellant mass is expelled (the time derivative of the propellant mass). With β constant, the structure of the equation is relatively simple and easily solvable in closed form:

{{v[t] ->     vi + Log[m0] Subscript[\[ScriptV], ex] -      Log[m0 - t \[Beta]] Subscript[\[ScriptV], ex]}}

With initial and final conditions for the mass, I get the celebrated rocket equation (Tsiolkovsky 1903):

FormulaLookup["rocket equation"]

{"RocketEquation"}

FormulaData["RocketEquation"]

Subscript[v, f] == Log[Subscript[m, i]/Subscript[m, f]] Subscript[v, e] + Subscript[v, i]
(* the rocket equation in the quantity variables: final speed vf, initial speed vi, exhaust speed ve, initial mass mi, final mass mf *)
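As a quick consistency check (a sketch added here, with the DSolve result written in the plain symbols vi, vex, m0, \[Beta]), evaluating the solution above at the burn time for which m0 - \[Beta] t equals the final mass mf reproduces exactly this Tsiolkovsky form:

vsol[t_] := vi + Log[m0] vex - Log[m0 - \[Beta] t] vex; (* the DSolve result from above *)
Simplify[vsol[(m0 - mf)/\[Beta]] == vi + vex Log[m0/mf],
 Assumptions -> m0 > mf > 0 && \[Beta] > 0]
(* True *)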

FormulaData["RocketEquation", "QuantityVariableTable"]

FormulaData@FormulaLookup["rocket equation"][[1]]

The details of solving this equation with concrete parameter values, e.g. with the classical Euler method, can be obtained from Wolfram|Alpha. Here are those details, together with a detailed comparison with the exact solution as well as with other numerical integration methods:

WolframAlpha["use Euler method (2-t)v'(t)=4, v(0)=0, from t=0 to \ 0.95"]
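For readers who want to see the iteration itself rather than the Wolfram|Alpha step-by-step output, here is a minimal hand-rolled Euler scheme for the same toy problem (a sketch added for illustration), compared with the exact solution v(t) = 4 Log[2/(2 - t)]:

eulerSteps[h_] := Module[{t = 0., v = 0., res = {{0., 0.}}},
  While[t < 0.95 - h/2,
   v = v + h 4/(2 - t); (* Euler update for v'(t) = 4/(2 - t) *)
   t = t + h;
   AppendTo[res, {t, v}]];
  res]

ListLinePlot[{eulerSteps[0.05], Table[{t, 4 Log[2/(2 - t)]}, {t, 0, 0.95, 0.01}]},
 PlotLegends -> {"Euler, h = 0.05", "exact"}, AxesLabel -> {"t", "v(t)"}]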

Following the movie plot, I will now implement a minimalistic ODE model of the reentry process. I start by defining parameters that mimic Glenn’s flight:

(* Glenn's flight *)
mCapsule = QuantityMagnitude[
   Entity["MannedSpaceMission", "MercuryAtlas6"][
    EntityProperty["MannedSpaceMission", "Mass"]], "SIBase"]; (* mass of Mercury Atlas 6 *)
\[Phi]0 = 34.00 Degree; \[Lambda]0 = 241.00 Degree; (* initial latitude and longitude *)
h = QuantityMagnitude[
   Entity["Satellite", "00240"][
    EntityProperty["Satellite", "AverageElevationWGS84"]], "Meters"];

X0 = (R + h) {Cos[\[Lambda]0] Cos[\[Phi]0],      Sin[\[Lambda]0] Cos[\[Phi]0],      Sin[\[Phi]0]}; (* position at start of reentry *) V0 = Sqrt[G M/(R + h)] {-Sin[\[Lambda]0], Cos[\[Lambda]0],      0};(* velocity at start of reentry *)

I assume that the braking process uses 1% of the thrust of the stage-one engine and runs, say, for 60 seconds. The equation of motion is:

mCapsule x''(t) == Fgrav(x(t)) + Fexhaust(t) + Ffriction(x(t), x'(t))

Here, Fgrav is the gravitational force, Fexhaust(t) the explicitly time-dependent engine force and Ffriction(x(t),v(t)) the friction force. The latter depends via the air density explicitly on the position x(t) and via the friction law on v(t).

For the height-dependent air density, I can conveniently use the StandardAtmosphereData function. I also account for a height-dependent area because of the parachute that opened about 8.5 km above ground:

Cd = 1; (* drag coefficient *)
airDensity[X : {_Real, __}] :=
 QuantityMagnitude[
  StandardAtmosphereData[Quantity[Norm[X] - R, "Meters"], "Density"], "SIBase"]
airFriction[X_List, V_List] := -1/2 area[Norm[X] - R] V.V Cd airDensity[X] V/Sqrt[V.V]
brake[V_List, t_, {\[Alpha]_, T_}] := -If[t < T, \[Alpha] Normalize[V], 0]

This gives the following set of coupled nonlinear differential equations to be solved. The last WhenEvent[...] specifies to end the integration when the capsule reaches the surface of the Earth. I use vector-valued position and velocity variables X and V:

area[height_] := If[height > 8500, 10, (* parachute *) 2000];

With these definitions for the weight, exhaust and air friction force terms…

Fgrav[X_, V_, t_] := -G mCapsule M X/Sqrt[X.X]^3
Fexhaust[X_, V_, t_] := brake[V, t, {3000, 60}]
Fairfriction[X_, V_, t_] := airFriction[X, V]

… total force can be found via:

Ftotal[X_, V_, t_] :=
 Fgrav[X, V, t] + Fexhaust[X, V, t] + Fairfriction[X, V, t]

odeSystem = { X'[t] == V[t],                          mCapsule V'[t] == Ftotal[X[t] , V[t], t],                              WhenEvent[Norm[X[t]] - R == 0, "StopIntegration"]};

In this simple model, I neglected the Earth’s rotation, intrinsic rotations of the capsule, active flight angle changes, supersonic effects on the friction force and more. The explicit form of the differential equations in coordinate components is the following. The equations that Katherine Johnson solved would have been quite similar to these:

(odeSystem /. {X -> ({x[#], y[#], z[#]} &),       V -> ({vx[#], vy[#], vz[#]} &)} /.                              w_WhenEvent :> Evaluate //@ w) //   Column[#, Dividers -> Center] &

Traditional output

Supplemented by the initial position and velocity, it is straightforward to solve this system of equations numerically. Today, this is just a simple call to NDSolve. I don’t have to worry about the method to use, step size control, error control and more because the Wolfram Language automatically chooses values that guarantee meaningful results:

tMax = 5000; inits = {X[0] == X0, V[0] == V0};
nds = NDSolve[Join[odeSystem, inits], {X, V}, {t, 0, tMax}]

Output

Here is a plot of the height, speed and acceleration as a function of time:

plotXVAT[nds_] :=   With[{T = nds[[1, 1, 2, 1, 1, 2]]/60,     opts = Sequence[ImageSize -> 220, PlotRange -> All]},   GraphicsRow[{     Plot[(Norm[X[60 t]] - R)/1000 /. nds[[1]], {t, 0, T}, opts,       AxesLabel -> { "time (min)", "height (km)"}],     Plot[Norm[V[60 t]]/1000 /. nds[[1]], {t, 0, T}, opts,       AxesLabel -> { "time (min)", "speed (km/s)"}],      Plot[      Norm[Ftotal[X[60 t], V[60 t], 60 t]]/(9.81 mCapsule) /.        nds[[1]], {t, 0, T}, opts,       AxesLabel -> { "time (min)", "acceleration (g)"}]},    Spacings -> 0]]

plotXVAT[nds]

Plotting as a function of height instead of time shows that the exponential increase of air density is responsible for the high deceleration. This is not due to the parachute, which opens at a relatively low altitude. The peak deceleration happens at a very high altitude, where the capsule goes from a vacuum to an atmospheric environment very quickly:

With[{T = nds[[1, 1, 2, 1, 1, 2]]/60,    opts = Sequence[ImageSize -> 220, PlotRange -> All]},   ParametricPlot[   Evaluate[{(Norm[X[60 t]] - R)/1000,       Norm[Ftotal[X[60 t], V[60 t], 60 t]]/(9.81 mCapsule)} /.      nds[[1]]], {t, 0, T}, opts,    AxesLabel -> {"height (km)", "acceleration (g)"},    AspectRatio -> 0.6]]
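To see this steep, near-exponential density profile directly, here is a small illustration added for reference, using the same StandardAtmosphereData source as the model above:

LogPlot[QuantityMagnitude[
  StandardAtmosphereData[Quantity[hkm, "Kilometers"], "Density"], "SIBase"],
 {hkm, 0, 80}, AxesLabel -> {"height (km)", "density (kg/m^3)"}]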

And here is a plot of the vertical and tangential speed of the capsule in the reentry process:

With[{T = nds[[1, 1, 2, 1, 1, 2]],    opts = Sequence[ImageSize -> 300, PlotRange -> All]},   {Plot[V[t].Normalize[X[t]]/1000 /. nds[[1]], {t, 0, T}, opts,     AxesLabel -> { "time (s)", "vertical speed (km/s)"}],   Plot[(Norm[V[t] - V[t].Normalize[X[t]]  Normalize[V[t]]])/1000 /.      nds[[1]], {t, 0, T}, opts,     AxesLabel -> { "time (s)", "tangential speed (km/s)"}]}]

Now I repeat the numerical solution with a fixed-step Euler method:

ndsEuler =   NDSolve[Join[odeSystem, inits], {X, V}, {t, 0, tMax},    Method -> {"FixedStep", Method -> "ExplicitEuler"},    StartingStepSize -> 0.05]

Qualitatively, the solution looks the same as the previous one:

plotXVAT[ndsEuler]

For the step size used in the time integration, the accumulated error is on the order of a few percent. Smaller step sizes would reduce the error (see the previous Wolfram|Alpha output):

With[{T = nds[[1, 1, 2, 1, 1, 2]]},  Plot[100 ((Norm[X[t]] - R /. nds[[1]])/       (Norm[X[t]] - R /. ndsEuler[[1]]) - 1), {t, 0, T},                AxesLabel -> { "time (s)", "height error (%)"}]]

Note that the landing time predicted by the Euler method deviates only 0.11% from the previous time. (For comparison, if I were to solve the equation with two modern methods, say "BDF" vs. "Adams", the error would be smaller by a few orders of magnitude.)
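A hedged sketch of that comparison (not run in the original post): solve the same system with the two methods and read off the landing times the same way as above.

ndsBDF = NDSolve[Join[odeSystem, inits], {X, V}, {t, 0, tMax}, Method -> "BDF"];
ndsAdams = NDSolve[Join[odeSystem, inits], {X, V}, {t, 0, tMax}, Method -> "Adams"];
(* landing times in seconds, extracted from the interpolation domains *)
{ndsBDF[[1, 1, 2, 1, 1, 2]], ndsAdams[[1, 1, 2, 1, 1, 2]]}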

Now, the reentry process generates a lot of heat. This is where the heat shield is needed. At which height is the most heat per area q generated? Without a detailed derivation, I can, from purely dimensional grounds, conjecture q̇ ~ ρ v^3:

DimensionalCombinations[{"Speed", "MassDensity"}, "HeatFlux"]

{QuantityVariable["MassDensity","MassDensity"] QuantityVariable[   "Speed","Speed"]^3}

With[{T = nds[[1, 1, 2, 1, 1, 2]]},  ParametricPlot[   Evaluate[{(Norm[X[t]] - R)/1000, airDensity[X[t]] Norm[V[t]]^3} /.      nds[[1]]], {t, 0, T}, PlotRange -> All, Ticks -> {True, False},               AxesLabel -> { "height (km)", "heat generated (a.u.)"},    AspectRatio -> 0.6]]

Many more interesting things could be calculated (Hicks 2009), but just like the movie had to fit everything into two hours and seven minutes, I will now end my blog for the sake of time. I hope I can be pardoned for the statement that, with the Wolfram Language, the sky’s the limit.

To download this post as a Computable Document Format (CDF) file, click here. New to CDF? Get your copy for free with this one-time download.

How Many Animals and Arp-imals Can One Find in a Random 3D Image?

Michael Trott, February 23, 2017

And How Many Animals, Animal Heads, Human Faces, Aliens and Ghosts in Their 2D Projections?

Introduction

In my recent Wolfram Community post, “How many animals can one find in a random image?,” I looked into the pareidolia phenomenon from the viewpoint of pixel clusters in random (2D) black-and-white images. Here are some of the shapes I found, extracted, rotated, smoothed and colored from the connected black pixel clusters of a single 800×800 image of randomly chosen, uncorrelated black-and-white pixels.

arpimals

For an animation of such shapes arising, changing and disappearing in a random gray-level image with slowly time-dependent pixel values, see here. By looking carefully at a selected region of the image, at the slowly changing, appearing and disappearing shapes, one frequently can “see” animals and faces.

The human mind quickly sees faces, animals, animal heads and ghosts in these shapes. Human evolution has optimized our vision system to recognize predators and identify food. Our recognition of an eye (or a pair of eyes) in the above shapes is striking. For the neuropsychological basis of seeing faces in a variety of situations where actual faces are absent, see Martinez-Conde2016.

A natural question: is this feature of our vision specific to 2D silhouette shapes, or does the same thing happen for 3D shapes? So here, I will look at random shapes in 3D images and at the 2D projections of these 3D shapes. Several of the region-related functions added in recent versions of the Wolfram Language make this task possible, straightforward and fun.

I should explain the word Arp-imals from the title. With the term “Arp-imals” I refer to objects in the style of the sculptures by Jean Arp, meaning smooth, round, randomly curved biomorphic forms. Here are some examples.

personOverview[person_] :=   With[{props = {"Entity", EntityProperty["Person", "Image"],       EntityProperty["Person", "BirthDate"],       EntityProperty["Person", "BirthPlace"],       EntityProperty["Person", "DeathDate"]}},   TextGrid[DeleteMissing[Transpose[{props, person[props]}], 1, 2],                      Dividers -> All, Background -> GrayLevel[0.9]]]

artworkOverview[art_] :=   With[{props = {"Entity", EntityProperty["Artwork", "Image"],       EntityProperty["Artwork", "Artist"],       EntityProperty["Artwork", "StartDate"],       EntityProperty["Artwork", "Owner"]}},   TextGrid[    DeleteMissing[     Transpose[{props, Item[#, ItemSize -> 15] & /@ art[props]}], 1, 2],                      Dividers -> All, Background -> GrayLevel[0.9]]]

Forms such as these hide frequently in 3D images made from random black-and-white voxels. Here is a quick preview of shapes we will extract from random images.

Quick Preview of Shapes

We will also encounter what I call Moore-iens, in the sense of the sculptures by the slightly later artist Henry Moore.

personOverview[Entity["Person", "HenryMoore::96psy"]]

artworkOverview /@ {Entity["Artwork",     "LargeInteriorForm::HenryMoore"],    Entity["Artwork", "KnifeEdgeTwoPiece::HenryMoore"],    Entity["Artwork", "OvalWithPointsPrinceton::HenryMoore"]}

With some imagination, one can also see forms of possible aliens in some of the following 2D shapes. (See Domagal-Goldman2016 for a discussion of possible features of alien life forms.)

As in the 2D case, we start with a random image: this time, a 3D image of voxels of values 0 and 1. For reproducibility, we will seed the random number generator. The Arp-imals are so common that virtually any seed produces them. And we start with a relatively small image. Larger images will contain many more Arp-imals.

Shapes from Random 3D Images

SeedRandom[1]; randomImage =   Image3D[Table[RandomChoice[{6, 1} -> {0, 1}], {20}, {20}, {20}]]

Hard to believe at first, but the blueprints of the above-shown 3D shapes are in the last 3D cube. In the following, we will extract them and make them more visible.

As in the 2D case, we again use ImageMesh to extract connected regions of white cells. The regions still look like a random set of connected polyhedra. After smoothing the boundaries, nicer shapes will arise.

Show[imesh = ImageMesh[randomImage, Method -> "MarchingSquares"],   ImageSize -> 400]

Here are the regions, separated into non-touching ones, using the function ConnectedMeshComponents. The function makeShapes3D combines the image creation, the finding of connected voxel regions, and the region separation.

makeShapes3D[{dimz_, dimy_, dimx_}, {black_, white_}] :=  Module[{randomImage, imesh},   randomImage =     Image3D[Table[      RandomChoice[{black, white} -> {0, 1}], {dimx}, {dimy}, {dimz}]];    imesh = ImageMesh[randomImage, Method -> "MarchingSquares"];                Select[ConnectedMeshComponents@imesh, 10 < Volume[#] < 200 &]]

For demonstration purposes, in the next example, we use a relatively low density of white voxels to avoid the buildup of a single large connected region that spans the whole cube.

SeedRandom[333]; shapes = makeShapes3D[{20, 20, 20}, {7, 1}]

Here are the found regions individually colored in their original positions in the 3D image.

Show[HighlightMesh[#, Style[2, RandomColor[]]] & /@ shapes,   Boxed -> True]

To smooth the outer boundaries, thereby making the shapes more animal-, Arp-imal- and alien-like, we use the function smooth3D (defined in the accompanying notebook), a quick-and-dirty implementation of the Loop subdivision algorithm. (As the 3D shapes might have a higher genus, we cannot use BSplineSurface directly, which would have been the direct equivalent of the 2D case.) Here are successive smoothings of the third of the above-extracted regions.
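The definition of smooth3D itself lives in the accompanying notebook and is not reproduced here. As a rough illustrative stand-in only (plain Laplacian smoothing, not the notebook's Loop subdivision), one can already get the idea by repeatedly moving each mesh vertex toward the mean of its neighbors:

(* illustrative substitute for smooth3D: naive Laplacian smoothing of a surface mesh *)
laplacianSmooth3D[reg_, n_Integer : 1] :=
 Module[{pts = MeshCoordinates[reg], polys = MeshCells[reg, 2][[All, 1]], edges, nbrs},
  (* undirected edges of the triangulated boundary *)
  edges = DeleteDuplicates[Sort /@ Flatten[Partition[#, 2, 1, 1] & /@ polys, 1]];
  (* neighbor index lists for every vertex *)
  nbrs = GroupBy[Join[edges, Reverse /@ edges], First -> Last];
  Do[pts = Table[Mean[pts[[nbrs[i]]]], {i, Length[pts]}], {n}];
  GraphicsComplex[pts, Polygon[polys]]]

(* usage, analogous to smooth3D: Graphics3D[{EdgeForm[], laplacianSmooth3D[shapes[[3]], 3]}] *)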

sampleRegion = shapes[[3]]; (* the third of the extracted regions *)
{sampleRegion,
 Graphics3D[{EdgeForm[], sampleRegionSmooth1 = smooth3D[sampleRegion, 1]},
  ImageSize -> {{320}, {320}}]}

{Graphics3D[{EdgeForm[], sampleRegionSmooth2 = smooth3D[sampleRegion, 2]},
  ImageSize -> {{320}, {320}}],
 Graphics3D[{EdgeForm[], sampleRegionSmooth3 = smooth3D[sampleRegion, 3]},
  ImageSize -> {{320}, {320}}]}

Using the region plot theme "SmoothShading" of the function BoundaryMeshRegion, we can add normals to get the feeling of a genuinely smooth boundary.

shapeF = With[{sr = sampleRegionSmooth3},   BoundaryMeshRegion[sr[[1]],     Style[sr[[2, 1]] ,      Directive[GrayLevel[0.4],       Specularity[RGBColor[0.71, 0.65, 0.26], 12]]],     PlotTheme -> "SmoothShading"]]

And for less than $320 one can obtain this Arp-inspired piece in brass. A perfect, unique, stunning post-Valentine’s gift. For hundreds of alternative shapes to print, see below. We use ShellRegion to reduce the price and save some internal material by building a hollow region.

thinreg = ShellRegion[shapeF]; Printout3D[thinreg, "IMaterialise",   RegionSize -> Quantity[10, "Centimeters"]]

Here is the smoothing procedure shown for another of the above regions.

sampleRegion2 = shapes[[2]]; (* another of the extracted regions; the post does not say which one was used *)
{sampleRegion2,
 Graphics3D[{EdgeForm[], sampleRegionSmooth21 = smooth3D[sampleRegion2, 1]},
  ImageSize -> {{360}, {360}}],
 Graphics3D[{EdgeForm[], sampleRegionSmooth22 = smooth3D[sampleRegion2, 2]},
  ImageSize -> {{360}, {360}}]}

And for three more.

With[{sf = Directive[#, Specularity[ColorNegate[#], 10]] &},  Row[{Graphics3D[{EdgeForm[], sf[Red], smooth3D[shapes[[4]], 3]},      ImageSize -> {{360}, {360}},                                 ViewPoint -> {0.08, -3.31, 0.67},      ViewVertical -> {0.00, -0.85, 0.90}],    Graphics3D[{EdgeForm[], sf[Blue], smooth3D[shapes[[8]], 3]},      ImageSize -> {{360}, {360}},                    ViewPoint -> {2.99, 0.66, 1.43},      ViewVertical -> {1.07, 0.90, 0.23}],    Graphics3D[{EdgeForm[], sf[Green], smooth3D[shapes[[13]], 3]},      ImageSize -> {{360}, {360}},                    ViewPoint -> {-2.53, 2.18, 0.49},      ViewVertical -> {-0.93, 0.598, 0.76}]}]]

Many 3D shapes can now be extracted from random and nonrandom 3D images. The next input calculates the region corresponding to lattice points with coprime coordinates.

Graphics3D[{EdgeForm[], Directive[Gray, Specularity[Pink, 12]],             smooth3D[ConnectedMeshComponents[ImageMesh[       Image3D[        Table[Boole@CoprimeQ[x, y, z], {x, -6, 6}, {y, -6, 6}, {z, -6,           6}]],       Method -> "MarchingSquares"]][[1]], 2]},  ViewPoint -> {2, -3, 2}, ViewVertical -> {1, 0, 1}, Boxed -> False]

The Importance of Coarse Rasterization and Smoothing

In the above example, we start with a coarse 3D region, which feels polyhedral due to the obvious triangular boundary faces. It is only after the smoothing procedure that we obtain “interesting-looking” 3D shapes. The details of the applied smoothing procedure do not matter, as long as sharp edges and corners are softened.

Human perception is optimized for smooth shapes, and most plants and animals have smooth boundaries. This is why we don’t see anything interesting in the collection of regions returned from ImageMesh applied to a 3D image. This is quite similar to the 2D case. In the following visualization of the 2D case, we start with a set of randomly selected points. Then we connect these points through a curve. Filling the curve yields a deformed checkerboard-like pattern that does not remind us of a living being. Rasterizing the filled curve in a coarse-grained manner still does not remind us of organic shapes. The connected region, and especially the smoothed region, do remind most humans of living beings.

Smoothed Region

The following Manipulate (available in the notebook) allows us to explore the steps and parameters involved in an interactive session.

smooth2D[reg_, col_, d_] :=   Graphics[{col, (ToExpression[ToString[InputForm@reg], StandardForm,         Hold] /.       HoldPattern[BoundaryMeshRegion[v_, b__, ___Rule]] :>         GraphicsComplex[v,         FilledCurve[{b} /. Line[l_] :>                         BSplineCurve[DeleteDuplicates[Flatten[l, 1]],              SplineClosed -> True, SplineDegree -> d]]])[[1]]}]

Manipulate[  Module[{randomFunction, f1, f2, filledPolygon, ras, im, imesh,     shapes, toShow, map},   Block[{$PerformanceGoal = "Quality"},    randomFunction[m_] :=      Interpolation[      MapIndexed[{(#2[[1]] - 1)/(m + 1), #} &,        Join[#, Take[#, 2]] &@ RandomReal[{0, 1}, {m, 2}]],       InterpolationOrder -> 3];    SeedRandom[seed]; f1 = randomFunction[deg];       f2 = randomFunction[deg];    pp = ParametricPlot[Evaluate[(1 - s) f1[t] + s f2[t]], {t, 0, 1},       PlotStyle -> Directive[Opacity[1], Black], Axes -> False,       PlotRange -> {{-0, 1}, {-0, 1}}] ;    filledPolygon = pp /. Line :> Polygon;    ras = Rasterize[filledPolygon, RasterSize -> {rs, rs},       ImageSize -> {rs, rs}];      im = Image[ras];                   imesh = ImageMesh[ColorNegate[im], Method -> m];     II = imesh;     shapes =      Reverse[SortBy[ConnectedMeshComponents@imesh,        Length[MeshCells[#, 1]] &]];    map[{x_, y_}] := rs {x, y} + {1/2, -1/2};    toShow = {If[sI, Graphics[ras], {}],      If[sP,        Graphics[{Opacity[0.8],          filledPolygon[[1]] /.           Polygon[l_] :> Polygon[Map[map, l, {-2}]]}], {}],      If[sO, Graphics[{Opacity[0.8], Blue, Show[imesh][[1]]}], {}],      If[sR,        Table[smooth2D[shapes[[k]], Directive[Opacity[0.7], rC],          d], {k, Length[shapes]}], {}],      If[sC,        pp /. Line[l_] :> {ColorNegate[rC],           Line[Map[map, l, {-2}]]}, {}],      If[sIP,        Graphics[{ Gray, PointSize[Medium],          Point[map /@ ((1 - s) f1[[4, All, 1]] +              s  f2[[4, All, 1]])]}], {}]};    If[toShow === {{}, {}, {}, {}, {}}, Text["nothing to show" ] ,         Graphics[ Rotate[First /@ Flatten[toShow], \[CurlyPhi]],      PlotRangePadding -> 0, ImagePadding -> 0,       PlotRange -> {{-0.05 rs, 1.05 rs}, {-0.05 rs, 1.05 rs}},       ImageSize -> 400]]]],   {{seed, 595}, 1, 10000, 1},  {{deg, 24, "curve degree"}, 2, 36, 1},  {{s, 0.961, "transition"}, 0, 1},  Delimiter,  {{rs, 24, "raster size"}, 10, 60, 1},  Row[{"show: ",     Control[{{sR, True,        "smoothed region" <> FromCharacterCode[62340]}, {True,        False}}], "|  ",                       Control[{{sO, False, "region" <> FromCharacterCode[62340]}, {True,        False}}], "|  ",                      Control[{{sI, False, "raster" <> FromCharacterCode[62340]}, {True,        False}}], "|\n          ",                         Control[{{sP, False,        "polygon" <> FromCharacterCode[62340]}, {True, False}}],     "|  ",                        Control[{{sC, False, "curve" <> FromCharacterCode[62340]}, {True,        False}}], "|  ",                        Control[{{sIP, False,        "points" <> FromCharacterCode[62340]}, {True, False}}]}],  Delimiter,  {{d, 3, "smoothness"}, 0, 8, 1, SetterBar},   {{m, "DualMarchingSquares", "method"}, {"MarchingSquares",     "DualMarchingSquares", "Exact"}},   {{rC, Darker[Green, 0.6], "region color"}, Red, ImageSize -> Small},  Delimiter,  {{\[CurlyPhi], -2.06, "rotation"}, -Pi, Pi},  Delimiter,  Button["random shape", seed = RandomInteger[{1, 1000}];                                                         deg = RandomInteger[{2, 36}];                                                        s = RandomReal[{0, 1}]],  ControlPlacement -> Left,  TrackedSymbols :> True,   SaveDefinitions -> True]
3D Manipulate

And here is a corresponding 3D example.

SeedRandom[1]; Module[{deg = 3, pp = 16, L = 3, \[Delta], p, pts, sol, p1, cp,    pointsGraphic3D, pointsAndSurface, im2,               imesh, sm, ccs, bmr},   \[Delta] = 2 L/pp;  p[x_, y_, z_] = (x^2 + y^2 + z^2)^(2 deg) +     Sum[c[i, j, k] x^i y^j z^k, {i, 0, deg}, {j, 0, deg}, {k, 0, deg}];   pts = RandomReal[{-1, 1}, {Length@Cases[p[x, y, z], _c, \[Infinity]],      3}];    sol = Solve[(p @@@ pts) == 0, Cases[p[x, y, z], _c, \[Infinity]]];    p1 = p[x, y, z] /. sol[[1]];     cp = ContourPlot3D[    Evaluate[p1], {x, -L, L}, {y, -L, L}, {z, -L, L}, Contours -> {0}];  L = Ceiling[    Max[Abs[Transpose[       Cases[cp, _GraphicsComplex, \[Infinity]][[1, 1]]]]], 0.2];    pointsGraphic3D =    Graphics3D[{Red, Sphere[#, 0.05] & /@ pts}, PlotRange -> L];   pointsAndSurface =    Show[{cp =       ContourPlot3D[Evaluate[p1], {x, -L, L}, {y, -L, L}, {z, -L, L},        Contours -> {0},       ContourStyle -> Gray, Lighting -> "Neutral",        MeshFunctions -> {Norm[{#1, #2, #3}] & }], pointsGraphic3D},     Axes -> False];  im2 = Graphics3D[    Table[If[p1 < 0, {Opacity[0.3], EdgeForm[Blue], Gray, Opacity[0.3],                                                                       \  Cuboid[{x, y, z}/\[Delta] + pp/2, {x, y, z}/\[Delta] + pp/2 +          1]}, {}],                                                  {x, -L, L,       2 L/pp}, {y, L, -L, -2 L/pp}, {z, L, -L, -2 L/pp}],                   Lighting -> "Neutral", Axes -> False];   imesh = ImageMesh[Image3D[Table[Boole[p1 < 0],                                                          {x, -L, L,        2 L/pp}, {y, L, -L, -2 L/pp}, {z, L, -L, -2 L/pp}]],                                                     Method -> "MarchingCubes"];  ccs = Reverse[    SortBy[ConnectedMeshComponents[imesh], Length[MeshCells[#, 2]] &]];    sm = smooth3D[ccs[[1]], 2];   bmr = BoundaryMeshRegion[sm[[1]],     Style[Cases[sm, _Polygon, \[Infinity]],      Directive[Opacity[0.5], Darker[Green]]]];     Column[{Row[{pointsGraphic3D, " \[DoubleLongRightArrow] ",        pointsAndSurface, " \[DoubleLongRightArrow] "}],                     Row[{im2, " \[DoubleLongRightArrow] " ,        Show[{im2, imesh}, Boxed -> True], " \[DoubleLongRightArrow] "}],                      Row[{Show[{im2, bmr}, Boxed -> True],        " \[DoubleLongRightArrow] ",  Show[bmr, Boxed -> True]}]} /.                                                                       \                  gr_Graphics3D :> Show[gr, ImageSize -> 200]]]
3D Example

Shadows of the 3D Shapes

In her reply to my community post, Marina Shchitova showed some examples of faces and animals in shadows of hands and fingers. Some classic examples from the Cassel1896 book are shown here.

Hand shadows

So, what do projections/shadows of the above two 3D shapes look like? (For a good overview of the use of shadows in art at the time and place of the young Arp, see Forgione1999.)

The projections of these 3D shapes are exactly the types of shapes I encountered in the connected smoothed components of 2D images. The function projectTo2D takes a 3D graphics complex and projects it onto thin slices parallel to the three coordinate planes. The result is still a Graphics3D object.

projectTo2D[GraphicsComplex[vs_, r__]] :=   Module[{f = 0.2, \[CurlyEpsilon] = 10^-2, t = Developer`ToPackedArray,    xMin, xMax, yMin, yMax, zMin,     zMax, \[Delta]x, \[Delta]y, \[Delta]z},   {{xMin, xMax}, {yMin, yMax}, {zMin, zMax}} = MinMax /@ Transpose[vs];   {\[Delta]x, \[Delta]y, \[Delta]z} = {xMax - xMin, yMax - yMin,      zMax - zMin};   {EdgeForm[],    {Darker[Red],      GraphicsComplex[      t[{xMin -            f \[Delta]x + \[CurlyEpsilon] (#1 -                xMin)/\[Delta]x, #2, #3} & @@@ vs], r]},     {Darker[Blue],      GraphicsComplex[      t[{#1, yMax +            f \[Delta]y + \[CurlyEpsilon] (#2 - yMin)/\[Delta]y, #3} & @@@         vs], r]},    {Darker[Green, 0.6],      GraphicsComplex[      t[{#1, #2,           zMin - f \[Delta]z + \[CurlyEpsilon] (#3 -                zMin)/\[Delta]z} & @@@ vs], r]}} ]

These are the 2×3 projections of the above two 3D shapes. Most people recognize animal shapes in the projections.
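
Here is a minimal usage sketch, assuming sampleRegionSmooth2 is one of the smoothed GraphicsComplex shapes from above: draw the shape together with its three coordinate-plane projections.

(* sampleRegionSmooth2 is assumed to be a smoothed GraphicsComplex from above *)
Graphics3D[{EdgeForm[], {Darker[Blue], sampleRegionSmooth2},
  projectTo2D[sampleRegionSmooth2]}, Boxed -> False, Lighting -> "Neutral"]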

We get exactly these projections if we just look at the 3D shape from a larger distance with a viewpoint and direction parallel to the coordinate axes.

{Graphics3D[{Darker[Blue], EdgeForm[], sampleRegionSmooth2},    ViewPoint -> {1, -20, 0}],   Graphics3D[{Darker[Green, 0.6], EdgeForm[], sampleRegionSmooth2},    ViewPoint -> {1, 0, 20}, ViewVertical -> {0, 1, 0}],  Graphics3D[{Darker[Red, 0.6], EdgeForm[], sampleRegionSmooth2},    ViewPoint -> {20, 0, 1}]}

For comparison, here are three views of the first object from very far away, effectively showing the projections.

By rotating the 3D shapes, we can generate a large variety of different shapes in the 2D projections. The following Manipulate allows us to explore the space of projections’ shapes interactively. Because we need the actual rotated coordinates, we define a function rotate, rather than using the built-in function Rotate.

rotationMatrix3D[{\[Alpha]1_, \[Alpha]2_, \[Alpha]3_}] :=   Module[{c1, s1, c2, s2, c3, s3},   {c3, s3, c2, s2, c1, s1} =     N@{Cos[\[Alpha]3], Sin[\[Alpha]3], Cos[\[Alpha]2], Sin[\[Alpha]2],       Cos[\[Alpha]1], Sin[\[Alpha]1]};   {{c3, s3, 0}, {-s3, c3, 0}, {0, 0, 1}}.           {{c2, 0, s2}, {0, 1, 0}, {-s2, 0, c2}}.           {{1, 0, 0}, {0, c1, s1}, {0, -s1, c1}}]
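
The definition of rotate itself is not shown above; a plausible minimal sketch, assuming it simply applies rotationMatrix3D to all coordinates of a GraphicsComplex, is the following.

(* assumption: rotate maps the rotation matrix over the coordinates of a GraphicsComplex *)
rotate[GraphicsComplex[vs_, r__], angles : {_, _, _}] :=
 GraphicsComplex[vs . Transpose[rotationMatrix3D[angles]], r]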

Here is an array of 16 projections into the x-z plane for random orientations of the 3D shape.

projectToXZImage[GraphicsComplex[vs_, r__]] :=   Module[{f = 0.2, \[CurlyEpsilon] = 10^-2,     t = Developer`ToPackedArray, yMin, yMax, \[Delta]y },   {yMin, yMax} = MinMax@ Transpose[vs][[2]]; \[Delta]y = yMax - yMin;   ImageCrop@Image[Rasterize[      Graphics3D[{EdgeForm[], Darker[Blue],         GraphicsComplex[         t[{#1, yMax +               f \[Delta]y + \[CurlyEpsilon] (#2 -                   yMin)/\[Delta]y, #3} & @@@ vs], r]},       ViewPoint -> {0, -5, 0}, Boxed -> False]]]]

GraphicsGrid[Partition[Show[#, ImageSize -> 120] & /@    Table[projectToXZImage[      rotate[sampleRegionSmooth2, RandomReal[{-Pi, Pi}, 3]]], 16], 4],  Spacings -> {0, 0}]

The initial 3D image does not have to be completely random. In the next example, we randomly place circles in 3D and color a voxel white if the circle intersects the voxel. As a result, the 3D shapes corresponding to the connected voxel regions have a more network-like shape.

randomCircle[l : {{xmin_, xmax_}, {ymin_, ymax_}, {zmin_, zmax_}}] :=   Module[{mp = RandomReal /@ l, \[Delta] = Mean[Abs[Subtract @@@ l]], dir1, dir2, \[Rho]1, \[Rho]2},   {dir1, dir2} = Orthogonalize[RandomReal[{-1, 1}, {2, 3}]];   {\[Rho]1, \[Rho]2} = RandomReal[\[Delta]/2 {0, 1}, 2];   Circle3D[mp, {\[Rho]1, \[Rho]2}, {dir1, dir2}]]
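
The code that turns these circles into voxel data is not shown above; here is a hedged sketch of one way to do it: sample points along each circle, mark the voxels they fall into and mesh the resulting 3D image. The sampling density, box size and volume threshold are arbitrary choices.

(* a sketch only: Circle3D is the inert container returned by randomCircle *)
circleVoxelShapes[n_Integer, pp_Integer] :=
 Module[{L = 1., circles, pts, data},
  circles = Table[randomCircle[{{-L, L}, {-L, L}, {-L, L}}], {n}];
  pts = Flatten[
    Table[#[[1]] + Cos[t] #[[2, 1]] #[[3, 1]] + Sin[t] #[[2, 2]] #[[3, 2]],
       {t, 0., 2. Pi, 2. Pi/200}] & /@ circles, 1];
  data = Normal@SparseArray[
     Union[Clip[Ceiling[pp (# + L)/(2 L)], {1, pp}] & /@ pts] -> 1, {pp, pp, pp}];
  Select[ConnectedMeshComponents@
    ImageMesh[Image3D[data], Method -> "MarchingCubes"], 5 < Volume[#] &]]

SeedRandom[99]; circleVoxelShapes[20, 24]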

3D Shapes with Bilateral Symmetry

2D projection shapes of 3D animals typically have no symmetry. Even if an animal has a symmetry, the visible shape from a given viewpoint and a given animal posture does not have a symmetry. But most animals have a bilateral symmetry. I will now use random images that have a bilateral symmetry. As a result, many of the resulting shapes will also have a bilateral symmetry. Not all of the shapes, because some regions do not intersect the symmetry plane. Bilateral symmetry is important for the classic Rorschach inkblot test: “The mid-line appears to attract the patient’s attention with a sort of magical power,” noted Rorschach (Schott2013). The function makeSymmetricShapes3D will generate regions with bilateral symmetry.

makeSymmetricShapes3D[{dimz_, dimy_, dimx_}, {black_, white_}] :=    Module[{ii, randomImage, imesh},    ii[x_, y_,      z_] := (ii[x, y, z] =       ii[x, 1 + dimy - y, z] =        RandomChoice[{black, white} -> {0, 1}]);   randomImage =     Image3D[Table[ii[x, y, z], {x, dimx}, {y, dimy}, {z, dimz}]];    imesh =     ImageMesh[randomImage, Method -> "MarchingCubes",      CornerNeighbors -> False];        Select[ConnectedMeshComponents@imesh, 10 < Volume[#] &]]

Here are some examples.

SeedRandom[888]; symmShapes =   Table[makeSymmetricShapes3D[{d, d, d}, {3, 1}], {d, 5, 8}]

And here are smoothed and colored versions of these regions. The viewpoint is selected in such a way as to make the bilateral symmetry most obvious.

displaySmoothedRegion[reg_BoundaryMeshRegion, color_Directive,    opts___] :=   With[{sm = smooth3D[reg, 2]},   Show[BoundaryMeshRegion[sm[[1]], Style[sm[[2, 1]] , color],      PlotTheme -> "SmoothShading"], opts]]
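
The application of displaySmoothedRegion to the regions above is not shown; it might look like the following, mirroring the call used for the function-based shapes later in the post (the color directive and viewpoint are arbitrary choices).

(* color and viewpoint are arbitrary choices *)
symmShapes /. bmr_BoundaryMeshRegion :>
  displaySmoothedRegion[bmr,
   Directive[Blend[{GrayLevel[0.5], Orange}, 0.2], Specularity[White, 10]],
   ViewPoint -> {-3, -0.5, 1.2}]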

To get a better feeling for the connection between the pixel values of the 3D image and the resulting smoothed shape, the next Manipulate allows us to specify each pixel value for a small-sized 3D image. The grids/matrices of checkboxes represent the voxel values of one-half of a 3D image with bilateral symmetry.

Manipulate[  DynamicModule[{v = v0, T, imesh, sb, reg, gList},    Column[{Column[{Text[Style["voxel values", Gray, Italic]],        Row[Join[Riffle[          Table[           With[{j = j},             Underscript[Grid[Table[With[{iL = i, jL = j, kL = k},                Checkbox[Dynamic[v[[iL, jL, kL]]]]], {k, kz}, {i, ix}],               Spacings -> 0],                                                                  Text[Style[Row[{"y", "=", j, If[j == jy + 1 - j, "",                                                         Row[{" | y", "=", jy + 1 - j}]]}], Gray,                Italic]]]], {j, Ceiling[jy/2]}],           "\[VerticalSeparator]"], {" "},         {Dynamic[           If[imesh =!= EmptyRegion[3],             Show[reg, ImageSize -> {{140}, {140}},              ViewPoint -> {-3, 1, 1}], ""],           TrackedSymbols :> {reg, imesh}]}]]}],                      Dynamic[T =        Table[Boole@v[[i, Min[j, jy + 1 - j], k]], {k, kz}, {j, jy}, {i,          ix}];                    imesh = ImageMesh[Image3D[T], Method -> "MarchingSquares"];                 If[imesh =!= EmptyRegion[3],        sb = SortBy[ConnectedMeshComponents@imesh, Volume];            Column[{reg = sb[[-1]];         Graphics3D[smooth3D[reg, sm], ImageSize -> 400,           ViewPoint -> {-3, 1, 1},                               Ticks -> None, Axes -> True,           AxesLabel -> {"x", "y", "z"}]}], "empty region"],      TrackedSymbols :> {v}]}, Dividers -> All]],  Row[{Underscript[Control[{{ix, 5, ""}, 3, 10, 1, SetterBar}],      Style["(x)", Gray]], "\[Times]",          Underscript[Control[{{jy, 5, ""}, 3, 10, 1, SetterBar}],      Style["(y)", Gray]], "\[Times]",               Underscript[Control[{{kz, 6, ""}, 3, 10, 1, SetterBar}],      Style["(z)", Gray]]}],      Delimiter,  {{sm, 1, "smoothness"}, 1, 3, 1, SetterBar},  {{v0, MapAt[True &, Table[False, {10}, {10}, {10}],     {{1, 1, 2}, {2, 1, 2}, {3, 1, 2}, {3, 2, 2}, {3, 3, 2}, {3, 4,        2}, {1, 5, 2},      {2, 5, 2}, {3, 5, 2}, {3, 2, 3}, {3, 4, 3}, {3, 1, 4}, {3, 3,        4}, {3, 5, 4},      {2, 1, 5}, {3, 3, 5}, {2, 5, 5}, {1, 1, 6}, {1, 3, 6}, {2, 3,        6}, {3, 3, 6}, {1, 5, 6}} ]}, None},  TrackedSymbols :> {ix, jy, kz, sm}, SaveDefinitions -> True]

Randomly and independently selecting the voxel values of a 3D image makes it improbable that very large connected components without many holes will form. Deriving the voxel values from random continuous functions instead yields different-looking types of 3D shapes that are more uniform over the voxel range. Effectively, the voxel values are no longer totally uncorrelated.

makeSymmetricShapes3DFunctionBased[{dimz_, dimy_, dimx_}, G_] :=  Module[{fun, randomImage, imesh, M = 2 Max[{dimx, dimy, dimz}], x, y,     z},  fun[x_, y_, z_] =      Sum[Cos[RandomReal[{-M, M}] (y - (dimy + 1)/2)]                                                          Cos[RandomReal[{-M, M}] x + 2 Pi RandomReal[]]                                                                    Cos[RandomReal[{-M, M}] z + 2 Pi RandomReal[]], {4}];   randomImage =     Image3D[Table[      If[fun[x, y, z] > G, 0, 1], {x, dimx}, {y, dimy}, {z, dimz}]] ;    imesh = ImageMesh[randomImage, Method -> "MarchingSquares"];        Select[ConnectedMeshComponents@imesh, 10 < Volume[#] &]]

Here are some examples of the resulting regions, as well as their smoothed versions.

SeedRandom[55]; symmFunctionShapes =   Table[makeSymmetricShapes3DFunctionBased[{d, d, d}, -0.3], {d, 5, 8}]

symmFunctionShapes /. bmr_BoundaryMeshRegion :>    displaySmoothedRegion[bmr,     Directive[Blend[{GrayLevel[0.5], Orange}, 0.1],      Specularity[Purple, 10]], ViewPoint -> {-3, -0.5, 1.2}]

Selected Examples of 3D Shapes

The initialization section of our notebook contains more than 400 selected regions of “interesting” shapes, classified into seven types (mostly arbitrarily, but based on human feedback).

types = <|"asymmetric general shapes" -> aymmetricGeneralShapes,                 "asymmetric animal shapes" -> asymmetricAnimalShapes,                 "symmetric general shapes"  -> symmetricGeneralShapes,                 "symmetric animal shapes" -> symmetricAnimalShapes,                  "symmetric alien shapes" -> symmetricAlienShapes,                     "asymmetric function animal shapes" ->      asymmetricFunctionAnimalShapes,                     "symmetric function animal shapes" ->      symmetricFunctionAnimalShapes|>;

Let’s look at some examples of these regions. Here is a list of some selected ones. Many of these shapes found in random 3D images could be candidates for Generation 8 Pokémon or even some new creatures, tentatively dubbed Mathtubbies.

selections = <|    "asymmetric general shapes" ->       {1, 4, 7, 8, 9, 10, 11, 13, 18, 20, 32, 35, 39, 43, 48, 49},     "asymmetric animal shapes" ->       {3, 4, 5, 6, 7, 10, 11, 13, 14, 15, 16, 17, 18, 24, 25, 28},     "symmetric general shapes"  ->  {1, 4, 7, 12, 15, 16, 18, 20, 22,       25, 26, 27, 28, 29, 33, 35, 36, 39, 41, 42} ,       "symmetric animal shapes" ->  {2, 3, 5, 6, 7, 8, 9, 10, 11, 12,       14, 15, 20, 22, 23, 25, 26, 31, 32, 35},       "symmetric alien shapes" ->      {2, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 20, 26, 30, 38, 39},         "asymmetric function animal shapes" -> {4, 5, 6, 9, 10, 11,       13, 15, 18, 22, 29, 30, 34, 39, 41, 54, 58, 66, 69, 76},         "symmetric function animal shapes" -> {1, 4, 5, 6, 10, 13, 16,       20, 21, 26, 29, 32, 34, 35, 36, 41, 78, 88, 90, 92}|>;

Many of the shapes are reminiscent of animals, even if the number of legs and heads is not always the expected number.

Do[Print[Framed[Style[t, Bold, Gray], FrameStyle -> Gray]];   Print /@ Partition[    Show[Rasterize[#], ImageSize -> {{200}, {200}}] & /@ (makeRegion /@        types[t][[selections[[t]]]]), 4],  {t, Keys[types]}]

asymmetric general shapes

asymmetric animal shapes

symmetric general shapes

symmetric animal shapes

symmetric alien shapes

asymmetric function animal shapes

symmetric function animal shapes

To see all of the 400+ shapes from the initialization cells, one could carry out the following.

Do[Print[Framed[Style[t, Bold, Gray], FrameStyle -> Gray]];
Do[Print[Rasterize @ makeRegion @ r], {r, types[t]}], {t,Keys[types]}]

The shapes in the list above were manually selected. One could now go ahead and partially automate the finding of interesting animal-looking shapes and “natural” orientations using machine learning techniques. In the simplest case, we could just use ImageIdentify.

ImageIdentify[ , "animal", 5, "Probability"]

This seems to be a stegosaurus-poodle crossbreed. We will not pursue this direction further here, but rather return to the 2D projections. (For using software to find faces in architecture and general equipment, see Hong2014.)
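
For readers who want to reproduce this step, here is a self-contained variant of the call above; the region picked here is an arbitrary example, so its identification will of course differ.

(* the chosen shape is an arbitrary example *)
ImageIdentify[
 Rasterize[Show[makeRegion[types["symmetric animal shapes"][[5]]], Boxed -> False]],
 "animal", 5, "Probability"]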

Modifying the 3D Shapes

Before returning to the 2D projections, we will play for a moment with the 3D shapes generated and modify them for a different visual appearance.

For instance, we could tetrahedralize the regions and fill the tetrahedra with spheres.

makeRegion[reg_, n_] :=   With[{sr = smooth3D[reg[[1]], n]},    BoundaryMeshRegion[sr[[1]], sr[[2, 1]]]]
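
The tetrahedralization-and-filling code itself is not shown above; a hedged sketch of the idea is to tetrahedralize a region with TriangulateMesh and put a small sphere at the centroid of each tetrahedron. The chosen shape and the radius rule are arbitrary.

(* a sketch: shape choice and sphere radii are arbitrary *)
With[{ob = symmetricAnimalShapes[[1]]},
 Graphics3D[{EdgeForm[], ob[[2]],
   Cases[MeshPrimitives[TriangulateMesh[makeRegion[ob, 2]], 3],
    Tetrahedron[l_] :>
     Sphere[Mean[l], 0.5 Min[EuclideanDistance[Mean[l], #] & /@ l]]]},
  Boxed -> False]]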

Or with smaller tetrahedra.

dualTetrahedron[Tetrahedron[l_]] :=   Tetrahedron[ Mean /@ Subsets[l, {3}]]

Or add some spikes.

addPrickle[Polygon[{p1_, p2_, p3_}], \[Alpha]_: 1 ] :=   Module[{mp = Mean[{p1, p2, p3}], normal, \[Lambda]},   normal = Normalize[Cross[p1 - mp, p2 - mp]];   \[Lambda] = Mean[EuclideanDistance[#, mp] & /@ {p1, p2, p3}];   Tetrahedron[{p1, p2, p3, mp + \[Alpha] \[Lambda] normal}] ]
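
A usage sketch for addPrickle (the original application is not shown): spike every boundary triangle of a smoothed region; the shape and spike length are arbitrary.

(* a sketch: shape and spike length are arbitrary *)
With[{ob = symmetricAnimalShapes[[1]]},
 Graphics3D[{EdgeForm[], ob[[2]],
   addPrickle[#, 0.8] & /@ MeshPrimitives[makeRegion[ob, 2], 2]}, Boxed -> False]]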

Or fill the shapes with cubes.

makeRandomPoints[d_, n_] := RandomPoint[makeRegion[d, 2], n]
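
A sketch of the cube-filling variant: place a small axis-aligned cube at each random interior point; the shape, point count and cube size are arbitrary.

(* a sketch: shape, point count and cube size are arbitrary *)
With[{ob = symmetricAnimalShapes[[1]]},
 Graphics3D[{EdgeForm[Gray], ob[[2]],
   Cuboid[# - 0.25, # + 0.25] & /@ makeRandomPoints[ob, 400]}, Boxed -> False]]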

Or thicken or thin the shapes.

thickenThinnen[gr_, d_] :=   Show[gr] /.    GraphicsComplex[vs_, b_, VertexNormals -> ns_] :>     GraphicsComplex[ vs + d Normalize /@ ns, b, VertexNormals -> ns]

Or thicken and add thin bands.

Module[{ob = symmetricAlienShapes[[43]], dr, dd},  dr = SignedRegionDistance[ob[[1]]];  dd[{x_Real, y_Real, z_Real}] := dr[{x, y, z}];  Row[{Show[makeRegion[ob], ImageSize -> 240],       ContourPlot3D[dd[{x, y, z}], {x, 0, 9}, {y, -1, 9}, {z, -1, 8},      Contours -> {0.33}, PlotPoints -> 80, MaxRecursion -> 0,     MeshFunctions -> {#3 &}, Mesh -> 40,      MeshShading -> {ob[[2]], None},     Evaluate[makeOptions[ob]], Boxed -> False, Axes -> False,      ImageSize -> 320]}]]

Or just add a few stripes as camouflage.

tigerize[{reg_, col_, {vp_, vd_}}, {col1_, col2_}, {stripes_, xyz_}] :=
 Module[{sm = smooth3D[reg, 3], g, size},
  g = Show[BoundaryMeshRegion[sm[[1]], sm[[2, 1]], PlotTheme -> "SmoothShading"],
    ViewPoint -> vp, ViewVertical -> vd];
  size = Abs[Subtract @@ MinMax[Transpose[sm[[1]]][[xyz]]]];
  g /. GraphicsComplex[vs_, rest__] :> GraphicsComplex[vs, rest,
     VertexColors -> (Blend[{col1, col2}, Sin[2 Pi stripes #[[xyz]]/size]^2] & /@ vs)]]
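
A usage sketch for tigerize; the stripe colors, the stripe count and the striping axis (1, 2 or 3) are arbitrary choices.

(* a sketch: colors, stripe count and axis are arbitrary *)
tigerize[symmetricAnimalShapes[[1]], {Black, Orange}, {12, 3}]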

Or model the inside through a wireframe of cylinders.

makeCylinders[pts_, m_, \[Rho]_] := Module[{nf = Nearest[pts]},     {Union[Flatten[      Function[p,         Cylinder[Sort@{#, p}, \[Rho]] & /@  Rest[ nf[p, m + 1]]] /@        pts]],     Sphere[#, \[Rho]] & /@ pts} ]
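
A usage sketch for makeCylinders, combined with makeRandomPoints from above; the point count, neighbor count and radius are arbitrary.

(* a sketch: point count, neighbor count and radius are arbitrary *)
With[{ob = symmetricAnimalShapes[[1]]},
 Graphics3D[{EdgeForm[], ob[[2]],
   makeCylinders[makeRandomPoints[ob, 300], 3, 0.05]}, Boxed -> False]]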

Or build a stick figure.

toStickFigure[ob_, \[Delta]_] :=   Module[{pts, nf, gr, ccs, modCol,                      f = RandomChoice[{Lighter, Darker}][#, RandomReal[{0, 0.2}]] &},     nf = Nearest[     pts = Cases[makeRegion[ob], _GraphicsComplex, \[Infinity]][[1,        1]]];   gr = Graph[     UndirectedEdge[#, nf[#, {Infinity, \[Delta]}][[-1]]] & /@ pts];   ccs = WeaklyConnectedGraphComponents[gr];   modCol[] := ob[[2]] /. Directive[col1_, Specularity[col2_, e2_]] :>                                                         Directive[f[col1],        Specularity[f[col2], RandomReal[{0.75, 1.25}] e2]];   Graphics3D[{EdgeForm[], CapForm[None],      {modCol[],         Cylinder[Union[Sort /@ List @@@ EdgeList[#]], 0.05]} & /@       Take[ccs, All],       ob[[2]], Sphere[#, 0.05] & /@ pts}, makeOptions[ob],     Boxed -> False,     Method -> {"TubePoints" -> 6, "SpherePoints" -> 6}]]

Or fill the surface with a tube.

makeTube[ob_, n_, \[Rho]_] :=  Module[{dr = makeRegion[ob, 1], pairs, neighbors, nl, mcs},   pairs = {#[[1, 1]], Last /@ #} & /@ Split[Sort[Flatten[{First[#],            Reverse[First[#]]} & /@ MeshCells[dr, 1],         1]], #1[[1]] == #2[[1]] &];   (neighbors[#1] = #2) & @@@ pairs;   nl = NestList[RandomChoice[DeleteCases[neighbors[#], #]] &, 1, n];   mcs = MeshCoordinates[dr];   Tube[BSplineCurve[mcs[[nl]]], \[Rho]]]
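
A usage sketch for makeTube; the number of random-walk steps and the tube radius are arbitrary.

(* a sketch: step count and radius are arbitrary *)
With[{ob = symmetricAlienShapes[[1]]},
 Graphics3D[{ob[[2]], makeTube[ob, 2000, 0.05]}, Boxed -> False]]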

Or a Kelvin inversion.

With[{g = With[{o = aymmetricGeneralShapes[[50]]},     With[{sm = smooth3D[o[[1]], 3]},      Show[BoundaryMeshRegion[sm[[1]], Style[sm[[2, 1]], o[[2]]],        PlotTheme -> "SmoothShading"]]]]},  {Row[{Show[g, ImageSize -> 240],                 Show[invert3D[g, {4, 4, 4}], ViewPoint -> {2.62, -2.06, -0.52},                       ViewVertical -> {-0.04, -0.92, -0.42},       ImageSize -> 280]}]}]

Shadows of the Selected Examples

If we look at the 2D projections of some of these 3D shapes, we can see again (with some imagination) a fair number of faces, witches, kobolds, birds and other animals. Here are some selected examples. We show the 3D shape in the original orientation, a randomly oriented version of the 3D shape, and the three coordinate-plane projections of the randomly rotated 3D shape.

projectionPair[{{type_, n_}, angles_}] :=  Module[{opts, col, sr},   opts = Sequence[ImageSize -> {{220}, {220}}, BoxRatios -> {1, 1, 1},      ViewPoint -> {3, -3, 3}, Axes -> False, Boxed -> False];   col = types[type][[n, 2]];   sr = smooth3D[types[type][[n, 1]], 3];   Row[Riffle[Framed /@ Rasterize /@        {Graphics3D[{EdgeForm[], col, sr},          ViewPoint -> types[type][[n]][[3, 1]],                                           ViewVertical -> types[type][[n]][[3, 2]],          ImageSize -> {{220}, {220}}, Axes -> False, Boxed -> False],         Graphics3D[{EdgeForm[], col, rotate[sr, angles]}, opts],         Graphics3D[projectTo2D[rotate[sr, angles]], opts]}, " "]]]

Unsurprisingly, some are recognizable 3D shapes, like these projections that look like bird heads.

projectionPair[{{"asymmetric animal shapes", 15}, {-2.8, 3.05, 2.35}}]

Others are much more surprising, like the two heads in the projections of the two-legged-two-finned frog-dolphin.

projectionPair[{{"symmetric general shapes", 34}, {2.8, -1.4, 1.4}}]

Different orientations of the 3D shape can yield quite different projections.

projectionPair[{{"asymmetric general shapes",     49}, {-3.05, -0.75, -1.3}}]

For the reader’s amusement, here are some more projections.

projectionPair[{{"symmetric alien shapes", 3}, {-0.4, -0.25, 0.85}}]

projectionPair[{{"symmetric alien shapes", 7}, {0., 2.55, 0.6}}]

projectionPair[{{"asymmetric general shapes", 11}, {-1.25,     0.05, -1.6}}]

projectionPair[{{"asymmetric general shapes",     9}, {-0.15, -0.85, -0.55}}]

projectionPair[{{"symmetric general shapes", 26}, {1.8, -2.6, -2.3}}]

projectionPair[{{"asymmetric animal shapes", 5}, {2.65, 2.1, -2.85}}]

projectionPair[{{"asymmetric general shapes",     34}, {-3.1, -2.95, -1.}}]

Shapes from 4D Images

Now that we have looked at 2D projections of 3D shapes, the next natural step would be to look at 3D projections of 4D shapes. And while there is currently no built-in function Image4D, it is not too difficult to find the connected components of white 4D voxels ourselves. We implement this through the graph theory function ConnectedComponents and consider two 4D voxels to be connected by an edge if they share a common 3D cube face. As an example, we use a 10×10×10×10-voxel 4D image. makeVoxels4D makes the 4D image data, and whitePositionQ marks the positions of the white voxels for quick lookup.

makeVoxels4D[{dimw_, dimz_, dimy_, dimx_}, {black_, white_}] :=  Table[RandomChoice[{black, white} -> {0,       1}], {dimw}, {dimz}, {dimy}, {dimx}]

The 4D image contains quite a few connected components.
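
The glue code that builds the adjacency graph gr is not shown above; here is a hedged reconstruction that connects white voxels sharing a 3D face. The random seed and the weights {3, 1} are arbitrary choices, and getConnected4DVoxels below packages an equivalent construction.

(* a reconstruction: seed and weights are arbitrary choices *)
SeedRandom[1];
voxels4D = makeVoxels4D[{10, 10, 10, 10}, {3, 1}];
whitePositions = Position[voxels4D, 1];
whitePositionQ[_] = False;
(whitePositionQ[#] = True) & /@ whitePositions;
gr = Graph[Union @@ Table[
     If[whitePositionQ[# + UnitVector[4, j]],
        UndirectedEdge[#, # + UnitVector[4, j]], Nothing] & /@ whitePositions,
     {j, 4}]];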

ccs = ConnectedComponents[gr];

Here are the four canonical projections of the 4D complex.

With[{cc = ccs[[1]]},  {Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#1, #2, #3}) & @@@ cc,                       AxesLabel -> {"x", "y", "z"}, Axes -> True,     Ticks -> False],   Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#1, #2, #4}) & @@@ cc,                           AxesLabel -> {"x", "y", "w"}, Axes -> True,     Ticks -> False],   Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#1, #3, #4}) & @@@ cc,                           AxesLabel -> {"x", "z", "w"}, Axes -> True,     Ticks -> False],   Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#2, #3, #4}) & @@@ cc,                           AxesLabel -> {"y", "z", "w"}, Axes -> True,     Ticks -> False]}]

We package the finding of the connected components into a function getConnected4DVoxels.

getConnected4DVoxels[Image4D[l_], n_] :=   Module[{posis, blackPos, edges, gr, v = UnitVector[4, #] &},   posis =     DeleteCases[     Level[MapIndexed[If[# === 0, #2, Nothing] &, l, {-1}], {-2}], {}];   (blackPos[#] = True) & /@ posis;    edges = Union[Flatten[Table[If[TrueQ[blackPos[# + v[j]]],                Sort@ UndirectedEdge[#, # + v[j]], {}] & /@ posis, {j,         4}]]];   gr = Graph[edges];   Take[Reverse[SortBy[ConnectedComponents[gr], Length]], UpTo[n]]]

We also define a function rotationMatrix4D for conveniently carrying out rotations in the six 2D planes of 4D space.

rotationMatrix4D[{\[Omega]xy_, \[Omega]xz_, \[Omega]xw_, \[Omega]yz_, \ \[Omega]yw_, \[Omega]zw_}] :=    With[{u = UnitVector[4, #] &, c = Cos, s = Sin},     Fold[Dot, IdentityMatrix[4],        {{{c[\[Omega]xy], s[\[Omega]xy], 0, 0}, {-s[\[Omega]xy],         c[\[Omega]xy], 0, 0},  u[3], u[4]},          {{c[\[Omega]xz], 0, s[\[Omega]xz], 0},        u[2], {-s[\[Omega]xz], 0, c[\[Omega]xz], 0}, u[4]},          {{c[\[Omega]xw], 0, 0, s[\[Omega]xw]}, u[2],        u[3], {-s[\[Omega]xw], 0, 0, c[\[Omega]xw]}},          {u[1], {0, c[\[Omega]yz], s[\[Omega]yz],         0}, {0, -s[\[Omega]yz], c[\[Omega]yz], 0}, u[4]},          {u[1], {0, c[\[Omega]yw], 0, s[\[Omega]yw]},        u[3], {0, -s[\[Omega]yw], 0, c[\[Omega]yw]}},          {u[1],        u[2], {0, 0, c[\[Omega]zw], s[\[Omega]zw]}, {0,         0, -s[\[Omega]zw], c[\[Omega]zw]}}}]];

Once we have the 3D projections, we can again use the above function to smooth the corresponding 3D shapes.

to3DImage[l_] :=   With[{mins = Min /@ Transpose[l]}, (# - mins) + 1 & /@ l]

In the absence of Tralfamadorian vision, we can visualize a 4D connected voxel complex, rotate this complex in 4D, then project into 3D, smooth the shapes and then project into 2D. For a single 4D shape, this yields a large variety of possible 2D projections. The function projectionGrid3DAnd2D projects the four 3D projections canonically into 2D. This means we get 12 projections. Depending on the shape of the body, some might be identical.

extractRegion[vs_] := Last[SortBy[ConnectedMeshComponents[     ImageMesh[Image3D[SparseArray[vs -> 1]],       Method -> "MarchingSquares"]], Volume]]
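
Here is a minimal usage sketch combining the two helpers above with the earlier smooth3D function: project one 4D component into 3D by dropping its w coordinate, then mesh and smooth the largest connected piece. The component index and color are arbitrary.

(* a sketch: component index and color are arbitrary *)
Graphics3D[{EdgeForm[], Darker[Orange],
  smooth3D[extractRegion[to3DImage[Union[Most /@ ccs[[1]]]]], 2]}, Boxed -> False]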

We show the 3D shape in a separate graphic so as not to cover up the projections. Again, many of the 2D projections, and also some of the 3D projections, remind us of animal shapes.

projectionGrid3DAnd2D[ccs[[1]], {1, 2, 3, 4, 5, 6}, 2,   Directive[GrayLevel[0.4], Specularity[Yellow, 12]]]

The following Manipulate allows us to rotate the 4D shape. The human mind sees many animal shapes and faces.

Manipulate[  projectionGrid3DAnd2D[   ccs[[c]], {\[Omega]xy, \[Omega]xz, \[Omega]xw, \[Omega]yz, \ \[Omega]yw, \[Omega]zw},                                                     1,    Directive[GrayLevel[0.4], Specularity[Yellow, 12]]],  {{c, 2, "component"}, 1, 12, 1, SetterBar},   Delimiter,  {{s, 1, "smoothness"}, {0, 1, 2}},   Delimiter,  {{\[Omega]xy, 0}, -Pi, Pi, ImageSize -> Small},  {{\[Omega]xz, 0}, -Pi, Pi, ImageSize -> Small},  {{\[Omega]xw, 0}, -Pi, Pi, ImageSize -> Small},  {{\[Omega]yz, 0}, -Pi, Pi, ImageSize -> Small},  {{\[Omega]yw, 0}, -Pi, Pi, ImageSize -> Small},  {{\[Omega]zw, 0}, -Pi, Pi, ImageSize -> Small},  TrackedSymbols :> True, ControlPlacement -> Left,  SaveDefinitions -> True]

Here is another example, with some more scary animal heads.

SeedRandom[8]; projectionGrid3DAnd2D[          getConnected4DVoxels[    Image4D[makeVoxels4D[{10, 10, 10, 10}, {4, 1}]], 5][[1]],     {-1.8, 2.6, 1., 2.2, -2.7, -1.5}, 3,   Directive[Darker[Yellow, 0.4], Specularity[Red, 10]]]

We could now go on to 5D images, but this would very probably bring no new insights. To summarize some of the findings: after rotation and smoothing, a few percent of the connected regions of black voxels in random 3D images have an animal-like shape, or at least an artistic rendering of one. A large fraction (~10%) of the 2D projections of these 3D shapes show the pareidolia phenomenon quite clearly, in the sense that we believe we can recognize animals and faces in them. 4D images, whose voxel count increases exponentially with dimension, yield an even larger number of possible animal and face shapes.

To download this post as a CDF, click here. New to CDF? Get your copy for free with this one-time download.

Analyzing and Translating an Alien Language: Arrival, Logograms and the Wolfram Language http://blog.wolfram.com/2017/01/31/analyzing-and-translating-an-alien-language-arrival-logograms-and-the-wolfram-language/ http://blog.wolfram.com/2017/01/31/analyzing-and-translating-an-alien-language-arrival-logograms-and-the-wolfram-language/#comments Tue, 31 Jan 2017 15:30:32 +0000 Michael Gammon http://blog.internal.wolfram.com/?p=34787 Black and white logogram

If aliens actually visited Earth, world leaders would bring in a scientist to develop a process for understanding their language. So when director Denis Villeneuve began working on the science fiction movie Arrival, he and his team turned to real-life computer scientists Stephen and Christopher Wolfram to bring authentic science to the big screen. Christopher specifically was tasked with analyzing and writing code for a fictional nonlinear visual language. On January 31, he demonstrated the development process he went through in a livecoding event broadcast on LiveEdu.tv.

video link

Color logogram

Scientists and general viewers alike were interested in the story of the Wolframs’ behind-the-scenes contributions to the movie, from Space.com to OuterPlaces.com and others. SlashFilm.com went further, pointing readers to the Science vs. Cinema Arrival episode featuring interviews with the Wolframs, other scientists, Jeremy Renner, Amy Adams and Villeneuve. Wired magazine also interviewed Christopher Wolfram on the subject of the Wolfram Language code he created to lend validity to the computer screens shown in the film. Watch Christopher Wolfram walk you through his development process.

Wolfram Research has a track record of contributing to film and TV. From the puzzles in the television show NUMB3RS to the wormhole experience in Interstellar, Wolfram technology and expertise have enriched some beloved popular art and entertainment. With Arrival, however, Stephen and Christopher consulted more extensively on what Stephen calls “the science texture” of the film.

Science and technology shape our world now more than ever. Science fiction movies are finding a wider audience, and we find these stories are crafted into films by some of the most skilled filmmakers around. If filmmakers such as Villeneuve continue to recognize the importance of getting the science right, science fiction will continue to live up to Arthur C. Clarke’s claim that “science fiction is escape into reality…. [It] concern[s] itself with real issues: the origin of man; our future.”

For more information on the Wolframs’ involvement in Arrival, read Stephen Wolfram’s blog post, “Quick, How Might the Alien Spacecraft Work?”

Meet the Authors of Hands-on Start to Wolfram Mathematica, Second Edition http://blog.wolfram.com/2017/01/24/meet-the-authors-of-hands-on-start-to-wolfram-mathematica-second-edition/ http://blog.wolfram.com/2017/01/24/meet-the-authors-of-hands-on-start-to-wolfram-mathematica-second-edition/#comments Tue, 24 Jan 2017 17:29:49 +0000 Jeremy Sykes http://blog.internal.wolfram.com/?p=34714 Hands-on Start cover

Jeremy Sykes: To celebrate the release of Hands-on Start to Wolfram Mathematica and Programming with the Wolfram Language (HOS2), now in its second edition, I sat down with the authors. Working with Cliff, Kelvin and Michael as the book’s production manager has been an easy and engaging process. I’m thrilled to see the second edition in print, particularly now in its smaller, more conveniently sized format.

Q: Let’s start with Version 11. What’s new for Version 11 in HOS2 that you’d like to talk about?

Michael: As with any major Mathematica release, there are more new things to talk about than can be discussed in the time we have available. But I’m getting a lot of use out of the new graphics capabilities—the new labeling system, the ability to have callouts on a graph, word clouds, enhanced geographical visualizations and even things you don’t think about, such as the removal of discontinuities when plotting things like tan(x). The second edition of the book also includes updates for working with data, like the ability to process audio information, working with linguistic data, new features for date computation and preparing output for 3D printing. (Also new is an index, to make it easier to find specific topics.)

Q: Getting back to basics, I know that HOS existed in some form long before the book came out. Maybe you could fill out some of the history for us.

Cliff: Twenty years ago, I started at Wolfram on the MathMobile, traveling from city to city, visiting organizations (mostly universities—some companies and government labs as well). The MathMobile was a 30-foot trailer (connected to a truck) with three laptop stations where people would come into the trailer to see Mathematica in action in this mobile computer lab. My job was to walk people through how to get started with Mathematica, sometimes answering technical questions for existing users, and sometimes going through a first overview for non-users. Afterward, I worked in technical support and then in sales, and through these experiences, I had the opportunity to see many types of first-time interactions with Mathematica. That is where my passion for helping people get started with Mathematica began. Several years ago, I came up with the idea for a free video series showing people how to get started. That was extremely popular and led to many requests for a book version, so we translated the video series into a book.

Q: Tell me a bit about the partnership behind CKM Media and how you came together for the project.

Cliff: In the late 1990s, Kelvin and I began working closely together on many Wolfram projects relating to academia. We found a lot of shared ideas and approaches to problems, bringing very different strengths to those projects. I tended to look at things from a liberal-arts-college perspective and that of a math student who had strong math skills but not a lot of programming experience. Kelvin often came at things more from the mindset of an engineer at a research university. We found that these different mindsets helped ensure that more members of academia were well represented in those projects. Michael started at Wolfram in the mid-2000s. The three of us worked closely together after his hire. Michael brought a computer science mindset with a focus on data analytics and programmatic solutions to real-world problems. So while we have been good friends for about fifteen years, we also bring such different skill sets to projects and feel we make for a great collaborative team for this book.

Q: What makes HOS a good Mathematica teaching tool?

Kelvin: I think it’s a good teaching tool from two perspectives—one, it’s extremely useful for teaching anyone how to get started with Mathematica. We’ve had lots of great feedback from students, teachers, professors and lots of different types of people in the government and commercial sectors. But also, it’s been a great tool for the classroom. Over the years, we’ve learned a lot from the free Hands-on Start video series. The comments and feedback from educators using the videos for their classes helped shape the philosophy of the book. What we wanted was a slow buildup of material that works well for non-users, and specifically for non-users without any coding experience. As the chapters progress, the examples get more intricate and more interesting, using multiple Mathematica functions. At the same time, we wanted the first few chapters to also show a complete sample project in Mathematica. Then, when syntax conventions are covered, they are framed with a discussion about why that convention is useful for a project. The second thing we wanted to do was show the scope of Mathematica and the Wolfram Language.

There are many good books and tutorials for learning Mathematica, but they often focus on one field or class. Our team wanted to provide a good foundation for how to use Mathematica and the Wolfram Language for a broad range of applications. Even Mathematica users who have focused on a select few functions could learn how to use Mathematica in new types of applications or projects. And it’s been fun to see the results so far.

The first edition has been a recommended or required text in classes like chemistry, economics, physics and mathematics, and in classes specific to teaching Mathematica or the Wolfram Language itself.

Hands-on Start authors

Jeremy: HOS2 is available from our webstore and on Amazon. It's available as a beautiful, perfect-bound 7×10 paperback, and also as a fully updated Kindle version. For those who buy the printed book, we have enrolled the book in Kindle's MatchBook program, which allows you to buy the EPUB at a reduced cost. We also have plans to release on iTunes. For our international users, we plan to release translated versions of HOS2 in Japanese, Chinese and other languages.

Be sure to check out our upcoming series of Hands-on Start to Wolfram Mathematica Training Tutorials. Learn directly from the authors of the book and ask questions during the interactive Q&A. Visit Wolfram Research’s Facebook event page to learn more about upcoming events.

Exploring a Boxing Legend’s Career with the Wolfram Language: Ali at 75 http://blog.wolfram.com/2017/01/17/exploring-a-boxing-legends-career-with-the-wolfram-language-ali-at-75/ http://blog.wolfram.com/2017/01/17/exploring-a-boxing-legends-career-with-the-wolfram-language-ali-at-75/#comments Tue, 17 Jan 2017 17:41:04 +0000 Jofre Espigule-Pons http://blog.internal.wolfram.com/?p=34647 Muhammad Ali (born Cassius Marcellus Clay Jr.; January 17, 1942–June 3, 2016) is considered one of the greatest heavyweight boxers in history, with a record of 56 wins and 5 losses. He remains the only three-time lineal heavyweight champion, so there’s no doubt why he is nicknamed “The Greatest.”

I used the Wolfram Language to create several visualizations to celebrate his work and gain some new insights into his life. Last June, I wrote a Wolfram Community post about Ali’s career. On what would have been The Greatest’s 75th birthday, I wanted to take a minute to explore the larger context of Ali’s career, from late-career boxing stats to poetry.

First, I created a PieChart showing Ali’s record:

bouts = <|"TKO" -> 21, "KO" -> 11, "UD" -> 18, "RTD" -> 5, "SD" -> 1,     "LUD" -> 2, "LSD" -> 2, "LRTD" -> 1|>; PieChart[bouts, ChartStyle -> 24,   ChartLabels ->    Placed[{Map[Style[#, Bold, FontSize -> 14] &, Values[bouts]],      Map[Style[#, FontFamily -> "Helvetica Neue", Bold,         FontSize -> 16] &, Keys[bouts]]}, {"RadialCenter",      "RadialCallout"}], PlotRange -> All,   SectorOrigin -> {Automatic, 1},  ChartLegends -> {"Technical Knockout", "Knockout",     "Unanimous Decision", "Retired", "Split-Decision",     "Lost - Unanimous Decision", "Lost - Split-Decision",     "Lost - Retired"},   PlotLabel ->    Style["Ali's Record", Bold, FontFamily -> "Helvetica Neue",     FontSize -> 22], ImageSize -> 410]
Ali's Record

Ali was dangerous outside the ring as well as inside it, at least for the white establishment in the US. He converted to Islam and changed his name from Cassius Clay, which he called his “slave name,” to Muhammad Ali. Later he refused military service during the Vietnam War, citing his religious beliefs. For this, he was arrested on charges of evading the draft, and he was pulled out of the ring for four years. All this made Ali an icon of racial pride for African Americans and the counterculture generation during the 1960s Civil Rights Movement.

Perhaps a lesser-known fact about Ali is that he played an important role in the emergence of rap, and he was an influential figure in the world of hip-hop music. He earned two Grammy nominations and he wrote several poems, among which is the shortest poem in the English language:

“Me?
Whee!”

So let’s create a WordCloud of his most popular poems. First, I need to import his poems from a database site like Poetry Soup and do some string processing from the HTML file in order to get the poems as plain strings:

poemsHTML =    Import["http://www.poetrysoup.com/famous/poems/best/muhammad_ali",     "Source"]; poems = StringReplace[    StringCases[poemsHTML,      Shortest["<pre>" ~~ x__ ~~ "</pre>"] -> x], {"\n" -> " ",      Shortest["<" ~~ __ ~~ ">"] -> " "}];

Here are the first three poems:

Take[poems, 3]

Then I get a list of the important words with TextWords and delete the stopwords with DeleteStopwords. Next, I style the word cloud with a boxing glove shape:

WordCloud[  StringDelete[   DeleteStopwords@Flatten[TextWords@poems], {"\[CloseCurlyQuote]",     "ve"}], \!\(\* GraphicsBox[ TagBox[RasterBox[CompressedData[" 1:eJztnb1rXMsZxk2SIo0gqFMlotKlGqcyBFVBnUNKF/cqyiVgfEE3IALq1KpW q8KdQZ1Lgf8AV24NKoLBnUCNGjcbP5u7aL3ZPfPOnJl5P87zwO/6WtjrPefM nJl5P//4488v/vGbJ0+e/PL7b/958cPpn09Ofvj3X//w7Td/e/3LP396ffz3 v7z+1/FPxyd/+vG33374n1/53ZO5ZoQQQgghhBBCCCGEEEIIIYQQQgghhBBC CCGEEEIIIYQQQgghhBBCCCGEEEIIIYQQQgghhBBCCCHrOD4+nr1582b28ePH 2ZcvX2bLuru7m//8+vp6dnJyMtvZ2VH/voRE4OzsbD63SvTp06fZ+fm5+jUQ 4o3Dw8PZ+/fvi+bdJl1dXc22trbUr40Qy2CveXt7W3Xurery8lL9OgmxBube 58+fm869ZeHs+OLFC/XrJkSb58+fz89sWnr79q36PSBEC9gvLQjr7+7urvr9 GOLg4GD28uXLub0XNibsp2EfBniP4Pf4+atXr+br+9OnT9W/M7ELxtLDw4P2 1Ps/4Xtp3xvMn4uLi9nNzc18f/D169dR14Sz9bt37+a2ZevvGdIHjAfLwljt eT+wbmE/0OssjH8Haybn4/R49uzZ3A7iQfBhtLwPmAOaZ+CFPnz4MPcBaY8N 0h6cYTzp/v6+6nlqf39/Pq9X43qsCPEPeDdojxPSBit2F4lw9qoZU4PP6ulv GSvaiOOBvY4XwQZSI4YG9kt8llfhvIB1W3vskPFY3XetqpavHvttC2e8Wjo6 OlIfQ6SM7e3t+XnKg+BbG3u92G96sTXlCjZb7fFE8oAdw6Lfb1WYM2PtgfDh ebjWseJ66AfMv7H+5B6Cf3LMdcJ/OIW5tyyeD+2D/af1cYn3w5g4GMSUeznj 1hauW3uMkWGsn//g/yq9NvjNSvOHI4n5XnaxvjaMiXmBv4x6FGuD2MPy+oC9 Z6nPAfZAD2fb3kK8hfaYI49Yjr1GbErJOxs+ek9xBRrSHnfkf8AnZlWIUSm5 Jtg7qbRYH0sf+NWsqtRuYHlPbU1j7FukDlZ9ECW+ZPgbeO7Ll/YYnDIWz0p4 J6AmTe61WD7PWhdrYemA2CxrQswZ4gNyrmNvb8+8P8W66CvsD2KVrAk5CrnX AZ8DNV6ldi9SjrU81BK7AHIkqDoqef+RcrDvsCTUv8+9Bto964rxo/1ALoQl 5eY84PtHze3TlvbYnAqW1o/cOCnY7qh20h6bU8CS/SI3350xL+2lPT6ngJV8 pNxaX9bOr1GlPT6jY2Uc557/6HfvJ+0xGh0Lyp1/tfuFUpsFO5f2GI2MhbzV XP+DxRi6yEIfGe1xGhXkz2kr1/9uyXY7FeGdl3ouyK8BqHnM/jJytOvS575f I9XU9aSUnQwx9JuEfSyeG+LdYHdAvgt7JT6iqdwzBtc/PSF+f+jZoH5drvD8 MS97956zBGofaQl+kJzaE1z/dJWqj1wjxwZ7InwO8ly050YvNHNzc86AXP/0 leqNU9tHhGceve7+6elp1XtWIuRmpL4n7Z/6kjynVvsU7Jei5i5i3bcg3ONN ddU9+d8j18eQ+G1bC/c30lxEHWlrWq2V4Cn3bxHbGjVXHz3erIwnvLOxh9Oe Q2Ox4JNfp8W9tVxHcVWr59qIsTup8aRR8wR7X8+9vK3WSYM87T9xH9fdXyux tzUkyZ/XfO943J+W+HGo9UI8yKb7jL11hDOiZIxrv9NhM/IUl+NpnbGslM8a wP9prS5PrlL7PbyHrMhL71Ltd1YESWInF3iuqyHxSWjGeayT9b2p5Zr1XrTp DLgO77EFkvFscZ0vqQHWC0/2fquS1puOcK9TsYTW6n8ty2qPDMZcjpO0xg3O Jd4l2W9bt/9arIlKlUtyNgIW8jFrSBKraXEfuipL8zDCu1lT0nw3KzGAY4RY lNR1WrKHpmRlX2rNfuVJUltblHssuV5vPi4L/TKYf1AmaZ5/JJuz5Ho9Sttv Qb9gmaQxiVZqs46VJEfCUzzvqtADVmP+IY6Hyhf2lpL7i31OFElivjzYYoaU U7+hFuzDkC9prRu8V6NIcmaKMJY06jSyF0O+ENsuubeR9vibcqmX8R77s5B0 j1OLKPa6XpLGg0bag0rWwKH6hR7VM//Qmx1ZW5IzUbQcMMkZKVp+sjTuogb0 S8gl3aNEsYNCkj5X2KdGlCQHrQaME5UJObeS+xltby+55mhr4LJ6zEHvtuRe gt8rdS8t5wqUSLIOWKwBVlO5/Z5LiFrvq6YkMZIg0r5+ite8Sa1rYXAOpiXp eRDNDiOp/RApBm9IreNJOQeHJfXHR9rTS/0vU7IlbG9vN5uDkcZOC0nWwGhx DhJfBHIIpyRpjnYJU3qX5UpaHyZSPIw0fyCS/0UiqV28hBY+enxf2BG9z2/J ePScJ7AqnEskYyaa/0WqVr0QW9QiX65dBZsSxrLHHPKprQdDdYkXoP/fVNUq nrtF3YFN7wv4kjAfPYxbiS3MQo+4WpKed6bgixhSqzjS2rVmU/VVPDxHyZrg uUbvsqS236nZYdapVU5FzXqXEt+u9V4LknNRJH9gqpf1gki2p1JJz8y51Myl T9WL81BzS2KL8bCWSyR9rzO/5lHSOnq51LrHqZojHuyIqXzVKPU/pPk5EfLj a6plPkWNPWJqDbH+PpXYvqLY5qXvcw82tJ5qWZO0Rv2TVG9k6/UOJHuzCLYY qa8LeQPU92rprwdj7TOp+nDWz/WoyTD0/T2cZ1OS9h7iHnSzJHbzMYyphZKy sVm2iUpi07z3TJL6ITy8LzXVoyZw6Twc8mFar3kgyRfwHuOeWucXRM6Nr6Fe fQxL+lcN5XhY38el3m24Ns+S1AIAkeJ/WqmVn3Ad8EXn7B+H8l6s53ym6obC 3uRV0jxUPD9Kpl5zMHdv4nkOpvI0vdoIc97Z3vfaPdWzBukC2MlSz2io9oZl O5vE3uw1F0taDwV1DCm5Un64liB2d9NcHDrzW14HJb55yzbdTZLW5Y/UG6OX etUfTa2Lq3GT+NmmP2/ZJpOyiXq0x0jHyJRzAseoR91DKRifeN5YG4d6lVv2 TaRszZbfH+uUMz54BixTL/9Ebazu51Kx5pbPsqvKGRuRetP0lpVe9rlYjb2I MgdzxoWHHBbL6tkXpiZWY7YjzMGc+WfZPuZF0lrk1rCau5SyyVgfs9K6vGBr a8vsmcCTvM5Bq/uf1Bpi2XaYWsNXYW31OvI6B6326fFaByc3ft9rnIFFeZ2D Vsey5HxtrYZMKlfT+vf3Ls9z0GJejCSv3Mo+Gu8L7I1z7jl9EPXleQ5a65GS 
U5tAWyW9Rzj/2ignH9oilpQTU1KSU1lDsKNI64AuY3HPEUXe56AlP2FubZDe dsXSugmW7nFEpWrpWsdKnH7JuwxnsR52pVyfwwL4/xgD2l45PlmrWIhbK+0f gNqcrWocYm+ca3NZAN+Phfs6BbXukd0DC7Vyx+ZC18qtR/6itN7LJqzZuqIL Oc/ac6gGmqpVlwdrIp5Hbj1qnNeQ81WjfwFtn/3VqidobzTrJ7ToMY46AqhL hs/GGom5gXMd/h+2FeRXSusLSoCtlLXodZQbI2EV2A+0NJTzD4bqU1nAa42p KCo9s1tEq3Z16nvB7mVxv4FaQrS76Et7HNSmdwypxK68GOdjbSW1wL4TthtK X17zd4fobdNLzavVfEHN+j2IIaC/3Za81pJJ0TOvJmUXWbc/hj8wdYasCfac XPdsqkfPFw1go++hsfV8sY9t1fsKnwtbMc97ttXzXdybHjHRkrOgRFij4N8b Yx+D7RX+Cvgv6GPwI9jztedKS1rvSVPxaahNnSvsU+ELxDsE/kGcJ1FPFXMM v2Lvi8/FORT/Ps4TnHM+FdEes0rrmtZDdYiBhRg6yq5KY+m9cXR01Owepnzv rPlADUmz10tvWqxHEntMq3wIKoa050Vvaq9Jqf5KHnu7UP3kPW+3lJp566mc Lw/1tCk9tYjz9wDOb7Vi2VL30ErtNMqmWvmFPVCrNnAqBlsrfpyyL8+1DGtR Y5+Yim+gTZTapCh582MZ67NI7SUYG01tUuT4tFzgnynV7u7u4GfTL0Gtk/da opbmYepzKWqdSuvvRSd3Hkr88xS1TjXqbkUFsZ9SpeYg/fPUOk3VL5+DNM8h tae32huR0tWU4kPHAHtnyo+fqifKOUitCrnU2mPbE9izD+Wfcw5SuaItpoxN /U44B6lcwUagPZ69si7ehXOQylGEni7arMZ+puYg6sJQ1EI1exJMGdR2WSh1 vtasu0/ZUoTegpZAfaWFzTT1ZykK4hrYBomvlaK4BrYlVReSMdvUlPN0LcDc pWkrah8JT+AZUNMVY7P16VFrn7KpqdTutU7vHmyUDUny2kgfGCszTaF3j/bY I4/07glM6Yp1KuxRs6YwZV/IP9Uec+R7cDanpiH6ImxCu8x0pD3WyHp69eSm dJWquU50YcxabLFOk314JoytVK1nos/x8bH2MKEaiXtQP1DxhJh87XFF5DB+ O5YQe5HKXSO24H40lvA8tccUyYdxazHE3oF+wbOjfAv1ZrXHESlnf39fewhR I0U/hH9Y38Kv0JdLe/yQ8aAHMuVPqPGsPXZIPTb1sKBsin7AeOT0GqV0dX9/ rz5eSBu4FvoQ6pFojxXSBmm/X0pPzImPD22kdnV+fq4+Pkh7WHfNpmgDnRaM nbEl1oSZJrC9Ufpij6TpQvuMvliPglxfX2sPw8nq9vZW/fkTG7AecH9x/pFl WAexr5iHRNZxcnKiPTQnIa5/ZAj4p6h2Ygw2kQA7OVVf9P+RHGAvp+qJ8S+k BOZX1BHjP8kY6LMYJ9Qu0H6GxD9cD/OFe7a3t6f+7EgceD6UC312tJ8XiQlr 5qcFH6v2cyKxof9wvbBPQKyR9vMh04D9K77XxcWF+jMh0wPv/KnbarD2oX65 9rMg0+bq6kp7KnQX+uecnp6q33tCFhwcHExmTWTfI2IZnIui9liDTXhnZ0f9 HhMiIVJePuYez3zEI1gz4Kv2Knx3xrmQKMBu8/DwoD2tkkJsLP0MJDLoN2Mt NxHvBuydYVfSvj+E9OTs7Gw+HzVsOLDhIt7n8PBQ/T4QYgHk91xeXs7rPLTY s8KXDp8C1mH2kyZExtHR0Tz3FXPn5uZmvmZi/UJd8Lu7u+9+xc8xf2HDxL4S fw/zjb4EQgghhBBCCCGEEEIIIYQQQgjJ578q61S7 "], {{0, 225}, {225, 0}}, {0,        255}, ColorFunction->RGBColor], BoxForm`ImageTag["Byte", ColorSpace -> "RGB", Interleaving -> True], Selectable->False], DefaultBaseStyle->"ImageGraphics", ImageSize->Automatic, ImageSizeRaw->{225, 225}, PlotRange->{{0, 225}, {0, 225}}]\), ColorFunction -> "SolarColors",   ImageSize -> 500]

With just a glimpse, I can see that he mainly wrote about his opponents, himself and boxing.

In my Community post from last June, I showed how to create the following DateListPlot that shows his victories over time. Note that his suspension period happened just as his performance was rising steeply:

Number of victories over time

I imported the other data from his Wikipedia page, which allowed me to visualize where these fights took place with GeoGraphics and who his opponents were:

Ali's career
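
A minimal sketch of the kind of map used here, assuming a hypothetical list fightLocations of city Entity objects extracted from the Wikipedia fight table:

(* fightLocations is a hypothetical list of city Entity objects *)
GeoGraphics[{Red, PointSize[Large], Point[GeoPosition[#]] & /@ fightLocations},
 GeoRange -> "World"]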

Now, as a continuation of that previous post, I would like to analyze Ali's opponents further. For this, I'm going to take the data from the BoxRec.com site, where one can find a record of all of Ali's opponents. I'm going to skip the process of parsing the relevant data out of the imported HTML and will directly use a dataset that I created for this purpose (see the attached file at the end of this post).

First, let's create a CommunityGraphPlot with all of Ali's opponents. I want the vertices of the graph to represent the boxers and the edges to indicate whether two boxers met in the ring. Each community here represents a group of boxers that are more connected to each other than to the rest of the boxers, and each community is shown in a different color. For this, I need the list of opponents of each of Ali's opponents:

dataset = Import["datasetBoxers.m"] boxers = Normal@Normal[dataset[All, "opponents"]]; boxersID = Normal@Keys[dataset];

In addition, I can indicate the number of bouts fought by each boxer by making the diameter of each vertex proportional to it, and mark the losses that Ali had during his career with red edges, using VertexSize and EdgeStyle, respectively (see the complete code in the attached notebook):

CommunityGraphPlot[  Map[#[[1]] <-> #[[2]] &,    DeleteDuplicates@    Map[Sort,      Flatten[Table[       Map[{boxers[[i, 1]], #} &,         Intersection[boxers[[i, 2]], boxersID]], {i, Length[boxers]}],       1]]],  VertexLabels -> vertexlabels,  VertexSize -> vertexsizes,  EdgeStyle -> rededges, ImageSize -> 620]

We can observe that Moore had the largest number of bouts. But was he better than Ali in terms of victories over losses?

One way to compare the boxers is by calculating the following ratio for each one:

(Wins - Losses) / Total Fights

I can then use a machine learning function such as FindClusters to classify the opponents into different categories, visualized here with a Histogram:

wins = Values@Normal@dataset[All, "wins"]; losses = Values@Normal@dataset[All, "losses"]; draws = Values@Normal@dataset[All, "draws"];  Histogram[FindClusters[(wins - losses)/totalfights], {0.038},  AxesLabel ->    Map[Style[#, FontFamily -> "Helvetica Neue",       FontSize -> 14] &, {"Wins-Losses Ratio", "Boxers"}],  ChartLegends -> {"Great Boxers", "Good Boxers", "\"Bad\" Boxers"},   ImageSize -> 500]
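
The quantities totalfights and clusters used in the code here and below come from the attached notebook; a plausible reconstruction from the dataset fields might be:

(* a reconstruction: assumes three clusters, as in the chart legends *)
totalfights = wins + losses + draws;
clusters = ClusteringComponents[(wins - losses)/totalfights, 3];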

Another way to compare the opponents’ records is by plotting a BubbleChart:

bubbles =    MapThread[    Labeled[List[#1, #2, #3],       Style[#4, Bold, FontFamily -> "Helvetica Neue", FontSize -> 10,        FontColor -> Black, Directive[Opacity[0.7]]],       RandomChoice[{Top, Bottom, Left, Right, Center}]] &, {losses,      wins, totalfights, namesFamily}];  bubblesClusters =    Table[bubbles[[Flatten@Position[clusters, i]]], {i, 3}];  BubbleChart[bubblesClusters, PlotTheme -> "Detailed",   AspectRatio -> 1/GoldenRatio, BubbleScale -> "Diameter",   ChartBaseStyle -> Directive[Opacity[0.7]],  ChartLegends ->    Placed[{"Great Boxers", "Good Boxers", "\"Bad\" Boxers"}, Bottom],  PlotLabel ->    Style["Wins vs. Losses", Bold, FontFamily -> "Helvetica Neue",     FontSize -> 18],  FrameLabel -> {Style["Losses", Bold, FontFamily -> "Helvetica Neue",      FontSize -> 18],     Style["Wins", Bold, FontFamily -> "Helvetica Neue",      FontSize -> 18]},  ImageSize -> 610]
Wins vs. Losses

Under such a classification method, Ali is one of the greatest (as I expected), but Moore is just a “good” boxer, even if he holds the record number of wins. Although this is a nice way to compare boxers, one should be cautious—for example, I noticed that Spinks is classified as a “bad” boxer even though he beat Ali once.

Before concluding the opponents analysis, I will plot Ali's weight over his career and compare it with that of his rivals using DateListPlot:

Fight Dates

As one should expect, Ali gained weight over the course of his career. And he had one really heavy opponent, Buster Mathis, who weighed over 250 pounds at the end of his career.

Finally, I would like to point out a fun fact that I discovered thanks to the amazing amount of knowledge built into the Wolfram Language. After Ali won his first world heavyweight title in 1964, there was a little boom of babies named Cassius, who are now around 52 years old. There would probably be even more people called Cassius now if he hadn't changed his name to Muhammad Ali:

ListLinePlot[  Partition[   Flatten@Entity["GivenName", {"Cassius", "UnitedStates", "male"}][     EntityProperty["GivenName", "GivenNameDistribution"]], 2],   AxesLabel -> {Style["Years", FontFamily -> "Helvetica Neue",      FontSize -> 16],     Style["Percentage", FontFamily -> "Helvetica Neue",      FontSize -> 16]}, ImageSize -> 500]

The Wolfram Language offers so many possibilities to keep exploring Ali’s life. But I will stop here and encourage you to create your own visualizations and share your ideas on Wolfram Community’s Ali thread.

Download this post as a Computable Document Format (CDF) file along with the accompanying dataset. (Note that you should save the dataset file in the same folder as the notebook in order to load the data needed for the visualizations.) New to CDF? Get your copy for free here.

Automotive Reliability in the Wolfram Language http://blog.wolfram.com/2017/01/13/automotive-reliability-in-the-wolfram-language/ http://blog.wolfram.com/2017/01/13/automotive-reliability-in-the-wolfram-language/#comments Fri, 13 Jan 2017 17:39:38 +0000 Nick Lariviere http://blog.internal.wolfram.com/?p=34593 This post originally appeared on Wolfram Community, where the conversation about reliable cars continues. Be sure to check out that conversation and more—we can’t wait to see what you come up with!

For the past couple of years, I’ve been playing with, collecting and analyzing data from used car auctions in my free time with an automotive journalist named Steve Lang to try and get an idea of what the used car market looks like in terms of long-term vehicle reliability. I figured it was about time that I showed off some of the ways that the Wolfram Language has allowed us to parse through information on over one million vehicles (and counting).

Vehicle Class Quality Index Rating

I’ll start off by saying that there isn’t anything terribly elaborate about the process we’re using to collect and analyze the information on these vehicles; it’s mostly a process of reading in reports from our data provider (and cleaning up the data), and then cross-referencing that data with various automotive APIs to get additional information. This data then gets dumped into a database that we use for our analysis, but having all of the tools we need built into the Wolfram Language makes the entire operation something that can be scripted—which greatly streamlines the process. I’ll have to skip over some of the details or this will be a very long post, but I’ll try to cover most of the key elements.

The data we get comes in from a third-party provider that manages used car auctions around the country (unfortunately, our licensing agreement doesn’t allow me to share the data right now), but it’s not very computable at first (the data comes in as a text file report once a week):

text = "01/02/2017 Schaumburg 128 1999 Acura CL 3.0 2D Coupe 131612 \   19UYA2256XL014922 Green A,L,R,Y          9:00 AM Illinois Announcements: Major Transmission Defect, \   Miles Exempt          01/02/2017 Hickory 33 1997 Acura CL 2.2 2D Coupe 217449 \   19UYA1255VL011890 Blue A,L,R,Y          2:00 PM North Carolina Announcements: Major Transmission Defect         01/02/2017 Ft. Bend 46 1995 Acura Integra LS 4D Sedan 98124 \   JH4DB7654SS013119 Green A,R          9:30 AM Texas Announcements: Miles Exempt          01/03/2017 Kansas City 57 1992 Acura Integra LS 4D Sedan \ 174537 \   JH4DB1653NS000122 T/A Yellow A,Y          2:00 PM Kansas Announcements: Structural Damage, Title Absent \      ";

Fortunately, parsing this sort of log-like data into individual records is easy in the Wolfram Language using basic string patterns:

vinPattern = RegularExpression["[A-Z\\d]{17}"];
recordPattern = DatePattern[{"Month", "Day", "Year"}] ~~ __ ~~ vinPattern ~~ __ ~~ "Announcements:" ~~ __ ~~ "\n";
StringCases[text, Shortest[recordPattern]]

Then it’s mostly a matter of cleaning up the individual records into something more standardized (I’ll spare you some of the hacky details due to artifacts in the data feed). You’ll end up with something like the following:

record = <|"Date" -> "2017-01-02", "ModelYear" -> 1999,       "Make" -> "Acura", "Model" -> "CL",     "TransmissionIssue" -> True,       "EngineIssue" -> False, "Miles" -> 131612,       "VIN" -> "19UYA2256XL014922"|>;

From there, we use the handy Edmunds vehicle API to get more information on an individual vehicle using their VIN decoder:

lookupVIN[vin_String] :=
 ImportString[
  URLFetch["https://api.edmunds.com/api/vehicle/v2/vins/" <> vin <> "?fmt=json&api_key=" <> apikey],
  "JSON"]


We then insert the records into an HSQL database (conveniently included with Mathematica), resulting in an easy way to search for the records we want:

SQLSelect[$DataBase, $Table, {"Year", "Miles", "Transmission"},    And[SQLColumn["Make"] == "Nissan", SQLColumn["Model"] == "Cube",     SQLColumn["Year"] <= 2010]] // Short
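
The insertion step isn’t shown here; with DatabaseLink it might look roughly like this (the database path, table and column names below are just placeholders for illustration):

Needs["DatabaseLink`"]
(* illustrative standalone HSQL database and table layout *)
conn = OpenSQLConnection[JDBC["HSQL(Standalone)", "auctiondata"]];
SQLInsert[conn, "AUCTIONS",
 {"SALEDATE", "MODELYEAR", "MAKE", "MODEL", "MILES", "VIN", "TRANSMISSION"},
 {record["Date"], record["ModelYear"], record["Make"], record["Model"], record["Miles"], record["VIN"], record["TransmissionIssue"]}]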

From there, we can take a quick look at metrics using larger datasets, such as the number of transmission issues for a given set of vehicles for different model years:

Number of transmission issues

Or a histogram of those issues broken down by vehicle mileage:

Issues by vehicle mileage

It also lets us look at industry-wide trends, so we can develop a baseline for what the expected rate of defects for an average vehicle (or vehicle of a certain class) should be:

Yearly defect ratio

lm = LinearModelFit[modeldata, {date, modelyear}, {date, modelyear}]

We can then compare a given vehicle to that model:

Powertrain issue rate

We then use that model, as well as other information, to generate a statistical index. We use that index to give vehicles an overall quality rating based on their historical reliability, which ranges from a score of 0 (chronic reliability issues) to 100 (exceptional reliability), with the industry average hovering right around 50:

Full-size
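
The statistical machinery behind the index is beyond the scope of this post, but purely as a toy illustration of the idea (scoring an observed defect rate against the rate the model predicts, with an average vehicle landing at 50), one could write something like the following; the scaling here is an assumption for illustration, not the formula we actually use:

(* toy index: 50 for an average vehicle, 100 for zero observed defects,
   0 for twice the predicted defect rate or worse *)
qualityIndex[observedRate_, predictedRate_] :=
 Clip[Round[100 (1 - observedRate/(2 predictedRate))], {0, 100}]

qualityIndex[0.03, 0.03]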

We also use various gauges to put together informative visualizations of defect rates and the overall quality:

MileageGauge[mileage_, opts___] :=
 With[{color = Which[
     mileage <= 100000, Lighter[Red],
     100000 <= mileage <= 120000, Lighter[Yellow],
     120000 <= mileage <= 130000, Lighter[Blue],
     True, Lighter[Green]]},
  HorizontalGauge[{mileage, $IndustryAverageMileage}, {50000, 200000},
   ScalePadding -> {.08, .1},
   GaugeLabels -> {
     Placed[Style[Row[{"Model average: ", AccountingForm[mileage, DigitBlock -> 3], " miles"}], FontSize -> 20], Above],
     Placed[Style[Row[{"Industry average: ", AccountingForm[$IndustryAverageMileage, DigitBlock -> 3], " miles"}], FontSize -> 16], Below]},
   ScaleRanges -> {If[mileage < $IndustryAverageMileage, {mileage, $IndustryAverageMileage}, {$IndustryAverageMileage, mileage}]},
   ScaleRangeStyle -> color, GaugeStyle -> {Darker[Red], Black},
   ImageSize -> 500, ScaleDivisions -> {7, 7},
   GaugeFaceStyle -> Lighter[color, .8],
   opts]]

announcementGauge[value_] :=
 AngularGauge[value, {0, .3},
  GaugeLabels -> Style[ToString[N[value, 3]*100] <> "%", 15],
  PlotLabel -> Style["Transmission Issues", 15],
  ScaleRanges -> {
    {0, $IndustryAverageIssueRates - .01} -> Lighter[Green],
    {{$IndustryAverageIssueRates - .01, $IndustryAverageIssueRates + .01}, {0, .2}},
    {$IndustryAverageIssueRates + .01, 1.5*$IndustryAverageIssueRates} -> Lighter[Yellow],
    {1.5*$IndustryAverageIssueRates, 1} -> Lighter[Red]},
  GaugeStyle -> {RGBColor[{.15, .4, .6}], RGBColor[{.5, .5, .5}]}]

There is a lot more we do to pull all of this together (like the Wolfram Language templating we use to generate the HTML pages and reports), and honestly, there is a whole lot more we could do (my background in statistics is pretty limited, so most of this is pretty rudimentary, and I’m sure others here may already have ideas for improvements in presentation for some of this data). If you’d like to take a look at the site, it’s freely available (Steve has a nice introduction to the site here, and he also writes articles for the page related to practical uses for our findings).
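
As a flavor of that templating step, here is a purely illustrative fragment (not one of the actual templates behind the site):

(* a hypothetical HTML snippet built with StringTemplate *)
tpl = StringTemplate["<h2>`Make` `Model`</h2><p>Quality index: `Index`</p>"];
TemplateApply[tpl, <|"Make" -> "Acura", "Model" -> "CL", "Index" -> 62|>]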

Our original site was called the Long-Term Quality Index, which is still live but shows off my lack of experience in HTML development, so we recently rolled out our newer, WordPress-based venture Dashboard Light, which also includes insights from our auto journalist on his experiences running an independent used car dealership.

This is essentially a two-man project that Steve and I handle in our (limited) free time, and we’re still getting a handle on presenting the data in a useful way, so if anyone has any suggestions or questions about our methodology, feel free to reach out to us.

Cheers!

Continue the conversation at Wolfram Community.

Recent Wolfram Technology Books http://blog.wolfram.com/2017/01/09/recent-wolfram-technology-books/ http://blog.wolfram.com/2017/01/09/recent-wolfram-technology-books/#comments Mon, 09 Jan 2017 17:48:35 +0000 John Moore http://blog.internal.wolfram.com/?p=34497 We’re always excited to see new books that explore new ways to use Wolfram technologies. Authors continue to find inventive ways to think with the Wolfram Language. A variety of new Wolfram technology books have been published over the past few months. We hope that you’ll find something on this list to support your new year’s resolution to upgrade your skills. (Update: also look for the newly released Chinese translation of Stephen Wolfram’s An Elementary Introduction to the Wolfram Language.)

Toolbox for the Mathematica Programmers, Option Valuation under Stochastic Volatility II and CRC Standard Curves and Surfaces with Mathematica

Toolbox for the Mathematica Programmers

This new guide from Viktor Aladjev and V. A. Vaganov outlines a modular approach to programming with the Wolfram Language. Providing over 800 tools that can be incorporated into a variety of projects, Toolbox for the Mathematica Programmers will be useful for students and seasoned programmers alike.

Option Valuation under Stochastic Volatility II: With Mathematica Code

In this second volume of his series about quantitative finance, Alan L. Lewis’s Option Valuation under Stochastic Volatility II: With Mathematica Code expands his original focus to include jump diffusions. The finance industry is increasingly relying on computational analysis to model risk and track customer data. Lewis’s volume is a welcome addition to the literature of the field, of interest for both researchers and investors/traders looking to learn more about computational thinking. Topics covered include spectral theory for jump diffusions, boundary behavior for short-term interest rate models, modeling VIX options, inference theory and discrete dividends.

CRC Standard Curves and Surfaces with Mathematica

The third edition of the popular CRC Standard Curves and Surfaces with Mathematica is an indispensable reference text for anyone who works with curves and surfaces, from engineers to graphic designers. With new illustrations in almost every chapter, the updated version contains nearly 1,000 visualizations, depicting nearly every geometrical figure used today. It also includes a CD with a series of interactive Computable Document Format (CDF) files.



Butterworth & Bessel Filters, Automation of Finite Element Methods and Computational Proximity

Butterworth & Bessel Filters

T. D. McGlone provides a useful introduction to Butterworth and Bessel (aka Thomson) filter functions. With an overview of mathematical functions, topology choices and component selection based on sensitivity criteria, Butterworth & Bessel Filters will be particularly useful for engineers.

Automation of Finite Element Methods

Another text for engineers, Automation of Finite Element Methods provides an introduction to developing virtual prediction techniques. New finite elements need to be created for individual purposes, which can be time-consuming. Authors Jože Korelc and Peter Wriggers outline an approach to automating this process through Wolfram Language programming.

Computational Proximity: Excursions in the Topology of Digital Images

Based on James F. Peters’s popular graduate course on the topology of digital images, Computational Proximity: Excursions in the Topology of Digital Images introduces the concept of computational proximity as an algorithmic approach to finding nonempty sets of points that are either close to each other or far apart. Peters discusses the applications of this concept in computer vision, multimedia, brain activity, biology, social networks and cosmology.

Wolfram 语言入门

Now available as well is the Chinese translation of Stephen Wolfram’s An Elementary Introduction to the Wolfram Language: Wolfram 语言入门. The translated edition includes all of the material that made the English edition popular with anyone wanting to learn to program in the Wolfram Language. Look out for translations into additional languages in the future!
Our Readers’ Favorite Stories from 2016 http://blog.wolfram.com/2017/01/03/our-readers-favorite-stories-from-2016/ http://blog.wolfram.com/2017/01/03/our-readers-favorite-stories-from-2016/#comments Tue, 03 Jan 2017 18:14:42 +0000 John Moore http://blog.internal.wolfram.com/?p=34441 Story image collage

It’s been a busy year here at the Wolfram Blog. We’ve written about ways to avoid the UK’s most unhygienic foods, exciting new developments in mathematics and even how you can become a better Pokémon GO player. Here are some of our most popular stories from the year.

Today We Launch Version 11!

Geo projections in the Wolfram Language

In August, we launched Version 11 of Mathematica and the Wolfram Language. The result of two years of development, Version 11 includes exciting new functionality like the expanded map generation enabled by satellite images. Here’s what Wolfram CEO Stephen Wolfram had to say about the new release in his blog post:

OK, so what’s the big new thing in Version 11? Well, it’s not one big thing; it’s many big things. To give a sense of scale, there are 555 completely new functions that we’re adding in Version 11—representing a huge amount of new functionality (by comparison, Version 1 had a total of 551 functions altogether). And actually that function count is even an underrepresentation—because it doesn’t include the vast deepening of many existing functions.

Finding the Most Unhygienic Food in the UK

Map of Oxford

Using the Wolfram Language, John McLoone analyzes government data about food safety inspections to create visualizations of the most unhygienic food in the UK. The post is a treasure trove of maps and charts of food establishments that should be avoided at all costs, and includes McLoone’s greatest tip for food safety: “If you really care about food hygiene, then the best advice is probably just to never be rude to the waiter until after you have gotten your food!”

Finding Pokémon GO’s Shortest Tour to Compute ’em All!

Poké-Spikey

Bernat Espigulé-Pons creates visualizations of Pokémon across multiple generations of the game and then uses WikipediaData, GeoDistance and FindShortestTour to create a map to local Pokémon GO gyms. If you’re a 90s kid or an avid gamer, Espigulé-Pons’s Pokémon genealogy is perfect gamer geek joy. If you’re not, this post might just help to explain what all those crowds were doing in your neighborhood park earlier this year.

Behind Wolfram|Alpha’s Mathematical Induction-Based Proof Generator

Induction-based proof generator

Connor Flood writes about creating “the world’s first online syntax-free proof generator using induction,” which he designed using Wolfram|Alpha. With a detailed explanation of the origin of the concept and its creation from development to prototyping, this post provides a glimpse into the ways that computational thinking applications are created.

An Exact Value for the Planck Constant: Why Reaching It Took 100 Years

EntityValue[{Pierre-Simon Laplace, Adrien-Marie Legendre, Joseph-Louis Lagrange, Antoine-Laurent de Lavoisier, Marquis de Condorcet}, {"Entity","Image"}]//Transpose//Grid

Wolfram|Alpha Chief Scientist Michael Trott returns with a post about the history of the discovery of the exact value of the Planck constant, covering everything from the base elements of superheroes to the redefinition of the kilogram.

Launching the Wolfram Open Cloud: Open Access to the Wolfram Language

Wolfram Open Cloud, Programming Lab and Development Platform

In January of 2016, we launched the Wolfram Open Cloud to—as Stephen Wolfram says in his blog post about the launch—“let anyone in the world use the Wolfram Language—and do sophisticated knowledge-based programming—free on the web.” You can read more about this integrated cloud-based computing platform in his January post.

On the Detection of Gravitational Waves by LIGO

Gravitational waves GIF

In February, the Laser Interferometer Gravitational-Wave Observatory (LIGO) announced that it had confirmed the first detection of a gravitational wave. Wolfram software engineer Jason Grigsby explains what gravitational waves are and why the detection of them by LIGO is such an exciting landmark in experimental physics.

Computational Stippling: Can Machines Do as Well as Humans?

Pointilism image of a beach

Silvia Hao uses Mathematica to recreate the renaissance engraving technique of stippling: a kind of drawing style using only points to mimic lines, edges and grayscale. Her post is filled with intriguing illustrations and is a wonderful example of the intersection of math and illustration/drawing.

Newest Wolfram Technologies Books Cover Range of STEM Topics

Wolfram tech books

In April, we reported on new books that use Wolfram technology to explore a variety of STEM topics, from data analysis to engineering. With resources for teachers, researchers and industry professionals and books written in English, Japanese and Spanish, there’s a lot of Wolfram reading to catch up on!

Announcing Wolfram Programming Lab

Wolfram Programming Lab startup screen

The year 2016 also saw the launch of Wolfram Programming Lab, an interactive online platform for learning to program in the Wolfram Language. Programming Lab includes a digital version of Stephen Wolfram’s 2016 book, An Elementary Introduction to the Wolfram Language, as well as Explorations for programmers already familiar with other languages and numerous examples for those who learn best by experimentation.

Gardening à la Gardner http://blog.wolfram.com/2016/12/28/gardening-a-la-gardner/ http://blog.wolfram.com/2016/12/28/gardening-a-la-gardner/#comments Wed, 28 Dec 2016 17:45:38 +0000 Kathryn Cramer http://blog.internal.wolfram.com/?p=34414 When looking through the posts on Wolfram Community, the last thing I expected was to find exciting gardening ideas.

The general idea of Ed Pegg’s tribute post honoring Martin Gardner, “Extreme Orchards for Gardner,” is to find patterns for planting trees in configurations with constraints like “25 trees to get 18 lines, each having 5 trees.” Most of the configurations look like ridiculous ideas of how to plant actual trees. For example:

One of Pegg's orchard plans

I have a seven-acre apple orchard with 200+ trees in New York’s Adirondack Park, and so I read “Extreme Orchards for Gardner” as a gardener first. Of course, Pegg’s post was never intended as a proposal for how to plant actual orchards, but as I live in the middle of an orchard, I can’t help wondering, what if you did plant orchards this way?

When considering this as an actual planting pattern, we should borrow that character ubiquitous in physics: the observer. To the observer on the ground, only the center cluster would look much like an orchard; the trees at the vertices would appear to have nothing much to do with the rest.

One of my favorite physics jokes is the one about the theoretical physicist who loses his job as a professor and has to go to work as a milkman. (Once upon a time, milk was delivered to people’s houses by “milkmen.”) After a few weeks on the job, the physicist just can’t stand not being able to give lectures. So he assembles his colleagues in front of a blackboard, draws a circle on the board and begins by saying, “Consider a spherical cow of uniform density.” The representation of orchards by Martin Gardner, Branko Grünbaum and such in the usual rendition of the orchard planting problem is to real orchards as spherical cows are to the animals who produce the milk you drink. So, to some extent, the fact that trees are not points and need a certain spacing is an unfair criticism. Nonetheless, since every way I look out my windows I see real apple trees, I feel compelled to point this out. (I think Grünbaum, who was my professor many years ago and who encouraged us to reality-test our mathematical ideas, would approve.)

This is even more true for this configuration involving rows of six “trees.” Just how much land would it take to plant an orchard like this using real trees? No one would do this.

One of Pegg's orchard diagrams in color

Pegg also shows some more possible configurations—like these, in which the lines pass through exactly four trees each. For actual, rather than hypothetical, trees, some of these look a bit more workable.

Range of possible orchard configurations

My own apple trees, planted in the mid-1980s, are planted in rows, which is practical if a bit boring.

Cramer's orchard

There are pragmatic constraints involved in planting apple trees. The orchardist needs access to the trees from two sides, both for maintenance (pruning, spraying, etc.) and to harvest apples. Assuming semi-dwarf trees, this involves aisles with a minimum width of about 22 feet (ca. 6.7 meters), starting from the center of each trunk. The trees should be planted no closer than intervals of 16 feet (ca. 4.9 meters) to give them enough air and light.
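
Those two numbers already pin down the land budget per tree; a quick, purely illustrative check in the Wolfram Language:

(* roughly 16 ft between trees along a row and a 22 ft aisle per row:
   about 16 x 22 = 352 square feet per semi-dwarf tree, on the order of 120 trees per acre *)
UnitConvert[Quantity[16, "Feet"] Quantity[22, "Feet"], "Acres"]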

Only configurations in which there is a small variation in the segments connecting trees could realistically be planted as something that would, on the ground, resemble an orchard. Most of the configurations would require an enormous amount of land and so are mostly mathematical abstractions rather than something one could really implement.

But the configuration on the lower left in Pegg’s four-tree grouping looks like something one could actually plant. Like so:

Alternate orchard design

One advantage I see in the configurations with a small variation in segment length is that planting a portion of the orchard as pentagons within pentagons reduces the amount of grass under the trees to be maintained, thus significantly reducing mowing and therefore labor and gasoline costs. So it is not completely foolish to consider planting at least a small orchard this way.

I am attracted to the 25-tree pentagon configuration because of its empty center circle, creating a private grove space. Taking into account an air gap around the outside, my guess is that a circle in the field of about 125 feet in diameter should be big enough. That center circle could, for example, hold a very nice circle of wildflowers 20 feet across for bee forage, maybe some beehives in the center, and still leave room for equipment to navigate.

Another advantage: this would be a good layout for planting five types of trees in groups of five. They could then be easily identified in their mini-groves and harvested together. The more I thought about it, the more this became something I might actually want to do. I started shopping online for heritage varieties of apple trees, looking around at my farm for the right place to put the new trees, imagining new designs…. Hmm.

Pegg, on the other hand, is more concerned with finding new solutions to the abstract version of the orchard problem, which are indeed quite beautiful, if impractical for the planting of trees:

Abstract orchard design

These contemplations make me want to go deeper into mathematical patterns to see what else might be plantable. Maybe this last “orchard” plot might work with bulbs.

The Semantic Representation of Pure Mathematics http://blog.wolfram.com/2016/12/22/the-semantic-representation-of-pure-mathematics/ http://blog.wolfram.com/2016/12/22/the-semantic-representation-of-pure-mathematics/#comments Thu, 22 Dec 2016 17:48:36 +0000 Eric Weisstein http://blog.internal.wolfram.com/?p=34242 Graph of relationships between spaces

Introduction

Building on thirty years of research, development and use throughout the world, Mathematica and the Wolfram Language continue to be both designed for the long term and extremely successful in doing computational mathematics. The nearly 6,000 symbols built into the Wolfram Language as of 2016 allow a huge variety of computational objects to be represented and manipulated—from special functions to graphics to geometric regions. In addition, the Wolfram Knowledgebase and its associated entity framework allow hundreds of concrete “things” (e.g. people, cities, foods and planets) to be expressed, manipulated and computed with.

Despite a rapidly and ever-increasing number of domains known to the Wolfram Language, many knowledge domains still await computational representation. In his blog “Computational Knowledge and the Future of Pure Mathematics,” Stephen Wolfram presented a grand vision for the representation of abstract mathematics, known variously as the Computable Archive of Mathematics or Mathematics Heritage Project (MHP). The eventual goal of this project is no less than to render all of the approximately 100 million pages of peer-reviewed research mathematics published over the last several centuries into a computer-readable form.

In today’s blog, we give a glimpse into the future of that vision based on two projects involving the semantic representation of abstract mathematics. By way of further background and motivation for this work, we first briefly discuss an international workshop dedicated to the semantic representation of mathematical knowledge, which took place earlier this year. Next, we present our work on representing the abstract mathematical concepts of function spaces and topological spaces. Finally, we showcase some experimental work on representing the concepts and theorems of general topology in the Wolfram Language.

The Semantic Representation of Mathematical Knowledge Workshop

In February 2016, the Wolfram Foundation, together with the Fields Institute and the IMU/CEIC working group for the creation of a Global Digital Mathematics Library, organized a Semantic Representation of Mathematical Knowledge Workshop designed to pool the knowledge and experience of a small and select group of experts in order to produce agreement on a forward path toward the semantic encoding of all mathematics. This workshop was sponsored by the Alfred P. Sloan Foundation and held at the Fields Institute in Toronto. The workshop included approximately forty participants who met for three days of talks and discussions. Participants included specialists from various fields, including:

  • computer algebra
  • interactive and automatic proof systems
  • mathematical knowledge representation
  • foundations of mathematics
  • practicing pure mathematics

Among the many accomplished and knowledgeable participants (a complete list of whom, together with the complete schedule of events, may be viewed on the workshop website), Georges Gonthier and Tom Hales shared their experience on the world’s largest extant formal proofs (the Feit–Thompson odd order theorem and the Kepler conjecture, respectively); Harvey Friedman, Dana Scott and Yuri Matiyasevich brought expertise on mathematical foundations, incompleteness and undecidability; Jeremy Avigad and John Harrison shared their knowledge and experience in designing and implementing two of the world’s most powerful theorem provers; Bruno Buchberger and Wieb Bosma contributed extensive knowledge on computational mathematics; Fields Medal winners Stanislav Smirnov and Manjul Bhargava expounded on the needs of practicing mathematicians; and Ingrid Daubechies and Stephen Wolfram shared their thoughts and knowledge on many technical and organizational challenges of the problem as a whole.

Workshop participants

As one might imagine, the list of topics discussed at the workshop was quite extensive. In particular, it included type theory, the calculus of constructions, homotopy type theory, mathematical vernacular, partial functions and proof representations, together with many more. The following word cloud, compiled from the text of hundreds of publications by the workshop participants, gives a glimpse of the main topics:

Topics discussed at the workshop

Recordings of workshop presentations can be viewed on the workshop video archive, and a white paper discussing the workshop’s outcomes is also available. In addition, because of the often under-emphasized yet vital importance of the subject for the future development (and practice) of mathematics in the coming decades, 18 participants were interviewed on the technological and scientific needs for achieving such a project, culminating in a 90-minute video (excerpts also available in a 9-minute condensed version) that highlights the visions and thoughts of some of the world’s most important practitioners. We thank filmmaker Amy Young for volunteering her time and talents in the compilation and production of this unique glimpse into the thoughts of renowned mathematicians and computer scientists from around the world, which we sincerely hope other viewers will find as inspiring and enlightening as we do.

Computational Encoding of Function Spaces

The eCF project encoded continued fraction terminology, theorems, literature and identities in computational form, demonstrating that Wolfram|Alpha and the Wolfram Language provide a powerful framework for representing, exposing and manipulating mathematical knowledge.

While the theory of continued fractions contains both high-level and abstract mathematics, it represents only a tiny first step toward Stephen Wolfram’s grand vision for computational access to all of mathematics and the dynamic use of mathematical knowledge. Our next step down this challenging path therefore sought to encode within the Wolfram Language and Wolfram|Alpha entity-property framework a domain of more abstract and inhomogeneous mathematical objects having nontrivial properties and relations. The domain chosen for this next step was the important and fairly abstract branch of mathematics known as functional analysis.

That step posed a number of new challenges, among them the need for graduate-level mathematical knowledge in the domain of interest, formulation of entity names that “naturally” contain parameters and encode additional information (say, measure spaces) and the introduction of stub extensions to the Wolfram Language.

Work was carried out from December 2014–July 2016 and consisted of knowledge curation in three interconnected knowledge domains: "FunctionSpace", "TopologicalSpaceType" and "FunctionalAnalysisSource", together with the development of framework extensions to support them. This functionality was recently made available through the Wolfram Language entity framework and consists of the following content:

  • 126 function spaces (many parametrized); 45 properties
  • 39 topological space types; 14 properties
  • 147 functional analysis sources; 49 properties
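
Once the domains are available in your version of the Wolfram Language, these counts can be checked directly from the entity framework (a quick sanity check rather than part of the curated content):

(* number of curated function spaces, and number of properties defined for the domain *)
{EntityValue["FunctionSpace", "EntityCount"], Length[EntityValue["FunctionSpace", "Properties"]]}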

Full availability on the Wolfram|Alpha website is expected by early January 2017.

Function Spaces

Two underlying concepts in functional analysis are those of the function space and the topological space. A function space is a set of functions of a given kind from one set to another. Common examples of function spaces include Lp spaces (Lebesgue spaces; defined using a natural generalization of the p-norm for finite-dimensional vector spaces) and Ck spaces (consisting of functions whose derivatives exist and are continuous up to kth order).

As a simple first example in accessing this functionality, we can use RandomEntity to return a sample list of function spaces:

RandomEntity["FunctionSpace", 5]

Similarly, EntityValue can be used to access curated properties for a given space:

TextGrid[
 With[{props = {"AlternateNames", "AssociatedPeople", "BesselInequality", "CauchySchwarzInequality", "Classes", "Classifications", "DualSpace", "InnerProduct", "IsomorphicSpaces", "MeasureSpace", "Norm", "RelatedResults", "RelationshipGraph", "Timeline", "TriangleInequality", "TypesetDescription"}},
  Transpose[{props, EntityValue[Entity["FunctionSpace", {{"LebesgueL", {{"Reals", \[FormalN]}, {"LebesgueMeasure", \[FormalN]}}}, 2}], props]}]],
 Dividers -> All, Background -> {Automatic, {{LightBlue, None}}}, BaseStyle -> 8, ItemSize -> {{13, 76}, Automatic}]

As can be seen in various properties in this table, some mathematical representations required the introduction of new symbols not (yet) present in the Wolfram Language. This was accomplished by introducing them into a special PureMath` context. For example, after evaluating the above table, the following “pure math extension symbols” appear:

?PureMath`*

For now, these constructs are just representational. However, they are not merely placeholders for mathematical concepts/computational structures, but also have the benefit of enhancing human readability by automatically adding traditional mathematical typesetting and annotations. This can be seen, for example, by comparing the raw semantic expressions in the table above with those displayed on the Wolfram|Alpha website:

Lebesgue space Wolfram|Alpha

In the longer term, many such concepts may be instantiated in the Wolfram Language itself. As a result, both this and any similar semantic projects to follow will help guide the inclusion and implementation of computational mathematical functionality within Mathematica and the Wolfram Language.

A slightly more involved example demonstrates how the entity framework can be used to construct programmatic queries. Here, we obtain a list of all curated function spaces associated with mathematician David Hilbert:

EntityList[  Entity["FunctionSpace",    EntityProperty["FunctionSpace", "AssociatedPeople"] ->     ContainsAny[{Entity["Person", "DavidHilbert::8r974"]}]]]

One interesting property from the table above that warrants a bit more scrutiny is "RelationshipGraph". This consists of a hierarchical directed graph connecting all curated topological space types, where nodes A and B are connected by a directed edge AB if and only if “S is a topological space of type A” implies “S is a topological space of type B”, and with the additional constraint that all nodes are connected only via paths maximizing the number of intermediate nodes. For each function space, this graph also indicates (in red) topological space types to which a given space belongs. For example, the Lebesgue space L2 has the following relationship graph:

EntityValue[Entity["FunctionSpace", {{"LebesgueL", {{"Reals", \[FormalN]}, {"LebesgueMeasure", \[FormalN]}}}, 2}], "RelationshipGraph"]

Here we show a similar graph in a slightly more streamlined and schematic form:

Graph of relationships between spaces

This graph corresponds to the following topological space type memberships:

Classifications

While portions of this graph appear in the literature, the above graph represents, to our knowledge, the most complete synthesis of the hierarchical structure of topological vector spaces available. (The preceding notwithstanding, it is important to keep in mind that the detailed structure depends on the detailed conventions adopted in the definitions of various topological spaces—conventions that are not uniform across the literature.) A number of interesting facts can be gleaned from the graph. In particular, it can immediately be seen that the well-known Hilbert and Banach spaces (which have high-level structural properties whose relaxations lead to more general spaces) fall at the top of the hierarchical heap together with “inner product space.” On the other hand, topological vector spaces are the “most generic” types in some heuristic sense.

During the curation process, we have taken great care that function space properties are correct for all parameter values. This can be illustrated using code like the following to generate a tab view of Lebesgue spaces for various values of its parameter p and noting how properties adjust accordingly:

With[{props = {"Norm", "TypesetDescription", "Classifications"},    lebesgue =     Entity["FunctionSpace", {{"LebesgueL", {{"Reals", \[FormalN]}, \ {"LebesgueMeasure", \[FormalN]}}}, \[FormalP]}]},   TabView[Table[    space ->      Grid[Transpose[{props, EntityValue[space, props]}],       Dividers -> All, Alignment -> {Left, Center}], {space, {lebesgue,       lebesgue /. \[FormalP] -> 1/3, lebesgue /. \[FormalP] -> 3,       lebesgue /. \[FormalP] -> "Infinity"}}]]]

One of the beautiful things about computational encoding (and part of the reason it is so desirable for mathematics as a whole) is that known results can be easily tested or verified. (Similarly, and maybe even more importantly, new propositions can be easily formulated and explored.) As an example, consider the duality of Lebesgue spaces Lp and Lq for 1/p+1/q=1 with p≥1. First, define a variable to represent the Lp entity:

lp = Entity["FunctionSpace", {{"LebesgueL", {{"Reals", \[FormalN]}, {"LebesgueMeasure", \[FormalN]}}}, \[FormalP]}]

Now, use the "DualSpace" property (which may be specified either as a string or via a fully qualified EntityProperty["FunctionSpace", "DualSpace"] object, the latter of which may be given directly in that form or the corresponding formatted form dual space) to obtain the dual entity:

lq = EntityValue[lp, "DualSpace"]

As can be seen, this formulation allows computation to be performed and expressed through the elegant paradigm of symbolic transformation of the entity canonical name. Taking the dual space of Lq in turn then gives:

EntityValue[lq, "DualSpace"]

Finally, applying symbolic simplification to the entity canonical name:

% // Simplify

This verifies we have obtained the same space we originally started with:

% == lp

In other words, that the double dual (Lp)**, where * denotes the dual space, is equivalent to Lp. (Function spaces with this property are said to be reflexive.)
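
For convenience, the steps above can be packaged into a small helper; the name reflexiveQ is ours and is not part of the curated framework, and the function simply repeats the double-dual computation and comparison just performed:

reflexiveQ[space_] :=
 space == Simplify[EntityValue[EntityValue[space, "DualSpace"], "DualSpace"]]

reflexiveQ[lp]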

It is also important to emphasize that the curation of the existing literature on function spaces is not always straightforward, as illustrated in particular by the myriad of (mutually conflicting) conventions used for the interrelated collection of function spaces known as Campanato–Morrey spaces:

cm = Cases[{#,       Or @@ (StringMatchQ[         Cases[CanonicalName[#], _String, \[Infinity]],          "*Campanato*" | "*Morrey*"])} & /@     EntityList["FunctionSpace"], {e_, True} :> e]

This challenge is made clear with the following table, whose creation required a meticulous study of the literature:

Table of Campanato–Morrey space definitions and notational conventions

As a result of multiple conventions, we chose in cases like this to include multiple, separate entities that are equivalent under appropriate (but possibly nontrivial) transformations of parameters and notations. For example:

Grid[Transpose[   Function[r, {cm[[r]],       EntityValue[cm[[r]], "TypesetDescription"]}]@{4, 5}],   Dividers -> All, ItemSize -> {{13, 68}, Automatic}, BaseStyle -> 10]

Topological Space Types

A topological space may be defined as a set of points and neighborhoods for each point satisfying a set of axioms relating the points and neighborhoods. The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness and convergence. Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. Common examples of topological vector spaces include the Banach space (a complete normed vector space) and the Hilbert space (an abstract Banach space possessing the structure of an inner product that allows length and angle to be measured). Topological spaces could be considered more abstract than function spaces (e.g. they are typically defined based on the existence of a norm as opposed to having a definite value for their norm). Being so general, topological spaces are a central unifying notion and appear in virtually every branch of modern mathematics. The branch of mathematics that studies topological spaces in their own right is called point-set topology or general topology.

EntityList can be used to see a complete list of curated topological space types:

EntityList["TopologicalSpaceType"]

Similarly, EntityValue[space type, "PropertyAssociation"] returns all curated properties for a given space:

TextGrid[List @@@    Normal[DeleteMissing[     EntityValue[Entity["TopologicalSpaceType", "HilbertSpace"],       "PropertyAssociation"]]], Dividers -> All,   Background -> {Automatic, {{LightBlue, None}}}, BaseStyle -> 8,   ItemSize -> {{12, 77}, Automatic}]
Hilbert space property table

While more could be said and done with topological space types, in this project this domain was primarily used as a convenient way to classify function spaces. However, as the second project to be discussed in this blog will show, additional exploratory work is currently being done that could result in the augmentation of the human- (but not computer-) readable descriptions of topological spaces with semantically encoded versions potentially even suitable for use with automated proof assistants or theorem provers.

Functional Analysis Sources

A final component added in this project was a set of cross-linked literature references that provide provenance and documentation for the various conventions (definitions etc.) adopted in our curated functional analysis datasets. These references can be searched based on the journal in which a paper appears, the year or decade it was published, the author or the language in which it was written:

For mathematicians who wish to explore the source of the data down to the page (theorem etc.) level, this information has also been encoded:

Entity["TopologicalSpaceType", "HilbertSpace"][  EntityProperty["TopologicalSpaceType", "References"]]

Finally, we can use this detailed reference information in a way that provides a convenient overview of both existing notational conventions and those we adopted in this project:

Entity["FunctionSpace", {{"LebesgueL", {{"Reals", \[FormalN]}, \ {"LebesgueMeasure", \[FormalN]}}}, 2}][  EntityProperty["FunctionSpace", "TypesetNotationsTable"]]

Encoding of Concepts and Theorems from Topology

The second project we discuss in this blog is the not-unrelated augmentation of the Wolfram Language to precisely represent the definitions of mathematical concepts, statements and proofs in the field of point-set topology. This was done by creating an “entity store” for general topology consisting of concepts and theorems curated from the second edition of James Munkres’s popular Topology textbook. Although this project did not construct an explicit proof language (suitable, say, for use by a proof assistant or automated theorem prover), it did result in the comprehensive representation of 216 concepts and 225 theorems from a standard mathematical text, which is a prelude to any work involving machine proof.

EntityStore is a function introduced in Version 11 of the Wolfram Language that allows custom entity-property data to be packaged, placed in the cloud via the Wolfram Data Repository and then conveniently loaded and used. To load and use the general topology entity store, first access it via its ResourceData handle, then make it available in the Wolfram Language by prepending it to the list of known entity stores contained in the global $EntityStores variable:

PrependTo[$EntityStores, ResourceData["General Topology EntityStore"]]

As can be seen in the output, a nice summary blob shows the contents of the registered stores (in this case, a list containing the single store we just registered), including the counts of entities and properties in each of its constituent domains. Now that the entity store is registered, the custom entities it contains can be used within the Wolfram Language entity framework just as if they were built in. For example:

RandomEntity["GeneralTopologyTheorem", 5]

Similarly, we can see a full list of currently supported properties for topological theorems using EntityValue:

EntityValue["GeneralTopologyTheorem", "Properties"]

Before proceeding, we perform a little context path manipulation to make output symbols format more concisely (slightly deferring a discussion of why we do this until the end of this section):

AppendTo[$ContextPath, "GeneralTopology`"];

A nice summary table can now be generated to show basic information about a given theorem:

Entity["GeneralTopologyTheorem", "HausdorffImpliesT1"]["SummaryGrid"]

"InputFormSummaryGrid" displays the same information as "SummaryGrid", but without applying the formatting rules we’ve used to make the concepts and theorems easily readable. It’s a good way to see the exact internal representation of the data associated with the entity. This can help us to understand what is going on when the formatting rules obscure this structure:

Entity["GeneralTopologyTheorem",    "HausdorffImpliesT1"]["InputFormSummaryGrid"]

While it’s pretty straightforward to understand the mathematical assertion being made here, let’s look at each property in detail. Here, for example, is the display name (“label”) used for the entity representing the above theorem in the entity store, formatted using InputForm to display quotes explicitly and thus emphasize that the label is a string:

Entity["GeneralTopologyTheorem", "HausdorffImpliesT1"][   "Label"] // InputForm

Similarly, here are alternate ways of referring to the theorem:

Entity["GeneralTopologyTheorem", "HausdorffImpliesT1"][   "AlternateNames"] // InputForm

… the universally quantified variables appearing at the top level of the theorem statement (i.e. these are the variables representing the objects that the theorem is “about”):

Entity["GeneralTopologyTheorem",    "HausdorffImpliesT1"]["QualifyingObjects"]

… the conditions these objects must satisfy in order for the theorem to apply:

Entity["GeneralTopologyTheorem", "HausdorffImpliesT1"][   "Restrictions"] // InputForm

… and the conclusion of the theorem:

Entity["GeneralTopologyTheorem", "HausdorffImpliesT1"][   "Statement"] // InputForm

Of course, we could have just as easily listed Math["IsHausdorff"][X] as a restriction to this theorem and Math["IsT1"][X] as the statement since the manner in which the hypotheses are split between "Restrictions" and "Statement" is not unique. However, while the details of the splitting are subject to style and readability, the mathematical content of the theorem as expressed through any of these subjective choices is equivalent.

Finally, we can retrieve metadata about the source from which the theorem was curated:

Entity["GeneralTopologyTheorem", "HausdorffImpliesT1"][   "References"] // InputForm

Now, backing up a bit, you may well wonder about expressions with structures such as Category[...] and Math[...] that you’ve seen above. Let’s take a look at one of them, but this time through a general topology concept instead of a theorem:

Entity["GeneralTopologyConcept", "IsHausdorff"][     "RelatedTheorems"]

Entity["GeneralTopologyConcept", "IsHausdorff"]["SummaryGrid"]

Some of these properties are shared with corresponding properties for theorems:

EntityValue["GeneralTopologyConcept", "Properties"]

You can see the common properties by intersecting the full lists of supported properties for concepts and theorems:

Intersection @@ (CanonicalName[      EntityValue[#, "Properties"]] & /@ {"GeneralTopologyConcept",      "GeneralTopologyTheorem"})

While properties are similar across topology theorems and concepts, there are some differences that should be addressed. "Arguments" for a concept takes the role of "QualifyingObjects" for a theorem. Just as theorems are thought of as applying to certain objects, concepts are thought of as functions that can be applied to certain objects. The output can be a Boolean value, as in this case. We would call such a concept a property or a predicate. Other concepts represent mathematical structures. For example, Math["MetricTopology"] takes a metric space as an argument and outputs the corresponding topology induced by the metric. The entity that corresponds to this math concept is metric topology.
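
That structure-valued concept can be looked up in the entity store just as we did for "IsHausdorff" above; here we assume, based on the formatted entity shown, that its canonical name is "MetricTopology":

Entity["GeneralTopologyConcept", "MetricTopology"]["SummaryGrid"]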

A "Restrictions" property for concepts is very similar to the corresponding property for theorems. And just as in the case with theorems, there’s nothing in principle stopping us from moving this condition from "Restrictions" and conjoining it to the output. The difference is that this can always be done for theorems, but it can only be done for concepts representing properties since the output is interpreted as having a truth value:

Entity["GeneralTopologyConcept", "IsHausdorff"][   "Restrictions"] // InputForm

Finally, the "Output" property here gives the value of the expression Math["IsHausdorff"][X]:

Entity["GeneralTopologyConcept", "IsHausdorff"]["Output"] // InputForm

When we use such an expression in a theorem or in the definition of another concept, we interpret it as equivalent to what we see in "Output". As we know, stating and understanding mathematics is much easier when we have such shorthands than if all theorems were stated in terms of atomic symbols and basic axioms.

Two of the most exciting properties on this list are "RelatedConcepts" and "RelatedTheorems". One of our goals is to represent mathematical concepts and theorems in a maximally computable way, and these are just an example of some of the computations we hope to do with these entities. A concept appears in "RelatedConcepts" if it is used in the "Restrictions", "Notation" or "Output" of a concept or the "Restrictions", "Notation" or "Statement" of a theorem. A theorem appears in the "RelatedTheorems" of a concept if that concept appears in the "RelatedConcepts" of that theorem. With this in mind, take a closer look at the examples above:

Entity["GeneralTopologyTheorem",    "HausdorffImpliesT1"]["RelatedConcepts"]

Entity["GeneralTopologyConcept", "IsHausdorff"]["RelatedTheorems"]

It is important to emphasize that these relations were not curated, but rather computed, which is possible because of the precise, consistent and expressive language used to encode the concepts and theorems. As a matter of convenience, however, they’ve been precomputed for speed to allow you to, say, easily find the definition of concepts appearing in a theorem.
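
To give a rough idea of why such relations are computable at all, here is a sketch (not the code actually used) of collecting the concept names occurring in a theorem’s encoded parts; it assumes the entity store is loaded and the GeneralTopology` context is on the context path as above:

(* gather every Math["name"] constructor appearing in the restrictions, notation or
   statement; Heads -> True also catches applied heads such as Math["IsHausdorff"][X] *)
relatedConceptNames[thm_] := Union@Cases[
   thm[{"Restrictions", "Notation", "Statement"}],
   Math[name_String] :> name, Infinity, Heads -> True]

relatedConceptNames[Entity["GeneralTopologyTheorem", "HausdorffImpliesT1"]]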

As an example of the power of this approach, we can use the Wolfram Language’s graph functionality to easily analyze the connectivity and structure of the network of topological theorems and concepts in our corpus:

domains = {"GeneralTopologyConcept", "GeneralTopologyTheorem"}; nodes = Join @@ (EntityList /@ domains); labelednodes = Tooltip[Style[#, EntityTypeName[#] /. {         "GeneralTopologyConcept" -> RGBColor[0.65, 1, 0.65],         "GeneralTopologyTheorem" -> RGBColor[1, 1, 0.5]         }], #["SummaryGrid"]] & /@ nodes; edges = Join @@ (Flatten[        Thread /@          Normal@EntityValue[#, "RelatedConcepts",            "EntityAssociation"]] & /@ domains);
Graph with tooltip
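
From here, one way to assemble the graph and query it programmatically is the following (a sketch only; the styling used for the figure above may differ):

(* build the directed concept/theorem graph and rank vertices by how often they are referenced *)
g = Graph[labelednodes, edges, VertexSize -> 0.3];
TakeLargestBy[VertexList[g], VertexInDegree[g, #] &, 5]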

As was the case for topological spaces, a number of extension symbols to the Wolfram Language were introduced in this project. We already encountered the Math and Theorem extensions, but there are also a number of others. For now, they have been placed in a GeneralTopology` context (analogous to the PureMath` context introduced for function spaces). This can be verified by examining the context of such symbols, e.g.:

Context[Math]

The motivation behind appending GeneralTopology` to our context path is also now revealed, namely to suppress verbose context formatting in our outputs (so we will see things like Math instead of GeneralTopology`Math). Here is a complete listing of language extensions introduced in the GeneralTopology` context:

?GeneralTopology`*

Again—as was the case for language extensions introduced for function spaces—some of these may eventually find their way into the Wolfram Language. However, independent of such considerations, these two small projects already show the need for some kind of infrastructure that allows incorporation, sharing and alignment of language extensions from different—and likely independently curated—domains.

We close with some experimental tidbits used to enhance the readability and usability of the concepts and theorems in our entity store. You have probably already noted the nice formatting in "SummaryGrid" and possibly even wondered how it was achieved. The answer is that it was produced using a set of MakeBoxes assignments packaged inside the entity store via the property EntityValue["GeneralTopologyTheorem", "TraditionalFormMakeBoxAssignments"]. Similarly, in order to provide usage messages for the GeneralTopology` symbols (which must be defined prior to having messages associated with them), we have packaged the messages in the special experimental EntityValue["GeneralTopologyTheorem", "Activate"] property, which can be activated as follows:

EntityValue["GeneralTopologyTheorem", "Activate"] // Activate;

The result is the instantiation of standard Mathematica-style usage messages such as:

?SetBuilder

While the eventual implementation details of such features into a standard framework remains the subject of ongoing design and technical discussions, the ease with which it is possible to experiment with such functionality (and to implement semantic representation of mathematical structures in general) is a testament to the power and flexibility of the Wolfram Language as a development and prototyping tool.

Conclusion

These projects undertaken at Wolfram Research during the last year have explored the semantic representation of abstract mathematics. In order to facilitate experimentation with this functionality, we have posted two small notebooks to the cloud (function space entity domain and the topology entity store) that allow interactive exploration and evaluation without the need to install a local copy of Mathematica. We welcome your feedback, comments and even collaboration in these efforts to extend and push the limits of the mathematics that can be represented and computed.

As a final note, we would like to emphasize that significant portions of the work discussed here were carried out as a part of internship projects. If you know or are a motivated mathematics or computer science student who is interested in trying to break new ground in the semantic representation of mathematics, please consider 1) learning the Wolfram Language (which, since you are reading this, you may well have already) and 2) joining the Wolfram internship program next summer!
