Wolfram Blog — News, views, and ideas from the front lines at Wolfram Research. (http://blog.wolfram.com)

Apply Now for the 15th Annual Wolfram Summer School
Lizzie Turner · Tue, 21 Mar 2017 16:54:26 +0000
http://blog.wolfram.com/2017/03/21/apply-now-for-the-15th-annual-wolfram-summer-school/

Wolfram Summer School participants

This year’s Wolfram Summer School will be held at Bentley University in Waltham, Massachusetts, from June 18 to July 7, 2017.

Maybe you’re a researcher who wants to study the dynamics of galaxies with cellular automata. Perhaps you’re an innovator who wants to create a way to read time from pictures of analog clocks or build a new startup with products that use RFID (radio-frequency identification) to track objects. You might be an educator who wants to build an algebra feedback system or write a textbook that teaches designers how to disinvent the need for air conditioning. These projects show the diversity and creativity of some of our recent Summer School students. Does this sound like you? If so, we want you to join us this summer!

We offer three tracks: Science; Technology & Innovation; and Educational Innovation. As a student, you will design and execute a challenging, practical project and acquire an incredible amount of knowledge along the way. At the Summer School, you will gain universally useful skills and network with a community of smart, creative people from all over the world. Many of our students go on to work for us in some capacity, whether as an employee, intern or ambassador.

Mentorship at Wolfram Summer School

If you’re accepted to the Summer School, you will be mentored by top-notch staff who have made powerful and important contributions to the Wolfram Language, Mathematica, Wolfram|Alpha and more—including the inventor of the Wolfram Language, Stephen Wolfram.

The Wolfram Language has a vast depth of built-in algorithms and knowledge. Recent feature additions include a dynamic and ever-growing set of machine learning capabilities (including neural networks), as well as advancements in the areas of image processing, visualization, 3D printing, geographic information systems, natural language processing, computational linguistics, and database and web frameworks, among many others.

Whatever your background, profession or future plans, the Wolfram Summer School can equip you with the computational tools you need to bring any idea to reality.

For more information, visit our website.

Apply Now

The R&D Pipeline Continues: Launching Version 11.1
Stephen Wolfram · Thu, 16 Mar 2017 15:52:44 +0000
http://blog.wolfram.com/2017/03/16/the-rd-pipeline-continues-launching-version-11-1/

A Minor Release That’s Not Minor

I’m pleased to announce the release today of Version 11.1 of the Wolfram Language (and Mathematica). As of now, Version 11.1 is what’s running in the Wolfram Cloud—and desktop versions are available for immediate download for Mac, Windows and Linux.

What’s new in Version 11.1? Well, actually a remarkable amount. Here’s a summary:

Summary of new features

There’s a lot here. One might think that a .1 release, nearly 29 years after Version 1.0, wouldn’t have much new any more. But that’s not how things work with the Wolfram Language, or with our company. Instead, as we’ve built our technology stack and our procedures, rather than progressively slowing down, we’ve been continually accelerating. And now even something as notionally small as the Version 11.1 release packs an amazing amount of R&D, and new functionality.

A Visual Change

There’s one very obvious change in 11.1: the documentation looks different. We’ve spiffed up the design, and on the web we’ve made everything responsive to window width—so it looks good even when it’s in a narrow sidebar in the cloud, or on a phone.

Wolfram Language documentation

We’ve also introduced some new design elements—like the mini-view of the Details section. Most people like to see examples as soon as they get to a function page. But it’s important not to forget the Details—and the mini-view provides what amounts to a little “ad” for them.

Examples and details

Lots of New Functions

Here’s a word cloud of new functions in Version 11.1:

Word cloud of new functions

Altogether there are an impressive 132 new functions—together with another 98 that have been significantly enhanced. These functions represent the finished output of our R&D pipeline in just the few months that have elapsed since Version 11.0 was released.

When we bring out a major “integer” release—like Version 11—we’re typically introducing a bunch of complete, new frameworks. In (supposedly) minor .1 releases like Version 11.1, we’re not aiming for complete new frameworks. Instead, there’s typically new functionality that’s adding to existing frameworks—together with a few (sometimes “experimental”) hints of major new frameworks to come. Oh, and if a complete, new framework does happen to be finished in time for a .1 release, it’ll be there too.

Neural Nets

One very hot area in which Version 11.1 makes some big steps forward is neural nets. It’s been exciting over the past few years to see this area advance so quickly in the world at large, and it’s been great to see the Wolfram Language at the very leading edge of what’s being done.

Our goal is to define a very high-level interface to neural nets that's completely integrated into the Wolfram Language. Version 11.1 adds some recently developed building blocks—in particular 30 new types of neural net layers (more than double what was there in 11.0), together with automated support for recurrent nets. The concept is always to let the neural net be specified symbolically in the Wolfram Language, then let the language automatically fill in the details, interface with low-level libraries, etc. It's something that's very convenient for ordinary feed-forward networks (tensor sizes are all knitted together automatically, etc.)—but for recurrent nets (with variable-length sequences, etc.) it's basically essential if one is going to avoid lots of low-level programming.

Another crucial feature of neural nets in the Wolfram Language is that they're set up to automatically encode images, text or whatever else in an appropriate way. In Version 11.1, NetEncoder and NetDecoder cover a lot of new cases—extending what's integrated into the Wolfram Language.

It’s worth saying that underneath the whole integrated symbolic interface, the Wolfram Language is using a very efficient low-level library—currently MXNet—which takes care of optimizing ultimate performance for the latest CPU and GPU configurations. By the way, another feature enhanced in 11.1 is the ability to store full neural net specifications, complete with encoders, etc., in a portable and reusable .wlnet file.
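As a minimal sketch of that portability (the filename here is illustrative, not from the original post), a net can be written out and read back with the standard Export/Import functions:

```wolfram
(* assumes net is a symbolic neural net object, e.g. net = NetModel["LeNet Trained on MNIST Data"] *)
Export["lenet.wlnet", net];    (* write the net, encoders included, to a .wlnet file *)
net2 = Import["lenet.wlnet"]   (* read it back as a symbolic net object *)
```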

There’s a lot of power in treating neural nets as symbolic objects. In 11.1 there are now functions like NetMapOperator and NetFoldOperator that symbolically build up new neural nets. And because the neural nets are symbolic, it’s easy to manipulate them, for example breaking them apart to monitor what they’re doing inside, or systematically comparing the performance of different structures of net.

In some sense, neural net layers are like the machine code of a neural net programming system. In 11.1 there’s a convenient function—NetModel—that provides pre-built trained or untrained neural net models. As of today, there are a modest number of famous neural nets included, but we plan to add more every week—surfing the leading edge of what’s being developed in the neural net research community, as well as adding some ideas of our own.

Here’s a simple example of NetModel at work:

net = NetModel["LeNet Trained on MNIST Data"]

Now apply the network to some actual data—and see it gets the right answer:
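The input cell did not survive extraction; a minimal sketch, assuming img holds an image of a handwritten digit:

```wolfram
(* img: an image of a handwritten digit (assumption; the original input is not preserved) *)
net[img]  (* returns the recognized digit as an integer *)
```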


But because the net is specified symbolically, it’s easy to “go inside” and “see what it’s thinking”. Here’s a tiny (but neat) piece of functional programming that visualizes what happens at every layer in the net—and, yes, in the end the first square lights up red to show that the output is 0:

FoldPairList[
 {ArrayPlot[ArrayFlatten[Partition[#1, UpTo[5]]], ColorFunction -> "Rainbow"], #2[#1]} &,
 NetExtract[net, "Input"][0], Normal[net]]

More Machine Learning

Neural nets are an important method for machine learning. But one of the core principles of the Wolfram Language is to provide highly automated functionality, independent of underlying methods. And in 11.1 there’s a bunch more of this in the area of machine learning. (As it happens, much of it uses the latest deep learning neural net methods, but for users what’s important is what it does, not how it does it.)

My personal favorite new machine learning function in 11.1 is FeatureSpacePlot. Give it any collection of objects, and it’ll try to lay them out in an appropriate “feature space”. Like here are the flags of countries in Europe:

FeatureSpacePlot[EntityValue[EntityClass["Country", "Europe"], "FlagImage"]]
(* the original input used a free-form "countries in Europe" entity; EntityClass is an equivalent explicit form *)

What’s particularly neat about FeatureSpacePlot is that it’ll immediately use sophisticated pre-trained feature extractors for specific classes of input—like photographs, texts, etc. And there’s also now a FeatureNearest function that’s the analog of Nearest, but operates in feature space. Oh, and all the stuff with NetModel and pre-trained net models immediately flows into these functions, so it becomes trivial, say, to experiment with “meaning spaces”:

FeatureSpacePlot[{"dog", "ant", "bear", "moose", "cucumber", "bean",    "broccoli", "cabbage"},   FeatureExtractor ->    NetModel["GloVe 50-Dimensional Word Vectors Trained on Wikipedia \ and Gigaword-5 Data"]]

Particularly with NetModel, there are all sorts of very useful few-line neural net programs that one can construct. But in 11.1 there are also some major new, more infrastructural, machine learning capabilities. Notable examples are ActiveClassification and ActivePrediction—which build classifiers and predictors by actively sampling a space, learning how to do this as efficiently as possible. There will be lots of end-user applications for ActiveClassification and ActivePrediction, but for us internally the most immediately interesting thing is that we can use these functions to optimize all sorts of meta-algorithms that are built into the Wolfram Language.


Audio

Version 11.0 began the process of making audio—like images—something completely integrated into the Wolfram Language. Version 11.1 continues that process. For example, for desktop systems, it adds AudioCapture to immediately capture audio from a microphone on your computer. (Yes, it’s nontrivial to automatically handle out-of-core storage and processing of large audio samples, etc.) Here’s an example of me saying “hello”:

Play Audio

You can immediately take this, and, say, make a cepstrogram (yes, that’s another new audio function in 11.1):
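The input cell is missing here; given the text, it was presumably a direct application of the new function, something like:

```wolfram
(* audio: the Audio object captured above with AudioCapture (assumption) *)
Cepstrogram[audio]
```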


Images & Visualization

Version 11.1 has quite an assortment of new features for images and visualization. CurrentImage got faster and better. ImageEffect has lots of new effects added. There are new functions and options to support the latest in computational photography and computational microscopy. And images got even more integrated as first-class objects—that one can for example now immediately do arithmetic with:

Sqrt[2 wolfie] - EdgeDetect[wolfie]  (* wolfie: an Image object, here a picture of Wolfie *)

Something else with images—that I’ve long wanted—is the ability to take a bitmap image, and find an approximate vector graphics representation of it:

ImageGraphics[spikey]  (* spikey: a bitmap image of Spikey *)

TextRecognize has also become significantly stronger—in particular being able to pick out structure in text, like paragraphs and columns and the like.

Oh, and in visualization, there are things like GeoBubbleChart, here showing the populations of the largest cities in the US:

GeoBubbleChart[
 EntityValue[Entity["Country", "UnitedStates"]["LargestCities"], {"Position", "Population"}]]

There’s lots of little (but nice) stuff too. Like support for arbitrary callouts in pie charts, optimized labeling of discrete histograms and full support of scaling functions for Plot3D, etc.

More Data

There’s always new data flowing into the Wolfram Knowledgebase, and there’ve also been plenty of completely new things added since 11.0: 130,000+ new types of foods, 250,000+ atomic spectral lines, 12,000+ new mountains, 10,000+ new notable buildings, 300+ types of neurons, 650+ new waterfalls, 200+ new exoplanets (because they’ve recently been discovered), and lots else (not to mention 7,000+ new spelling words). There’s also, for example, much higher resolution geo elevation data—so now a 3D-printable Mount Everest can have much more detail:

ListPlot3D[GeoElevationData[GeoDisk[Entity["Mountain", "MountEverest"]]], Mesh -> None]

Integrated External Services

Something new in Version 11.1 are integrated external services—that allow built-in functions that work by calling external APIs. Two examples are WebSearch and WebImageSearch. Here are thumbnail images found by searching the web for “colorful birds”:

WebImageSearch["colorful birds", "Thumbnails"]

For the heck of it, let’s see what ImageIdentify thinks they are (oh, and in 11.1 ImageIdentify is much more accurate, and you can even play with the network inside it by using NetModel):

ImageIdentify /@ %

Since WebSearch and WebImageSearch use external APIs, users need to pay for them separately. But we’ve set up what we call Service Credits to make this seamless. (Everything’s in the language, of course, so there’s for example $ServiceCreditsAvailable.)

There will be quite a few more examples of integrated services in future versions, but in 11.1, beyond web searching, there’s also TextTranslation. WordTranslation (new in 11.0) handles individual word translation for hundreds of languages; now in 11.1 TextTranslation uses external services to also translate complete pieces of text between several tens of languages:

TextTranslation["This is an integrated external service.", "French"]

More Math, More Algorithms

A significant part of our R&D organization is devoted to continuing our three-decade effort to push the frontiers of mathematical and algorithmic computation. So it should come as no surprise that Version 11.1 has all sorts of advances in these areas. There’s space-filling curves, fractal meshes, ways to equidistribute points on a sphere:

Graphics[HilbertCurve[5]]
MengerMesh[3, 3]
Graphics3D[Sphere[SpherePoints[200], 0.1]]

There are new kinds of spatial, robust and multivariate statistics. There are Hankel transforms, built-in modular inverses, and more. Even in differentiation, there’s something new: nth order derivatives, for symbolic n:

D[x Exp[x], {x, n}]

Here’s something else about differentiation: there are now functions RealAbs and RealSign that are versions of Abs and Sign that are defined only on the real axis, and so can freely be differentiated, without having to give any assumptions about variables.
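For instance (a small illustrative example, not from the original post):

```wolfram
D[RealAbs[x], x]  (* differentiates cleanly, with no assumptions needed on x *)
```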

In Version 10.1, we introduced the function AnglePath, that computes a path from successive segments with specified lengths and angles. At some level, AnglePath is like an industrial-scale version of Logo (or Scratch) “turtle geometry”. But AnglePath has turned out to be surprisingly broadly useful, so for Version 11.1, we’ve generalized it to AnglePath3D (and, yes, there are all sorts of subtleties about frames and Euler angles and so on).

A Language of Granular Dates

When we say “June 23, 1988”, what do we mean? The beginning of that day? The whole 24-hour period from midnight to midnight? Or what? In Version 11.1 we’ve introduced the notion of granularity for dates—so you can say whether a date is supposed to represent a day, a year, a second, a week starting Sunday—or for that matter just an instant in time.

It’s a nice application of the symbolic character of the Wolfram Language—and it solves all sorts of problems in dealing with dates and times. In a way, it’s a little like precision for numbers, but it’s really its own thing. Here for example is how we now represent “the current week”:
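The output cell is missing here; given the text, the input was presumably:

```wolfram
CurrentDate["Week"]  (* a DateObject with week granularity *)
```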


Here’s the current decade:
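The output cell is missing; presumably:

```wolfram
CurrentDate["Decade"]
```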


This is the next month from now:
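The output cell is missing; presumably:

```wolfram
NextDate["Month"]
```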


This says we want to start from next month, then add 7 weeks—getting another month:

NextDate["Month"] + Quantity[7, "Weeks"]

And here’s the result to the granularity of a month:

CurrentDate[%, "Month"]

Talking of dates, by the way, one of the things that’s coming across the system is the use of Dated as a qualifier, for example for properties of entities of the knowledgebase (so this asks for the population of New York City in 1970):

Entity["City", {"NewYork", "NewYork", "UnitedStates"}][Dated["Population", 1970]]

Language Tweaks

I’m very proud of how smooth the Wolfram Language is to use—and part of how that’s been achieved is that for 30 years we’ve been continually polishing it. We’re always making sure everything fits perfectly together—and we’re always adding little conveniences.

One of our principles is that if there’s a lump of computational work that people repeatedly do, then so long as there’s a good name for it (that people will readily remember, and readily recognize when they see it in a piece of code), it should be inserted as a built-in function. A very simple example in Version 11.1 is ReverseSort:

ReverseSort[{1, 2, 3, 4}]

(One might think: what’s the point of this—it’s just Reverse[Sort[...]]. But it’s very common to want to map what’s now ReverseSort over a bunch of objects, and it’s smoother to be able to say ReverseSort /@ ... rather than Reverse[Sort[#]]& /@ ... or Reverse@*Sort /@ ...).
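A small illustration of why the mapped form is convenient:

```wolfram
ReverseSort /@ {{1, 3, 2}, {6, 4, 5}}  (* → {{3, 2, 1}, {6, 5, 4}} *)
```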

Another little convenience: Nearest now has special ways to specify useful things to return. For example, this gives the distances from 2.7 to the 5 nearest values:

Nearest[{1, 2, 3, 4, 5, 6, 7} -> "Distance", 2.7, 5]

CellularAutomaton is a very broad function. Version 11.1 makes it easier to use for common cases by allowing rules to be specified by associations with labeled elements:

ArrayPlot[
 CellularAutomaton[<|"OuterTotalisticCode" -> 110, "Dimension" -> 2, "Neighborhood" -> 5|>,
  {{{1}}, 0}, {{{50}}}]]

We’re always trying to make sure that patterns we’ve established get used as broadly as possible. Like in 11.1, you can use UpTo in lots of new places, like in ImageSize specifications.

We’re also always trying to make sure that things are as general as possible. Like IntegerString now works not only with the standard representation of integers, but also with traditional ones used for different purposes around the world:

IntegerString[12345, "TraditionalChineseFinancial"]

And IntegerName can also now handle different types and languages of names:

IntegerName[12345, {"French", "Ordinal"}]

And there are lots more examples—each making the experience of using the Wolfram Language just a little bit smoother.

A Language of Persistence

If you make a definition like x = 7, or $TimeZone = 11, the definition will persist until you clear it, or until your session is over. But what if you want a definition that persists longer—say across all your sessions? Well, in Version 11.1 that’s now possible, thanks to PersistentValue.

PersistentValue lets you specify a name (like "foo"), and a "persistence location". (It also allows options like PersistenceTime and ExpirationDate.) The persistence location can just be "KernelSession"—which means that the value lasts only for a single kernel session. But it can also be "FrontEndSession", or "Local" (meaning that it should be the same whenever you use the same computer), or "Cloud" (meaning that it’s globally synchronized across the cloud).
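A minimal sketch of the pattern just described (the name "myCounter" is purely illustrative):

```wolfram
PersistentValue["myCounter", "Local"] = 42;  (* persists across sessions on this machine *)
PersistentValue["myCounter", "Local"]        (* later, in a fresh kernel, still retrieves 42 *)
```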

PersistentValue is pretty general. It lets you have values in different places (like different private clouds, for example); then there’s a $PersistencePath that defines the order to look at them in, and a MergingFunction that specifies how (if at all) the values should be merged.

Systems-Level Programming

One of the goals of the Wolfram Language is to be able to interact as broadly as possible with all computational ecosystems. Version 11.1 adds support for the M4A audio format, the .ubj binary JSON format, as well as .ini files and Java .properties files. There’s also a new function, BinarySerialize, that converts any Wolfram Language expression into a new binary (“WXF”) form, optimized for speed or size:

BinarySerialize[RandomGraph[{50, 100}]]

BinaryDeserialize gets it back:
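The input cell is missing; presumably just:

```wolfram
BinaryDeserialize[%]  (* recovers the original Graph expression from the WXF bytes *)
```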


Version 11.0 introduced WolframScript—a command-line interface to the Wolfram Language, running either locally or in the cloud. With WolframScript you can create standalone Wolfram Language programs that run from the shell. There are several enhancements to WolframScript itself in 11.1, but there’s also now a new New > Script menu item that gives you a notebook interface for creating .wls (=“Wolfram Language Script”) files to be run by WolframScript:


Strengthening the Infrastructure

One of the major ways the Wolfram Language has advanced in recent times has been in its deployability. We’ve put a huge amount of work into making sure that the Wolfram Language can be robustly deployed at scale (and there are now lots of examples of successes out in the world).

We make updates to the Wolfram Cloud very frequently (and invisibly), steadily enhancing server performance and user interface capabilities. Along with Version 11.1 we’ve made some major updates. There are a few signs of this in the language.

Like there’s now an option AutoCopy that can be set for any cloud object—and that means that every time the object is accessed, one should get a fresh copy of it. This is very useful if, for example, you want to have a notebook that lots of people can separately modify. (“Explore these ideas; here’s a notebook to start from…”, etc.)

CloudDeploy[APIFunction[...]] makes it extremely easy to deploy web APIs. In Version 11.1 there are some options to automate aspects of how those APIs behave. For example, there’s AllowedCloudExtraParameters, which lets you say that APIs can have parameters like "_timeout" or "_geolocation" automated. There’s also AllowedCloudParameterExtensions (no, it’s not the longest name in the system; that honor currently goes to MultivariateHypergeometricDistribution). What AllowedCloudParameterExtensions does is to let you say not just x=value, but x__url=..., or x__json=....

Another thing about Version 11.1 is that it’s got various features added to support private instances of the Wolfram Cloud—and our major new Wolfram Enterprise Private Cloud product (with a first version released late last year). For example, in addition to $WolframID for the Wolfram Cloud, there’s also $CloudUserID that’s generalized to allow authentication on private clouds. And inside the system, there are all sorts of new capabilities associated with “multicloud authentication” and so on. (Yes, it’s a complicated area—but the symbolic character of the Wolfram Language lets one handle it rather beautifully.)

And There’s More

OK, so I’ve summarized some of what’s in 11.1. There’s a lot more I could say. New functions, and new capabilities—each of which is going to be exciting to somebody. But to me it’s actually pretty amazing that I can write this long a post about a .1 release! It’s a great testament to the strength of the R&D pipeline—and to how much can be done with the framework we’ve built in the Wolfram Language over the past 30 years.

We always work on a portfolio of projects—from small ones that get delivered very quickly, to ones that may take a decade or more to mature. Version 11.1 has the results of several multi-year projects (e.g. in machine learning, computational geometry, etc.), and a great many shorter projects. It’s exciting for us to be able to deliver the fruits of our efforts, and I look forward to hearing what people do with Version 11.1—and to seeing successive versions continue to be developed and delivered.

Visualizing Anatomy
Jeffrey Bryant · Fri, 10 Mar 2017 18:41:34 +0000
http://blog.wolfram.com/2017/03/10/visualizing-anatomy/

Brain image

In Mathematica 10, we introduced support for anatomical structures in EntityValue, which included, among many other things, a “Graphics3D” property that returns a 3D model of the anatomical structure in question. We also styled the models and aligned them with the concepts in the Unified Medical Language System (UMLS).

EntityValue[{Entity["AnatomicalStructure", "LeftFemur"],    Entity["AnatomicalStructure", "LeftHand"]}, "Graphics3D"]

The output is a standard Graphics3D expression, but it contains metadata in the form of an Annotation that allows for additional exploration.

Head[Entity["AnatomicalStructure", "LeftThumb"]["Graphics3D"]]

This means each model knows what lower-level structures it’s made of.

Hip Bone and Structure Below

Computation versus Appearance

I should note that the models being used are not just eye candy. If that were the intent, we might explore low polygon count models and use textures for a more realistic appearance. But these models are not just good for looking at—you can also use them for computation. For meaningful results, you need accurate models, which may be large, so you may need to be patient when downloading and/or rendering them. Keep in mind that some entities, like the brain, have lots of internal structures. So the model may be larger than you expect, although you may not see this internal structure from the outside.

One example of using these models for computation would be calculating the eigenfrequencies of an air-filled transverse colon (let the jokes fly). Finite element method (FEM) calculations are common in medical research today. By retrieving the mesh from AnatomyData, we can perform computations on the model.

ev1 = AnatomyData[Entity["AnatomicalStructure", "TransverseColon"],    "MeshRegion"]

bev = BoundaryMeshRegion[MeshCoordinates[ev1], MeshCells[ev1, 2]];

nde = NDEigenvalues[{-Laplacian[u[x, y, z], {x, y, z}],      DirichletCondition[u[x, y, z] == 0, True]},     u, {x, y, z} \[Element] bev, 20];

Now we can obtain the resonant frequencies of the transverse colon.

freqs = Sqrt[nde] QuantityMagnitude[
     UnitConvert[
      ThermodynamicData["Air", "SoundSpeed", {"Temperature" -> Quantity[310, "Kelvins"]}],
      "Millimeters"/"Seconds"]]/(2 Pi);
(* the input was truncated in the source; the speed-of-sound factor shown here is a reconstruction *)

You can use Sound to listen to the resonant frequencies of the transverse colon.

Play Audio
Colon Frequencies

We can find the surface area of the model by directly operating on the MeshRegion. Units are in square millimeters.
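The input cell is missing; given the text, presumably:

```wolfram
Area[ev1]  (* surface area of the MeshRegion, in square millimeters *)
```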


To compute the volume, we need to convert the MeshRegion into a BoundaryMeshRegion first. Units are in cubic millimeters.

Volume[BoundaryMeshRegion[MeshCoordinates[ev1], MeshCells[ev1, 2]]]

You can even compute the distance between the transverse colon and the region centroid of the large intestine. Units are in millimeters.

RegionDistance[ev1,   AnatomyData[Entity["AnatomicalStructure", "LargeIntestine"],    "RegionCentroid"]]

Lower-resolution models will give lower-quality results.

Structure and Style

To make it easy to render anatomical structures, we introduced AnatomyPlot3D.

AnatomyPlot3D[{Entity["AnatomicalStructure", "LeftFemur"]}]

AnatomyPlot3D allows directives to modify its “primitives,” similar in syntax to Graphics3D.

The output of AnatomyPlot3D is a Graphics3D object; it doesn’t contain the original Entity objects. They get resolved at evaluation time to 3D graphics primitives, and the normal rules of the graphics language apply to them.

AnatomyPlot3D[{RGBColor[0.56, 0, 0], Entity["AnatomicalStructure", "LeftFemur"],
  RGBColor[0.45, 0.58, 0.7], Entity["AnatomicalStructure", "RightFemur"],
  {RGBColor[1, 0.81, 0.49], Entity["AnatomicalStructure", "HipBone"]},
  Entity["AnatomicalStructure", "Sacrum"]}]


Because AnatomyPlot3D can be thought of as an extension to the Graphics3D language, you can mix anatomical structures with normal Graphics3D primitives.

AnatomyPlot3D[{Entity["AnatomicalStructure", "LeftFemur"],    Cuboid[{200, -44, 550}, {222, 44, 575}]}, Boxed -> True,   Axes -> True]

In AnatomyPlot3D, the Graphics3D language has been extended at multiple levels to make use of anatomical structures. Within AnatomyPlot3D, anatomical entities work just like graphics primitives do in Graphics3D. But they can also be used in place of coordinates in Graphics3D primitives like Point and Line. In that context, the entity represents the region centroid of the structure. This allows you, for example, to draw a line from one entity to another.

AnatomyPlot3D[{Entity["AnatomicalStructure", "LeftHand"],    Entity["AnatomicalStructure", "RightHand"], Thick, Red,    Line[{Entity["AnatomicalStructure", "LeftHand"],      Entity["AnatomicalStructure", "RightHand"]}]}]

This concept can be applied to annotate a 3D model using arrows and labels.

AnatomyPlot3D[{Entity["AnatomicalStructure", "SkeletonOfRightHand"],    Red, Arrow[{{-175, -130, 758},      Entity["AnatomicalStructure", "RightFifthMetacarpalBone"]}, 5],    Text["5th metacarpal", {-175, -130, 758}, {0, -1}]}]

You can refer to the subparts of structures and apply styles to them using AnatomyForm. It applies only to anatomical structures (not Graphics3D primitives) and supports a couple of different forms. The following example behaves similarly to a standalone directive, except that it applies only to the anatomical structure, not the Cuboid.

AnatomyPlot3D[{AnatomyForm[EdgeForm[RGBColor[0.42, 0.52, 1]]],
  Entity["AnatomicalStructure", "LeftFemur"],
  Cuboid[{200, -44, 550}, {222, 44, 575}]}, Boxed -> True, Axes -> True]

A more useful form can be used to style subparts.

AnatomyPlot3D[{AnatomyForm[<|
    Entity["AnatomicalStructure", "SetOfBones"] -> RGBColor[0.45, 0.55, 1],
    Entity["AnatomicalStructure", "Muscle"] -> RGBColor[0.93, 0.7, 0.4]|>],
  Entity["AnatomicalStructure", "LeftHand"]}]

AnatomyForm works by allowing you to associate specified directives with specified entities that may or may not exist in the structure you are visualizing. Any Directive supported by Graphics3D is supported, including Lighting, Opacity, EdgeForm, FaceForm, ClipPlanes and any combination thereof. In addition to supporting styles for specific entities, AnatomyForm also supports a default case via the use of an underscore. The following example shows the left humerus in red and everything else transparent and backlit, giving an X-ray-like appearance.

reverselights =    Join[{{"Ambient", Black}},     Table[{"Directional", Hue[.58, .5, 1],       ImageScaled[{Sin[x], Cos[x], -.5}]}, {x, 0, 2 Pi - 2 Pi/8,       2 Pi/8}]];

AnatomyPlot3D[{AnatomyForm[<|_ ->       Directive[Specularity[White, 50], Hue[.58, 0, 1, .1],        Lighting -> reverselights],      Entity["AnatomicalStructure", "LeftHumerus"] -> Red|>],    Entity["AnatomicalStructure", "LeftArm"]},   Background -> Hue[.58, 1, .3], SphericalRegion -> True]

PlotRange can make use of entities to constrain the plot range, which would otherwise include all of the referenced entities. The following example includes several bones of the left lower limb, but the PlotRange is centered on the left patella and padded out from there by a fixed amount.

AnatomyPlot3D[{Entity["AnatomicalStructure", "LeftFemur"],    Entity["AnatomicalStructure", "LeftTibia"],    Entity["AnatomicalStructure", "LeftFibula"],    Entity["AnatomicalStructure", "LeftPatella"]},   PlotRange -> Entity["AnatomicalStructure", "LeftPatella"],   PlotRangePadding -> 50]

SkinStyle is a convenient way to include any enclosing skin that can be found around the specified entities.

AnatomyPlot3D[{Entity["AnatomicalStructure", "RightHand"]},
 SkinStyle -> Automatic]

The default styling can be overridden.

AnatomyPlot3D[{Entity["AnatomicalStructure", "RightHand"]},
 SkinStyle -> Directive[RGBColor[0.39, 0.58, 1], Opacity[.3]]]

You can use ClipPlanes to peel away layers of skin.

With[{z = 1000},
 AnatomyPlot3D[{Entity["AnatomicalStructure", "LeftUpperLimb"],
   ClipPlanes -> {InfinitePlane[{{0, -300, z}, {300, -300, z + 200}, {200, 0, z}}]},
   Entity["AnatomicalStructure", "SkinOfLeftUpperLimb"]}]]

Use multiple clip planes to peel away successive anatomical layers.

AnatomyPlot3D[{AnatomyForm[Association[
    Entity["AnatomicalStructure", "Muscle"] ->
     Directive[RGBColor[0.43, 0.65, 1],
      ClipPlanes -> {InfinitePlane[{{0, -100, 850}, {300, 0, 1050}, {0, 0, 850}}]}],
    Entity["AnatomicalStructure", "Skin"] ->
     Directive[RGBColor[0.43, 0.69, 0.32],
      ClipPlanes -> {InfinitePlane[{{0, -200, 1050}, {300, 0, 1250}, {0, 0, 1050}}]}],
    Entity["AnatomicalStructure", "SetOfBones"] -> RGBColor[0.9, 0.64, 0.47],
    All -> Transparent]],
  Entity["AnatomicalStructure", "LeftUpperLimb"],
  Entity["AnatomicalStructure", "Skin"]},
 PlotRange -> {{0, 400}, {-300, 100}, {650, 1375}},
 Background -> Hue[.58, 1, .3]]

Apply geometric transformations to anatomical structures to rotate them. The following example includes many bones in the skull, but applies a rotation to just the elements of the lower jaw around the temporomandibular joint.

Grid[{Table[    AnatomyPlot3D[{Entity["AnatomicalStructure", "Neurocranium"],       Entity["AnatomicalStructure", "ZygomaticBone"],       Entity["AnatomicalStructure", "SphenoidBone"],       Entity["AnatomicalStructure", "NasalBone"],       Entity["AnatomicalStructure", "Maxilla"],       Entity["AnatomicalStructure", "MaxillaryDentition"],       Rotate[{Entity["AnatomicalStructure", "Mandible"],         Entity["AnatomicalStructure", "MandibularDentition"]},        t Degree, {1, 0, 0},        Entity["AnatomicalStructure", "TemporomandibularJoint"]]},      SphericalRegion -> True,      PlotRange -> Entity["AnatomicalStructure", "Skull"],      PlotRangePadding -> 20], {t, {0, 10, 20}}]}]

A mix of styles can be useful for highlighting different tissue types in the head.

AnatomyPlot3D[{{ClipPlanes -> {InfinitePlane[{{-50, -146,          1500}, {-20, -146 - 20, 1550}, {-50, -146 - 20, 1500}}],       InfinitePlane[{{-50, -146, 1550}, {-20, -146 - 20,          1550}, {-50, -146 - 20, 1550}}]},     Entity["AnatomicalStructure", "Skull"]},    Entity["AnatomicalStructure", "Eye"],    Entity["AnatomicalStructure", "Brain"],    Directive[Specularity[White, 50], Hue[.58, 0, 1, .1],     Lighting ->      Join[{{"Ambient", Black}},       Table[{"Directional", Hue[.58, .5, 1],         ImageScaled[{Sin[x], Cos[x], -.5}]}, {x, 0, 2 Pi - 2 Pi/8,         2 Pi/8}]]], Entity["AnatomicalStructure", "Nose"],    Entity["AnatomicalStructure", "Neck"],    Entity["AnatomicalStructure", "SkinOfHead"],    Entity["AnatomicalStructure", "SkinOfNeck"],    Entity["AnatomicalStructure", "Lip"],    Entity["AnatomicalStructure", "Ear"]},   PlotRange -> Entity["AnatomicalStructure", "Head"],   PlotRangePadding -> 10, Background -> Hue[.58, 1, .3],   SphericalRegion -> True, ViewAngle -> Pi/5, ImageSize -> 600,   ViewPoint -> {-1.4, -1.29, 0.07}, ViewVertical -> {0.16, -0.08, 1.}]

A similar approach can be used in the torso for different organs.

reverselights =    Join[{{"Ambient", Black}},     Table[{"Directional", Hue[.58, .5, 1],       ImageScaled[{Sin[x], Cos[x], -.5}]}, {x, 0, 2 Pi - 2 Pi/8,       2 Pi/8}]];

reverselights2 =    Join[{{"Ambient", GrayLevel[.3]}},     Table[{"Directional", Yellow,       ImageScaled[{Sin[x], Cos[x], -.5}]}, {x, 0, 2 Pi - 2 Pi/8,       2 Pi/8}]];

AnatomyPlot3D[{Entity["AnatomicalStructure", "RibCage"],    Entity["AnatomicalStructure", "Esophagus"],    Entity["AnatomicalStructure", "Stomach"],    Entity["AnatomicalStructure", "SmallIntestine"],    Entity["AnatomicalStructure", "LargeIntestine"],    Entity["AnatomicalStructure", "Spleen"],    Entity["AnatomicalStructure", "Liver"],    Entity["AnatomicalStructure", "Lung"],    Entity["AnatomicalStructure", "Heart"],    Entity["AnatomicalStructure", "Kidney"],    Entity["AnatomicalStructure",     "UrinaryBladder"], {Directive[Specularity[White, 50],      Hue[.58, 0, 1, .1], Lighting -> reverselights],     Entity["AnatomicalStructure", "MusculatureOfTrunk"],     Entity["AnatomicalStructure", "MusculatureOfPectoralGirdle"],     Entity["AnatomicalStructure", "PectoralisMajor"]}},   Background -> Hue[.58, 1, .3], SphericalRegion -> True,   ViewAngle -> Pi/10, ImageSize -> 600,   BaseStyle -> {RenderingOptions -> {"DepthPeelingLayers" -> 20}}]

Here is an advanced example showing the use of ClipPlanes to remove muscles below a specific cutting plane.

reverselights =    Join[{{"Ambient", Black}},     Table[{"Directional", Hue[.58, .5, 1],       ImageScaled[{Sin[x], Cos[x], -.5}]}, {x, 0, 2 Pi - 2 Pi/8,       2 Pi/8}]];

With[{z = 1100},  AnatomyPlot3D[{    {ClipPlanes -> {InfinitePlane[{{0, -100, z}, {0, 0, z}, {300, 0,           z}}]}, AnatomyForm[<|       Entity["AnatomicalStructure", "Muscle"] ->         Directive[Specularity[White, 50], Hue[.58, 0, 1, .1],          Lighting -> reverselights]|>],      Entity["AnatomicalStructure", "LeftUpperLimb"]},    {ClipPlanes -> {InfinitePlane[{{0, -100, z}, {300, 0, z}, {0, 0,           z}}]}, Entity["AnatomicalStructure", "LeftUpperLimb"]}    }, Background -> Hue[.58, 1, .3], ImageSize -> {400, 700},    SphericalRegion -> True, ViewAngle -> Pi/10]]

Inner structures can be differentiated using styles, in this case within the brain.

reverselights =    Join[{{"Ambient", Black}},     Table[{"Directional", Hue[.58, .5, 1],       ImageScaled[{Sin[x], Cos[x], -.5}]}, {x, 0, 2 Pi - 2 Pi/8,       2 Pi/8}]];

reverselights2 =    Join[{{"Ambient", GrayLevel[.3]}},     Table[{"Directional", Yellow,       ImageScaled[{Sin[x], Cos[x], -.5}]}, {x, 0, 2 Pi - 2 Pi/8,       2 Pi/8}]];

With[{style =     Directive[Specularity[White, 50], Red,      Lighting -> reverselights2]},   AnatomyPlot3D[{AnatomyForm[<|      Entity["AnatomicalStructure", "Brainstem"] -> style,       Entity["AnatomicalStructure", "Hypothalamus"] -> style,       Entity["AnatomicalStructure", "Thalamus"] -> style,       Entity["AnatomicalStructure", "LateralVentricle"] -> style,       Entity["AnatomicalStructure", "ThirdVentricle"] -> style,       Entity["AnatomicalStructure", "FourthVentricle"] -> style,       Entity["AnatomicalStructure", "LateralGeniculateBody"] -> style,       Entity["AnatomicalStructure", "MedialGeniculateBody"] ->        style, _ ->        Directive[Specularity[White, 50], Hue[.58, 0, 1, .1],         Lighting -> reverselights]|>],     Entity["AnatomicalStructure", "Brain"]},    Background -> Hue[.58, 1, .3], ImageSize -> 600,    SphericalRegion -> True, ViewAngle -> Pi/6,    ViewPoint -> {-1.4, -1.29, 0.07}, ViewVertical -> {0.16, -0.08, 1.},    BaseStyle -> {RenderingOptions -> {"DepthPeelingLayers" -> 20}}]]

Here are links to some animations rendered using AnatomyPlot3D:

Mandible Opening and Closing

Rotating Head

Rotating Brain

Rotating Torso

Erasing Muscles in Arm

Scaling Transform Applied to the Bones of the Skull

As time goes on, we will continue to add models and tools that let you explore human anatomy more deeply.

To download this post as a CDF, click here. New to CDF? Get your copy for free with this one-time download.

How to Use Your Smartphone for Vibration Analysis, Part 1: The Wolfram Language
http://blog.wolfram.com/2017/03/02/how-to-use-your-smartphone-for-vibration-analysis-part-1-the-wolfram-language/
Thu, 02 Mar 2017 17:27:27 +0000, Håkan Wettergren

Until now, it has been difficult for the average engineer to perform simple vibration analysis. The initial cost for simple equipment, including software, may be several thousand dollars, and it is not unusual for advanced equipment and software to cost ten times as much. Normally, a vibration specialist starts an investigation with a hammer impact test. In its simplest and most common form, an accelerometer is mounted on a structure, and a special impact hammer is used to excite the structure at several locations. The accelerometer and hammer-force signals are recorded, and modal analysis is then used to get a preliminary understanding of the behavior of the system. The minimum equipment requirements for such a test are an accelerometer, an impact hammer, amplifiers, a signal recorder and analysis software.

I’ve figured out how to use the Wolfram Language on my smartphone to sample and analyze machine vibration and noise, and to perform surprisingly good vibration analysis. I’ll show you how, and give you some simple Wolfram Language code to get you started.

Throughout the history of the development of machines, vibration and sound measurements have been important issues. There are two reasons for this:

  1. Vibrations are a source of noise, can decrease operating performance and can also cause workplace injuries such as “vibration white fingers,” also known as hand-arm vibration syndrome (HAVS), resulting from continuous use of handheld machinery vibrating at certain key frequencies.
  2. Vibrations can indicate wear or defects of the bearings and gears, and so analysis of vibrations is a useful and commonly used tool for fault detection.

The many applications of vibration analysis have led to a huge number of engineers studying this subject in universities all over the world. The research area of “machine vibrations” has its own conferences and publications. Companies specialize in developing different kinds of equipment and services. Most large machine-building industries have departments specializing in vibrations.

So here we go: recording the sound of a vibrating machine with an iPhone is simple. I used the iPhone’s built-in digital voice recorder, Voice Memo. The recording can be converted to an MP3 file just by saving it in the MP3 format in iTunes. The default Apple M4A format cannot be used directly in the Wolfram Language. If iTunes is not available, there are a lot of free converters on the internet that will change your M4A files to MP3 files.

Play Audio

I’ll use an industrial gearbox as an example. The example itself is not that important, but it does suggest possible areas where the method can be used. Some data and dimensions have been modified from the original application.

A gearbox is making a lot of noise. What is the problem?

In the figure below, a motor drives the input shaft. The shaft speed is reduced by the gearbox, and the power is taken off at the output shaft. Typically, the motor is a diesel or electric engine; in this example, it is a diesel engine. The numbers of teeth are z1 = 23, z2 = 65, z3 = 27 and z4 = 43, respectively. The largest wheels are about one meter in diameter.


We ran the engine at 1200 rpm, recording five seconds of sound with my iPhone. Converted to MP3, the sound file was named “measurement.mp3”. Then all I needed to do was import it into the Wolfram Language to plot the frequency spectrum.
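The code behind the omitted figure is not reproduced in this feed, but the import-and-plot step can be sketched along these lines (the file name matches the recording above; the plot options are illustrative):

```
(* import the converted recording and plot its frequency spectrum *)
audio = Import["measurement.mp3"];
Periodogram[audio, PlotRange -> {{0, 1000}, All}]
```

Spectrogram[audio] gives the corresponding time-frequency view used later in the post.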


The excitation frequency in hertz of the gear contact is z1*rpm/60 on the input shaft and (z1/z2)*z3*rpm/60 on the output shaft. Marking the histogram for the input and output shafts' gear mesh excitation frequencies (red and green, respectively) makes it clear that these frequencies are correlated with the spectrogram.
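As a quick numerical check of those formulas, using the tooth counts and test speed given above:

```
(* gear mesh excitation frequencies at the 1200 rpm test speed *)
z1 = 23; z2 = 65; z3 = 27; rpm = 1200;
z1 rpm/60.          (* input shaft: 460. Hz *)
(z1/z2) z3 rpm/60.  (* output shaft: ~191.1 Hz *)
```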

Excitation frequency histogram

Let’s make the spectrogram interactive: often we don’t want to use the whole sound file, so we add an option to select a start and end time within the file. Let’s also make it possible to change the rpm.

Manipulate start and stop time
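The interactive code itself appears only as an image above; a minimal sketch of the idea, with illustrative control names and ranges (the rpm control is omitted here), might look like:

```
(* select a time window within the imported audio and view its spectrogram *)
Manipulate[
 Spectrogram[AudioTrim[audio, {start, stop}]],
 {{start, 0, "start (s)"}, 0, 4},
 {{stop, 5, "stop (s)"}, 1, 5}]
```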

The analysis toolkit is ready. The options PlotRange, MaxValue and Manipulate in the plot above are set manually. Of course, this can be developed further. But we stop here to keep it simple.

So what happened with the real application investigation? Well, the same analysis as above was performed over the whole rpm region. The maxima of the peaks at each rpm are plotted below.

Amplitude and frequency graph

The input shaft’s highest value is 747.7 rpm, and the output shaft’s is 1,800 rpm. Both become excited at about 287 Hz, the gearbox fundamental resonance frequency. Note that
747.7/60*23 = 286.6 Hz and 1800*23/65*27/60 = 286.6 Hz

We concluded that the gear mesh was not optimized for smooth running and that the gearbox had a bad resonance frequency. We opened the gearbox and confirmed wear on the teeth, which suggested possibilities for improving the contact pattern. We improved contact by selecting an optimum helical angle, as well as tip relief and correct crowning. Tip relief is a surface modification of the tooth profile in which a small amount of material is removed near the tip of the gear tooth. Crowning is a surface modification in the lengthwise direction in which a small amount of material is removed near the ends of the tooth to prevent contact there.


I have used this method, utilizing my smartphone and the Wolfram Language, several times for real-world and often complex investigations and applications. Often, measurement specialists have already gotten involved before I arrive. But they may have missed the basics because they are using comprehensive measurement programs.

The method I describe here may sometimes yield a similar—or even better—understanding of the problem in just a few minutes at no cost. Well worth trying.

Hidden Figures: Modern Approaches to Orbit and Reentry Calculations
http://blog.wolfram.com/2017/02/24/hidden-figures-modern-approaches-to-orbit-and-reentry-calculations/
Fri, 24 Feb 2017 18:08:39 +0000, Jeffrey Bryant

The movie Hidden Figures was released in theaters recently and has been getting good reviews. It also deals with an important time in US history, touching on a number of topics, including civil rights and the Space Race. The movie details the hidden story of Katherine Johnson and her coworkers (Dorothy Vaughan and Mary Jackson) at NASA during the Mercury missions and the United States' early explorations into manned space flight. The movie focuses heavily on the dramatic civil rights struggle of African American women at NASA at the time, and these struggles are set against the number-crunching ability of Johnson and her coworkers. Computers were in their early days then, so the ability of Johnson and her team to perform complicated orbital mechanics calculations without a computer provided an important sanity check against the early computer results.

Row[{Show[    Entity["Movie", "HiddenFigures::k39bj"][     EntityProperty["Movie", "Image"]], ImageSize -> 101], "  ",    Show[Entity["PopularCurve", "KatherineJohnsonCurve"][     EntityProperty["PopularCurve", "Image"]], Axes -> False,     Background -> LightBlue, ImageSize -> 120]}]

I will touch on two aspects of her scientific work that were mentioned in the film: orbit calculations and reentry calculations. For the orbit calculation, I will first exactly follow what Johnson did and then compare with a more modern, direct approach utilizing an array of tools made available with the Wolfram Language. Where the movie mentions the solving of differential equations using Euler’s method, I will compare this method with more modern ones in an important problem of rocketry: computing a reentry trajectory from the rocket equation and drag terms (derived using atmospheric model data obtained directly from within the Wolfram Language).

The movie doesn’t focus much on the math details of the types of problems Johnson and her team dealt with, but for the purposes of this blog, I hope to provide at least a flavor of the approaches one might have used in Johnson’s day compared to the present.

Placing a Satellite over a Selected Position

One of the earliest papers that Johnson coauthored, “Determination of Azimuth Angle at Burnout for Placing a Satellite over a Selected Earth Position,” deals with the problem of making sure that a satellite can be placed over a specific Earth location after a specified number of orbits, given a certain starting position (e.g. Cape Canaveral, Florida) and orbital trajectory. The approach that Johnson’s team used was to determine the azimuthal angle (the angle formed by the spacecraft’s velocity vector at the time of engine shutoff with a fixed reference direction, say north) to fire the rocket in, based on other orbital parameters. This is an important step in making sure that an astronaut is in the correct location for reentry to Earth.

NASA Technical Note

Constants and Initial Processing

In the paper, Johnson defines a number of constants and input parameters needed to solve the problem at hand. One detail to explain is the term “burnout,” which refers to the shutoff of the rocket engine. After burnout, orbital parameters are essentially “frozen,” and the spacecraft moves solely under the Earth’s gravity (as determined, of course, through Newton’s laws). In this section, I follow the paper’s unit conventions as closely as possible.

v1 = 25761.345 60; (* satellite velocity in ft/min *)
r1 = 21637933.; (* circular orbit radius in feet *)
\[Gamma]1 = 0.5; (* elevation angle between local horizon and velocity vector in degrees *)
vc = 25506.28 60; (* circular orbit velocity in ft/min *)
poverr1 = 1.020022269; (* p is the semilatus rectum of the orbit ellipse *)
a = 22081775.57; (* semimajor axis of orbit in feet *)
t\[Theta]1 = 5.842; (* t[\[Theta]1] is the time from perigee for \[Theta]1 in min, where \[Theta]1 is the angle in the orbit plane between perigee and burnout *)
g0 = 115991.595; (* acceleration due to gravity in ft/min^2 *)
R = 2.090226 10^7; (* Earth radius in feet *)
(* launch coordinates *)
\[Phi]1 = 28.50; (* launch latitude *)
\[Lambda]1 = 279.45; (* launch longitude *)
\[Phi]2 = 34.00; (* intended pass-over latitude *)
\[Lambda]2 = 241.00; (* intended pass-over longitude *)
n = 3; (* number of orbits *)
\[Omega]Capitale = 0.25068; (* angular velocity of Earth in degrees/min *)

For convenience, some functions are defined to deal with angles in degrees instead of radians. This allows angles and times to be combined smoothly in the calculations that follow:

SinDegree[x_] := Sin[x Degree]
CosDegree[x_] := Cos[x Degree]
TanDegree[x_] := Tan[x Degree]
SecDegree[x_] := Sec[x Degree]
ArcSinDegree[x_] := ArcSin[x]/Degree
ArcCosDegree[x_] := ArcCos[x]/Degree
ArcTanDegree[x_] := ArcTan[x]/Degree

ArcTanDegree[x_, y_] := ArcTan[x, y]/Degree

Johnson goes on to describe several other derived parameters, though it’s interesting to note that she sometimes adopted values for these rather than using the values returned by her formulas. Her adopted values were often close to the values obtained by the formulas. For simplicity, the values from the formulas are used here.

Semilatus rectum of the orbit ellipse:

p = r1*(v1/vc)^2 CosDegree[\[Gamma]1];

Angle in orbit plane between perigee and burnout point:

\[Theta]1 = ArcTanDegree[TanDegree[\[Gamma]1] ((p/r1)/(p/r1 - 1))];

Orbit eccentricity:

e = (1/CosDegree[\[Theta]1]) (p/r1 - 1);

Orbit period:

T = 2 Pi Sqrt[R/g0] Sqrt[(a/R)^3];

Eccentric anomaly:

Eanomaly[\[Theta]_] := 2 ArcTan[Tan[\[Theta]/2] Sqrt[(1 - e)/(1 + e)]]

To describe the next parameter, it’s easiest to quote the original paper: “The requirement that a satellite with burnout position φ1, λ1 pass over a selected position φ2, λ2 after the completion of n orbits is equivalent to the requirement that, during the first orbit, the satellite pass over an equivalent position with latitude φ2 the same as that of the selected position but with longitude λ2e displaced eastward from λ2 by an amount sufficient to compensate for the rotation of the Earth during the n complete orbits, that is, by the polar hour angle n ωE T. The longitude of this equivalent position is thus given by the relation”:

\[Lambda]2e = \[Lambda]2 + n \[Omega]Capitale T;

Time from perigee for angle θ:

t[\[Theta]_] :=   T/(2 Pi) (Eanomaly[\[Theta] Degree] -      e Sin[Eanomaly[\[Theta] Degree]])


Part of the final solution is to determine values for intermediate parameters δλ1-2e and θ2e. This can be done in a couple of ways. First, I can use ContourPlot to obtain a graphical solution via equations 19 and 20 from the paper:

Clear[\[CapitalDelta]\[Lambda]1minus2e, \[Theta]2e]; ContourPlot[  Evaluate[{\[CapitalDelta]\[Lambda]1minus2e  == \[Lambda]2e - \ \[Lambda]1 + \[Omega]Capitale (t[\[Theta]2e ] - t[\[Theta]1]),     CosDegree[\[Theta]2e - \[Theta]1] ==      SinDegree[\[Phi]2] SinDegree[\[Phi]1] +       CosDegree[\[Phi]2] CosDegree[\[Phi]1] CosDegree[\[CapitalDelta]\ \[Lambda]1minus2e ] }],                                               {\[CapitalDelta]\ \[Lambda]1minus2e, 0, 60 }  , {\[Theta]2e, 0, 60 },   Prolog -> {Red, PointSize[Large],     Point[{31.948028324349036`, 51.7127788640602`}]}]

FindRoot can be used to find the solutions numerically:

Clear[\[CapitalDelta]\[Lambda]1minus2e, \[Theta]2e]; FindRoot[Evaluate[{\[CapitalDelta]\[Lambda]1minus2e  == \[Lambda]2e - \ \[Lambda]1 + \[Omega]Capitale (t[\[Theta]2e ] - t[\[Theta]1]),     CosDegree[\[Theta]2e - \[Theta]1] ==      SinDegree[\[Phi]2] SinDegree[\[Phi]1] +       CosDegree[\[Phi]2] CosDegree[\[Phi]1] CosDegree[\[CapitalDelta]\ \[Lambda]1minus2e ] }],                                          { \ {\[CapitalDelta]\[Lambda]1minus2e, 30 }, {\[Theta]2e, 55 }}]

{\[CapitalDelta]\[Lambda]1minus2e -> 32.1445, \[Theta]2e -> 51.8351}

Of course, Johnson didn’t have access to ContourPlot or FindRoot, so her paper describes an iterative technique. I translated the technique described in the paper into the Wolfram Language, and also solved for a number of other parameters via her iterative method. Because the base computations are for a spherical Earth, corrections for oblateness are included in her method:

\[CapitalDelta]\[Omega] = 0; \[CapitalDelta]\[Phi]2 = 0; \[CapitalDelta]\[CapitalOmega] = 0; \[CapitalDelta]\[Lambda]2 = 0; iO = 2;  Table[   \[Lambda]2e = \[Lambda]2 - \[CapitalDelta]\[Lambda]2 +      n \[Omega]Capitale T;   \[CapitalDelta]\[Lambda]1minus2e = \[Lambda]2e - \[Lambda]1 + \ \[Omega]Capitale*      If[iter == 1,        T/360 (\[Lambda]2e - \[Lambda]1) +         t[\[Theta]1], (tOf\[Theta]2e - t[\[Theta]1])];   \[Theta]2e =     ArcCosDegree[      SinDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2] SinDegree[\[Phi]1] +       CosDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2] CosDegree[\[Phi]1] \ CosDegree[\[CapitalDelta]\[Lambda]1minus2e]] + \[Theta]1;    \[Psi]1 =     ArcSinDegree[(SinDegree[\[CapitalDelta]\[Lambda]1minus2e] \ CosDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2])/      SinDegree[\[Theta]2e - \[Theta]1]];   i = ArcCosDegree[     Piecewise[{{CosDegree[\[Phi]1] SinDegree[\[Psi]1],         0 < \[Psi]1 < 180}, {-CosDegree[\[Phi]1] SinDegree[\[Psi]1],         180 <= \[Psi]1 < 360}}]];   \[Omega] = ArcSinDegree[SinDegree[\[Phi]1]/SinDegree[i]] - \[Theta]1;   (* after the first iteration,         correct \[Lambda]2 and \[Phi]2 for oblateness and     then keep this correction| page 18, 19 *)   If[iter == iO, \[CapitalDelta]\[Omega] =       3.4722*^-3 (R/p)^2 (R/a)^(3/2) (5 CosDegree[i]^2 - 1) (n T +         t[\[Theta]2e] - t[\[Theta]1])];   If[iter ==      iO, \[CapitalDelta]\[Phi]2 = \[CapitalDelta]\[Omega] SinDegree[       i] CosDegree[\[Omega] + \[Theta]2e]/CosDegree[\[Phi]2]];    If[iter ==      iO, \[CapitalDelta]\[CapitalOmega] = -6.9444*^-3 (R/p)^2 (R/a)^(3/         2) CosDegree[i] (n T + t[\[Theta]2e] - t[\[Theta]1])];   If[iter ==      iO, \[CapitalDelta]\[Lambda]2 = \[CapitalDelta]\[Omega] CosDegree[        i] SecDegree[\[Omega] + \[Theta]2e]^2/        (1 +           CosDegree[             i]^2 TanDegree[\[Omega] + \[Theta]2e]^2) + \ \[CapitalDelta]\[CapitalOmega]];   \[CapitalDelta]\[Lambda]N1 =     ArcTanDegree[SinDegree[\[Phi]1] TanDegree[\[Psi]1]];   
\[Lambda]Nref = \[Lambda]1 - \[CapitalDelta]\[Lambda]N1;   tOf\[Theta]2e = t[\[Theta]2e];   {\[Theta]2e, tOf\[Theta]2e}, {iter, 1, 8, 1}];

Graphing the value of θ2e for the various iterations shows a quick convergence:

ListPlot[%[[All, 2]], PlotRange -> {10, 15}, Filling -> Axis,                    AxesLabel -> {"iteration", "\[Theta]2e"},   ImageSize -> 360]


ksol = {   "\[CapitalDelta]\[Lambda]1minus2e" -> \ \[CapitalDelta]\[Lambda]1minus2e,    "\[CapitalDelta]\[Lambda]2" -> \[CapitalDelta]\[Lambda]2,    "\[Theta]2e" -> \[Theta]2e , "\[Psi]1" -> \[Psi]1,    "\[Omega]" -> \[Omega],    "\[CapitalDelta]\[Omega]" -> \[CapitalDelta]\[Omega],    "\[CapitalDelta]\[CapitalOmega]" -> \[CapitalDelta]\[CapitalOmega],    "i" -> N@i, "\[CapitalDelta]\[Phi]2" -> \[CapitalDelta]\[Phi]2 }

{"\[CapitalDelta]\[Lambda]1minus2e" -> 31.0612,   "\[CapitalDelta]\[Lambda]2" -> 1.02742, "\[Theta]2e" -> 50.9335,   "\[Psi]1" -> 70.5551, "\[Omega]" -> 34.5578,   "\[CapitalDelta]\[Omega]" -> 1.96276,   "\[CapitalDelta]\[CapitalOmega]" -> -1.33792, "i" -> 34.0355,   "\[CapitalDelta]\[Phi]2" -> 0.0841712}

I can convert the method into a FindRoot command as follows (this takes the oblateness effects into account in a fully self-consistent manner and calculates values for all nine variables involved in the equations):

fr := FindRoot[Evaluate[    {\[CapitalDelta]\[Lambda]1minus2e  == (\[Lambda]2 - \ \[CapitalDelta]\[Lambda]2 +          n \[Omega]Capitale T) - \[Lambda]1 + \[Omega]Capitale*(t[\ \[Theta]2e] - t[\[Theta]1]),      CosDegree[\[Theta]2e - \[Theta]1] ==       SinDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2] SinDegree[\[Phi]1] +        CosDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2] CosDegree[\[Phi]1] \ CosDegree[\[CapitalDelta]\[Lambda]1minus2e ],     SinDegree[\[Psi]1] == \ (SinDegree[\[CapitalDelta]\[Lambda]1minus2e] CosDegree[\[Phi]2 - \ \[CapitalDelta]\[Phi]2])/SinDegree[\[Theta]2e - \[Theta]1],     CosDegree[i] == CosDegree[\[Phi]1] SinDegree[\[Psi]1],     SinDegree[\[Omega] + \[Theta]1] == SinDegree[\[Phi]1]/SinDegree[i],     \[CapitalDelta]\[Omega] ==        3.4722*^-3 (R/p)^2 (R/a)^(3/2) (5 CosDegree[i]^2 - 1) (n T +          t[\[Theta]2e] - t[\[Theta]1]),     \[CapitalDelta]\[Phi]2 == \[CapitalDelta]\[Omega] SinDegree[        i] CosDegree[\[Omega] + \[Theta]2e]/        CosDegree[\[Phi]2 - \[CapitalDelta]\[Phi]2],     \[CapitalDelta]\[CapitalOmega] == -6.9444*^-3 (R/p)^2 (R/a)^(3/          2) CosDegree[i] (n T + t[\[Theta]2e] - t[\[Theta]1]),     \[CapitalDelta]\[Lambda]2 == \[CapitalDelta]\[Omega] CosDegree[         i] SecDegree[\[Omega] + \[Theta]2e]^2/(1 +            CosDegree[              i]^2 TanDegree[\[Omega] + \[Theta]2e]^2) + \ \[CapitalDelta]\[CapitalOmega]}],   {{\[CapitalDelta]\[Lambda]1minus2e, 31}, {\[CapitalDelta]\[Lambda]2,      1}, {\[Theta]2e, 51}, {\[Psi]1, 70}, {\[Omega],      34}, {\[CapitalDelta]\[Omega],      2}, {\[CapitalDelta]\[CapitalOmega], -1.3}, {i,      34}, {\[CapitalDelta]\[Phi]2, 0.1}}]

Clear[\[CapitalDelta]\[Lambda]1minus2e, \[CapitalDelta]\[Lambda]2, \ \[Theta]2e, \[Psi]1, \[Omega], \[CapitalDelta]\[Omega], \ \[CapitalDelta]\[CapitalOmega], i, \[CapitalDelta]\[Phi]2]; wlsol = fr

{\[CapitalDelta]\[Lambda]1minus2e ->    31.062, \[CapitalDelta]\[Lambda]2 -> 1.02664, \[Theta]2e ->    50.9332, \[Psi]1 -> 70.5964, \[Omega] ->    34.61, \[CapitalDelta]\[Omega] ->    1.96527, \[CapitalDelta]\[CapitalOmega] -> -1.33778,   i -> 34.0139, \[CapitalDelta]\[Phi]2 -> 0.102921}

Interestingly, even the iterative root-finding steps of this more complicated system converge quite quickly:

(OwnValues[fr] /.      HoldPattern[FindRoot[args__]] :>       Reap[FindRoot[args, StepMonitor :> Sow[          {\[CapitalDelta]\[Lambda]1minus2e, \ \[CapitalDelta]\[Lambda]2, \[Theta]2e, \[Psi]1, \[Omega], \ \[CapitalDelta]\[Omega], \[CapitalDelta]\[CapitalOmega],            i, \[CapitalDelta]\[Phi]2}]]])[[1, 2]][[2]]

{{{31.0619, 1.02671, 50.9332, 70.5861, 34.6025, 1.96521, -1.33777,     34.0149, 0.102885}, {31.062, 1.02664, 50.9332, 70.5964, 34.61,     1.96527, -1.33778, 34.0139, 0.102922}, {31.062, 1.02664, 50.9332,     70.5964, 34.61, 1.96527, -1.33778, 34.0139, 0.102921}, {31.062,     1.02664, 50.9332, 70.5964, 34.61, 1.96527, -1.33778, 34.0139,     0.102921}}}


With the orbital parameters determined, it is desirable to visualize the solution. First, some critical parameters from the previous solutions need to be extracted:

{\[Theta]2e, \[Omega], i} = {"\[Theta]2e", "\[Omega]", "i"} /. ksol;

Next, the latitude and longitude of the satellite need to be derived as functions of the in-plane angle θs:

\[CapitalDelta]\[Lambda]Ns[\[Theta]s_] :=   ArcTanDegree[CosDegree[\[Omega] + \[Theta]s],    CosDegree[-i] SinDegree[\[Omega] + \[Theta]s]]

\[Lambda]N[\[Theta]s_] := \[Lambda]Nref - \[Omega]Capitale \ (t[\[Theta]s] - t[\[Theta]1]) + \[CapitalDelta]\[Lambda]Ns[\[Theta]s]

φs and λs are the latitude and longitude as functions of θs:

\[Phi]s[\[Theta]s_] :=   ArcSinDegree[SinDegree[i] SinDegree[\[Omega] + \[Theta]s]]

satpts[n_] :=    Table[{\[Phi]s[\[Theta]s], \[Lambda]s[\[Theta]s, n]}, {\[Theta]s, 0,      360, .01}];

The satellite ground track can be constructed by creating a table of points:

points = {Table[{\[Lambda]s[\[Theta]s,        0], \[Phi]s[\[Theta]s]}, {\[Theta]s, \[Theta]1, 180, .1}],       Table[{\[Lambda]s[\[Theta]s,        1], \[Phi]s[\[Theta]s]}, {\[Theta]s, 0, 360, .1}],       Table[{\[Lambda]s[\[Theta]s,        2], \[Phi]s[\[Theta]s]}, {\[Theta]s, 0, 360, .1}],       Table[{\[Lambda]s[\[Theta]s,        3], \[Phi]s[\[Theta]s]}, {\[Theta]s, -180, \[Theta]2e, .1}]};

Johnson’s paper presents a sketch of the orbital solution including markers showing the burnout, selected and equivalent positions. It’s easy to reproduce a similar plain diagram here:

ListPlot[points,  PlotStyle -> Gray, PlotRange -> {{0, 360}, {-60, 60}},  Epilog -> {Black, Text[Style[#1, 32, Bold], #2] & @@@                {{"\[EmptySmallCircle]", {\[Lambda]1, \[Phi]1}}, {"\ \[EmptyDiamond]", {\[Lambda]2, \[Phi]2}, {"\[EmptySmallSquare]", {\ \[Lambda]2e, \[Phi]2}}} }} ,  Prolog -> {Arrow[{{333, 33}, {342, 31}}],     Arrow[{{212, 29}, {215, 30} }],                          Text[Style[      "Burnout\n\!\(\*SubscriptBox[\(\[Phi]\), \ \(1\)]\),\!\(\*SubscriptBox[\(\[Lambda]\), \(1\)]\)", 10], {280, 18}],                          Text[Style[      "Selected\nposition\n\!\(\*SubscriptBox[\(\[Phi]\), \ \(2\)]\),\!\(\*SubscriptBox[\(\[Lambda]\), \(2\)]\)", 10], {240, 48}],                          Text[Style[      "Equivalent\nposition\n\!\(\*SubscriptBox[\(\[Phi]\), \ \(2\)]\),\!\(\*SubscriptBox[\(\[Lambda]\), \(2  e\)]\)", 10], {312,       46}]},  GridLines -> {Table[\[Lambda], {\[Lambda], 0, 360, 40}],     Table[\[Phi], {\[Phi], -60, 60, 20}]},  Axes -> False, Frame -> True, ImageSize -> 600, AspectRatio -> 0.7,  FrameLabel -> {"Longitude, \[Lambda], deg", "Latitude, \[Phi], deg"},  FrameTicks -> {{Table[{\[Phi], \[Phi]}, {\[Phi], -60, 60, 20}],      None}, {Table[{\[Lambda], \[Lambda]}, {\[Lambda], 0, 360, 40}],      None}}]


For comparison, here is her original diagram:

Katherine Johnson's original diagram

A more visually useful version can be constructed using GeoGraphics, taking care to convert the geocentric coordinates into geodetic coordinates:

path = {#1,  QuantityMagnitude[GeodesyData["Krassovsky",        {"FromGeocentricLatitude", #2}]]} & @@@ Flatten[points, 1];

GeoGraphics[{Blue, PointSize[0.015],    Point[GeoPosition[{\[Phi]1, \[Lambda]1}]], Red,    Point[GeoPosition[{\[Phi]2, \[Lambda]2}]],   Black, PointSize[0.002], Point[GeoPosition[Reverse /@ path]]},   GeoRange -> "World", GeoZoomLevel -> 3]

How to Calculate Orbits Today

Today, virtually every one of us has, within immediate reach, access to computational resources far more powerful than those available to the entirety of NASA in the 1960s. Now, using only a desktop computer and the Wolfram Language, you can easily find direct numerical solutions to problems of orbital mechanics such as those posed to Katherine Johnson and her team. While perhaps less taxing of our ingenuity than older methods, the results one can get from these explorations are no less interesting or useful.

To solve for the azimuthal angle ψ using more modern methods, let’s set up parameters for a simple circular orbit beginning after burnout over Florida, assuming a spherically symmetric Earth (I’ll not bother trying to match the orbit of the Johnson paper precisely, and I’ll redefine certain quantities from above using the modern SI system of units). Starting from the same low-Earth orbit altitude used by Johnson, and using a little spherical trigonometry, it is straightforward to derive the initial conditions for our orbit:

\[Phi]1 = 28.50 Degree (* latitude of burnout point*); \[Lambda]1 = (279.45 - 360) Degree (* longitude of burnout point *);

(* assuming a circular orbit, and leaving azimuth angle \[Psi] undetermined *)
r0 = QuantityMagnitude[Quantity[21637933., "Feet"], "Meters"] (* radius of the orbit *);
v0 = Sqrt[G M/r0] (* initial velocity of the circular orbit *);
\[ScriptCapitalR] = RotationTransform[\[Lambda]1, {0, 0, 1}] (* used to rotate initial conditions *);
{x0, y0, z0} = \[ScriptCapitalR][{r0 Cos[\[Phi]1], 0, r0 Sin[\[Phi]1]}] (* initial x,y,z at burnout point *);
{vx0, vy0, vz0} = \[ScriptCapitalR][v0 {-Cos[\[Phi]1] Sin[\[Gamma]] - Cos[\[Gamma]] Cos[\[Psi]] Sin[\[Phi]1], Cos[\[Gamma]] Sin[\[Psi]], Cos[\[Gamma]] Cos[\[Phi]1] Cos[\[Psi]] - Sin[\[Gamma]] Sin[\[Phi]1]}] (* initial velocity at burnout point *);

initCond[\[Psi]_, \[Gamma]_] = {x[0] == x0, y[0] == y0, z[0] == z0,
   x'[0] == vx0, y'[0] == vy0, z'[0] == vz0};
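The same burnout-state construction can be sketched outside the Wolfram Language. Here is a minimal Python version of the circular-orbit speed and burnout position; the value of GM⊕ is an assumed standard value rather than the entity-derived one used above:

```python
import math

GM = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2 (assumed standard value)
r0 = 21637933.0 * 0.3048     # orbit radius from the text: 21,637,933 ft in meters
phi1 = math.radians(28.50)   # burnout latitude
lam1 = math.radians(279.45 - 360.0)  # burnout longitude

# Circular-orbit speed: v0 = sqrt(GM/r0)
v0 = math.sqrt(GM / r0)

# Initial position: rotate (r0 cos(phi1), 0, r0 sin(phi1)) about the z axis by lam1
x0 = r0 * math.cos(phi1) * math.cos(lam1)
y0 = r0 * math.cos(phi1) * math.sin(lam1)
z0 = r0 * math.sin(phi1)

print(v0)  # about 7.77 km/s for this low-Earth orbit
```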

The relevant physical parameters can be obtained directly from within the Wolfram Language:

{M, R, G} = QuantityMagnitude[Flatten[{Entity["Planet", "Earth"][{"Mass", "Radius"}],
     Quantity[1, "GravitationalConstant"]}], "SIBase"];

Next, I obtain a differential equation for the motion of our spacecraft, given the gravitational field of the Earth. There are several ways you can model the gravitational potential near the Earth. Assuming a spherically symmetric planet and utilizing a Cartesian coordinate system throughout, the potential is merely:

pot[{x_, y_, z_}] := -G M /Sqrt[x^2 + y^2 + z^2];

Alternatively, you can use a more realistic model of Earth’s gravity, where the planet’s shape is taken to be an oblate ellipsoid of revolution. The exact form of the potential from such an ellipsoid (assuming constant mass-density over ellipsoidal shells), though complicated (containing multiple elliptic integrals), is available through EntityValue:

evg = EntityValue[    Entity["PhysicalSystem", "MassiveTriaxialEllipsoid"],     "GravitationalPotential"];

For a general homogeneous triaxial ellipsoid, the potential contains piecewise functions:

(pw = Cases[evg, _Piecewise, \[Infinity]][[1]]) // TraditionalForm // Style[#, 6] &

Traditional form output

Here, κ is the largest root of x^2/(a^2 + κ) + y^2/(b^2 + κ) + z^2/(c^2 + κ) = 1. In the case of an oblate ellipsoid, the previous formula can be simplified to contain only elementary functions…

Limit[pw[[1, -1]],    QuantityVariable["b","Radius"] -> QuantityVariable[    "a","Radius"]] // FullSimplify

Traditional form output

… where κ = ((2 z^2 (a^2 - c^2 + x^2 + y^2) + (-a^2 + c^2 + x^2 + y^2)^2 + z^4)^(1/2) - a^2 - c^2 + x^2 + y^2 + z^2)/2.

A simpler form that is widely used in the geographic and space science community, and that I will use here, is given by the so-called International Gravity Formula (IGF). The IGF takes into account differences from a spherically symmetric potential up to second order in spherical harmonics, and gives numerically indistinguishable results from the exact potential referenced previously. In terms of four measured geodetic parameters, the IGF potential can be defined as follows:

{a, b} = GeodesyData["ITRF00", #] & /@ {"SemimajorAxis", "SemiminorAxis"};
gPole = 9.8321849378 (* g at Earth's pole in m/s^2 *);
gEquator = 9.78903267715 (* g at Earth's equator in m/s^2 *);
With[{k = b gPole/(a gEquator) - 1, e = Sqrt[1 - b^2/a^2]},
 potIGF[{x_, y_, z_}] :=
  With[{(* latitude *) \[Phi] = ArcTan[Sqrt[x^2 + y^2], z]},
   -G M/Sqrt[x^2 + y^2 + z^2] (1 + k Sin[\[Phi]]^2)/Sqrt[1 - e^2 Sin[\[Phi]]^2]]]
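The structure of this potential is easy to reproduce in any language. Here is a hedged Python sketch; the axis lengths are assumed ITRF-like values (GeodesyData supplies the authoritative ones above), and GM⊕ is an assumed standard value:

```python
import math

# Assumed ellipsoid axes (meters), close to GeodesyData["ITRF00"] values
a = 6378137.0        # semimajor axis
b = 6356752.3        # semiminor axis
g_pole = 9.8321849378      # g at the pole, m/s^2
g_equator = 9.78903267715  # g at the equator, m/s^2

k = b * g_pole / (a * g_equator) - 1.0  # latitude-correction constant
e2 = 1.0 - b * b / (a * a)              # first eccentricity squared

def pot_igf(x, y, z, GM=3.986004418e14):
    """IGF-style potential: point-mass term times a latitude-dependent factor."""
    r = math.sqrt(x*x + y*y + z*z)
    phi = math.atan2(z, math.sqrt(x*x + y*y))  # geocentric latitude
    s2 = math.sin(phi) ** 2
    return -GM / r * (1.0 + k * s2) / math.sqrt(1.0 - e2 * s2)
```

At the equator the correction factor is 1, so the IGF potential reduces exactly to the point-mass value -GM/a there.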

I could easily use even better values for the gravitational force through GeogravityModelData. For the starting position, the IGF potential deviates only 0.06% from a high-order approximation:

{potIGF[{x0, y0, z0}],  GeogravityModelData[GeoPosition[GeoPositionXYZ[{x0, y0, z0}]],     "Potential"] // QuantityMagnitude[#, "SIBase"] &}

{-6.04963*10^7, -6.05363*10^7}

With these functional forms for the potential, finding the orbital path amounts to taking the gradient of the potential to get the gravitational field vector and then applying Newton’s second law. Doing so, I obtain the orbital equations of motion for the two gravity models:

grad = -Grad[pot[{x[t], y[t], z[t]}], {x[t], y[t], z[t]}]; gradIGF = -Grad[potIGF[{x[t], y[t], z[t]}], {x[t], y[t], z[t]}];

Fma = {{x''[t], y''[t], z''[t]} == grad}; FmaIGF = {{x''[t], y''[t], z''[t]} == gradIGF};
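Conceptually, this step is just a = -∇Φ. As a quick sanity check (a Python sketch, independent of the Wolfram Language, with an assumed standard value for GM⊕), a central-difference gradient of the point-mass potential reproduces the familiar g = GM/r²:

```python
import math

GM = 3.986004418e14  # assumed Earth gravitational parameter, m^3/s^2

def pot(x, y, z):
    # point-mass potential, as in the spherically symmetric model above
    return -GM / math.sqrt(x*x + y*y + z*z)

def accel(x, y, z, h=100.0):
    """a = -grad(pot) by central differences (Newton's second law: m a = -m grad Phi)."""
    return (-(pot(x + h, y, z) - pot(x - h, y, z)) / (2*h),
            -(pot(x, y + h, z) - pot(x, y - h, z)) / (2*h),
            -(pot(x, y, z + h) - pot(x, y, z - h)) / (2*h))

r = 6.6e6                      # a low-Earth-orbit radius, meters
ax, ay, az = accel(r, 0.0, 0.0)
g_numeric = math.sqrt(ax*ax + ay*ay + az*az)
g_exact = GM / r**2            # analytic field strength at radius r
```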

I am now ready to use the power of NDSolve to compute orbital trajectories. Before doing this, however, it will be nice to display the orbital path as a curve in three-dimensional space. To give these curves context, I will plot them over a texture map of the Earth’s surface, projected onto a sphere. Here I construct the desired graphics objects:

(*define orbit burnoutDot and burnoutArrow graphics*) burnoutCoords = {x0, y0, z0}; burnoutDirection = {vx0, vy0, vz0}/v0; burnoutDot = {Red, Sphere[burnoutCoords, R/30]}; burnoutArrow = {Red, Arrow[{burnoutCoords,       burnoutCoords + (R/3) burnoutDirection}]}; burnoutLabels[\[Psi]_, \[Gamma]_] =    Graphics3D[{burnoutDot, burnoutArrow}];

(*define globe texture and markings*) earthTexture = Lighter[#, 0.75] &@ ImageReflect[PlanetData["Earth",       "CylindricalEquidistantTexture"], Bottom]; globe = ParametricPlot3D[   R {-Cos[p] Sin[t], -Sin[p] Sin[t], Cos[t]}, {p, 0, 2 Pi}, {t, 0,     Pi}, Mesh -> None, PlotStyle -> Texture[earthTexture],    TextureCoordinateFunction -> Automatic]

pole = Graphics3D[{Thick, Red, Line[{{0, 0, -1.3 R}, {0, 0, 1.3 R}}]}]; equator =    ParametricPlot3D[R {Cos[t], Sin[t], 0}, {t, 0, 2 Pi},     PlotStyle -> Yellow]; globeMarkings = {pole, equator};

While the orbital path computed in an inertial frame forms a closed, periodic curve, the rotation of the Earth causes the spacecraft to pass over different points on the Earth’s surface during each subsequent revolution. I can visualize this effect by adding an additional rotation term to the solutions I obtain from NDSolve. Taking the number of orbital periods to be three (similar to John Glenn’s flight) for visualization purposes, I construct the following Manipulate to see how the orbital path is affected by the azimuthal launch angle ψ, similar to the study in Johnson’s paper. I’ll plot both a path assuming a spherical Earth (in white) and another path using the IGF (in green) to get a sense of the size of the oblateness effect (note that the divergence of the two paths increases with each orbit):

With[{\[Omega] = \[Omega]Capitale/(360/(2 Pi ) 60), T = 3 2 Pi r0/v0,             \[Rho] = Sqrt[x[t]^2 + y[t]^2], \[Phi] =     ArcTan[x[t], y[t]],   \[Gamma] = 0.5 Degree},  Manipulate[   Block[{\[Psi] = \[Psi]s},     sol = NDSolve[      Join[Fma, initCond[\[Psi]s, \[Gamma]]], {x, y, z}, {t,        0, \[Tau] T}];    solIGF =      NDSolve[Join[FmaIGF, initCond[\[Psi]s, \[Gamma]]], {x, y, z}, {t,        0, \[Tau] T}];    Show[ParametricPlot3D[       Evaluate[{\[Rho] Cos[\[Phi] - \[Omega] t], \[Rho] Sin[\[Phi] - \ \[Omega] t],          z[t]} /. {sol[[1]], solIGF[[1]]}], {t, 0, \[Tau] T},        PlotStyle -> {White, Darker[Green]}] /.       l_Line :> Tube[l, 35000],      globe, globeMarkings, burnoutLabels[\[Psi]s, \[Gamma]],     Frame -> None, Ticks -> None,      PlotRange -> {All, All, {-1.2 R, 1.2 R}},      RotationAction -> "Clip", ViewPoint -> {0, -2, 0.4}]],   {{\[Psi]s, 70 Degree, "start angle"}, -Pi, Pi},   {{\[Tau], 1.025, "flight time"}, 0.001, 3},    SaveDefinitions -> True]]

globe manipulate

In the notebook attached to this blog post, you can see this Manipulate in action, and note the speed at which each new solution is obtained. One would hope that Katherine Johnson and her colleagues at NASA would be impressed!

Now, varying the angle ψ at burnout time, it is straightforward to calculate the position of the spacecraft after, say, three revolutions:

posis = Block[{\[Gamma] =       0.5 Degree, \[Omega] = \[Omega]Capitale/(360/(2 Pi ) 60),      T = 3.05 2 Pi r0/v0},    Table[Map[(Most[#] - {0, \[Omega] T 360/(2 Pi)}) &,       GeoPosition[GeoPositionXYZ[{x[T], y[T], z[T]} /.         NDSolve[           Join[FmaIGF, initCond[\[Psi], \[Gamma]]], {x, y, z}, {t, 0,             T}][[1]]]]],     {\[Psi], 50 Degree, 90 Degree, 1 Degree}]];
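The longitude correction subtracted in this code simply accounts for the Earth rotating underneath the orbit during the flight. The expected westward drift of the ground track after three revolutions can be estimated directly; here is a Python sketch using assumed standard values for GM⊕ and the sidereal day:

```python
import math

GM = 3.986004418e14          # assumed Earth gravitational parameter, m^3/s^2
r0 = 21637933.0 * 0.3048     # orbit radius from the text, converted from feet
v0 = math.sqrt(GM / r0)      # circular-orbit speed

T_orbit = 2 * math.pi * r0 / v0      # circular-orbit period, seconds
omega_e = 2 * math.pi / 86164.0905   # Earth's sidereal rotation rate, rad/s

# Westward longitude shift of the ground track after three revolutions
shift_deg = math.degrees(omega_e * 3 * T_orbit)
```

For this orbit the period is roughly 89 minutes, so three revolutions let the Earth rotate about 67 degrees underneath the spacecraft.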

GeoGraphics[{Gray, GeoPath[{"Parallel", 28, {-135, -120}}],   Yellow, Point[posis]}, GeoRange -> Quantity[450, "Miles"],  GeoBackground -> "StreetMap", ImageSize -> 400]

Modeling the Reentry of a Satellite

The movie also mentions Euler’s method in connection with the reentry phase. After the initial problem of finding the azimuthal angle has been solved, as in the previous sections, it’s time to come back to Earth. Rockets are fired to slow down the orbiting body, and a complex sequence of events unfolds as the craft transitions from the vacuum of space to an atmospheric environment. Changing atmospheric density, rapid deceleration and frictional heating all become important factors that must be taken into account in order to return the astronaut safely to Earth. Height, speed and acceleration as functions of time all need to be computed. This set of problems can be solved with Euler’s method, as Katherine Johnson did, or by using the differential equation–solving functionality of the Wolfram Language.

For simple differential equations, one can get a detailed step-by-step solution with a specified quadrature method. An equivalent of Newton’s famous F = m a for a time-dependent mass m(t) is the so-called ideal rocket equation (in one dimension)…

DSolve[{(m0 - \[Beta] t) v'[t] == Subscript[\[ScriptV], ex] \[Beta],    v[0] == vi}, v[t], t]

… where m(t) = m0 − β t is the rocket mass, \[ScriptV]ex the engine exhaust velocity and β the rate at which propellant mass is expelled. Assuming a constant β, the structure of the equation is relatively simple and easily solvable in closed form:

{{v[t] ->     vi + Log[m0] Subscript[\[ScriptV], ex] -      Log[m0 - t \[Beta]] Subscript[\[ScriptV], ex]}}
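This closed form is easy to cross-check numerically in any language. The Python sketch below (with illustrative parameter values of my own, not from the blog) integrates the same ODE with explicit Euler and compares against the logarithmic solution:

```python
import math

# Illustrative parameter values (assumptions): initial mass (kg),
# propellant mass-flow rate (kg/s), exhaust speed (m/s), initial speed (m/s)
m0, beta, v_ex, vi = 1000.0, 5.0, 2500.0, 0.0
t_end = 100.0

def v_exact(t):
    # DSolve's closed form: v(t) = vi + v_ex (Log[m0] - Log[m0 - beta t])
    return vi + v_ex * math.log(m0 / (m0 - beta * t))

# Explicit Euler integration of (m0 - beta t) v'(t) == v_ex beta
steps = 200_000
dt = t_end / steps
v = vi
for i in range(steps):
    v += dt * v_ex * beta / (m0 - beta * (i * dt))

# Tsiolkovsky form: delta-v = v_ex Log[m_i/m_f]
delta_v = v_ex * math.log(m0 / (m0 - beta * t_end))
```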

With initial and final conditions for the mass, I get the celebrated rocket equation (Tsiolkovsky 1903):

FormulaLookup["rocket equation"]



Subscript[v, f] == Log[Subscript[m, i]/Subscript[m, f]] Subscript[v, e] + Subscript[v, i]

FormulaData["RocketEquation", "QuantityVariableTable"]

FormulaData@FormulaLookup["rocket equation"][[1]]

The details of solving this equation with concrete parameter values, e.g. with the classical Euler method, can be obtained from Wolfram|Alpha. Here are those details, together with a detailed comparison with the exact solution as well as with other numerical integration methods:

WolframAlpha["use Euler method (2-t)v'(t)=4, v(0)=0, from t=0 to 0.95"]

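The same toy problem takes only a few lines in any language, and makes the first-order convergence of Euler's method visible directly. A Python sketch (the exact solution v(t) = -4 ln(1 - t/2) follows by separating variables):

```python
import math

# Explicit Euler for (2 - t) v'(t) = 4, v(0) = 0, on [0, 0.95]
def euler(h):
    t, v = 0.0, 0.0
    for _ in range(round(0.95 / h)):
        v += h * 4.0 / (2.0 - t)  # left-endpoint slope
        t += h
    return v

exact = -4.0 * math.log(1.0 - 0.95 / 2.0)  # about 2.577

err_coarse = abs(euler(0.05) - exact)
err_fine = abs(euler(0.005) - exact)       # ten times smaller step, roughly ten times smaller error
```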
Following the movie plot, I will now implement a minimalistic ODE model of the reentry process. I start by defining parameters that mimic Glenn’s flight:

(* Glenn's flight *)
mCapsule = QuantityMagnitude[Entity["MannedSpaceMission", "MercuryAtlas6"][
    EntityProperty["MannedSpaceMission", "Mass"]], "SIBase"]; (* mass of Mercury-Atlas 6 *)
\[Phi]0 = 34.00 Degree; \[Lambda]0 = 241.00 Degree; (* initial latitude and longitude *)
h = QuantityMagnitude[Entity["Satellite", "00240"][
    EntityProperty["Satellite", "AverageElevationWGS84"]], "Meters"];

X0 = (R + h) {Cos[\[Lambda]0] Cos[\[Phi]0], Sin[\[Lambda]0] Cos[\[Phi]0], Sin[\[Phi]0]}; (* position at start of reentry *)
V0 = Sqrt[G M/(R + h)] {-Sin[\[Lambda]0], Cos[\[Lambda]0], 0}; (* velocity at start of reentry *)

I assume that the braking process uses 1% of the thrust of the stage-one engine and runs, say, for 60 seconds. The equation of motion is:

Traditional output

Here, Fgrav is the gravitational force, Fexhaust(t) the explicitly time-dependent engine force and Ffriction(x(t),v(t)) the friction force. The latter depends via the air density explicitly on the position x(t) and via the friction law on v(t).

For the height-dependent air density, I can conveniently use the StandardAtmosphereData function. I also account for a height-dependent area because of the parachute that opened about 8.5 km above ground:

Cd = 1 (* drag coefficient *);
airDensity[X : {_Real, __}] := QuantityMagnitude[
  StandardAtmosphereData[Quantity[Norm[X] - R, "Meters"], "Density"], "SIBase"]
airFriction[X_List, V_List] := -1/2 area[Norm[X] - R] V.V Cd airDensity[X] V/Sqrt[V.V]
brake[V_List, t_, {\[Alpha]_, T_}] := -If[t < T, \[Alpha] Normalize[V], 0]

This gives the following set of coupled nonlinear differential equations to be solved. The last WhenEvent[...] specifies to end the integration when the capsule reaches the surface of the Earth. I use vector-valued position and velocity variables X and V:

area[height_] := If[height > 8500, 10, (* parachute *) 2000];

With these definitions for the weight, exhaust and air friction force terms…

Fgrav[X_, V_, t_] := -G mCapsule M X/Sqrt[X.X]^3
Fexhaust[X_, V_, t_] := brake[V, t, {3000, 60}]
Fairfriction[X_, V_, t_] := airFriction[X, V]

… the total force can be found via:

Ftotal[X_, V_, t_] := Fgrav[X, V, t] + Fexhaust[X, V, t] + Fairfriction[X, V, t]

odeSystem = {X'[t] == V[t],
   mCapsule V'[t] == Ftotal[X[t], V[t], t],
   WhenEvent[Norm[X[t]] - R == 0, "StopIntegration"]};
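To make the structure of such a model concrete outside the Wolfram Language, here is a drastically simplified one-dimensional analogue in Python, integrated with explicit Euler. The exponential atmosphere stands in for StandardAtmosphereData, and starting from rest at 100 km ignores the orbital motion; all parameter values here are illustrative assumptions, not the blog's:

```python
import math

# 1D vertical fall with drag and a parachute opening at 8.5 km (all assumptions)
G0, RHO0, H_SCALE = 9.81, 1.225, 8500.0   # gravity, sea-level density, scale height
m, Cd = 1350.0, 1.0                        # capsule mass (kg), drag coefficient

def area(height):
    # Parachute (2000 m^2) below 8.5 km, bare capsule (10 m^2) above
    return 2000.0 if height < 8500.0 else 10.0

def air_density(height):
    # Exponential atmosphere in place of StandardAtmosphereData
    return RHO0 * math.exp(-height / H_SCALE)

h, v, t, dt = 100e3, 0.0, 0.0, 0.005       # height (m), downward speed (m/s)
while h > 0.0 and t < 5000.0:
    drag = 0.5 * air_density(h) * Cd * area(h) * v * v / m
    v += dt * (G0 - drag)                  # drag opposes the downward motion
    h -= dt * v
    t += dt
```

Even this crude model reproduces the qualitative behavior discussed below: the capsule first accelerates through near-vacuum, decelerates hard where the density rises exponentially, and finally drifts down gently under the parachute.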

In this simple model, I neglected the Earth’s rotation, intrinsic rotations of the capsule, active flight angle changes, supersonic effects on the friction force and more. The explicit form of the differential equations in coordinate components is the following. The equations that Katherine Johnson solved would have been quite similar to these:

(odeSystem /. {X -> ({x[#], y[#], z[#]} &),       V -> ({vx[#], vy[#], vz[#]} &)} /.                              w_WhenEvent :> Evaluate //@ w) //   Column[#, Dividers -> Center] &

Traditional output

Supplemented by the initial position and velocity, it is straightforward to solve this system of equations numerically. Today, this is just a simple call to NDSolve. I don’t have to worry about the method to use, step size control, error control and more because the Wolfram Language automatically chooses values that guarantee meaningful results:

tMax = 5000;
inits = {X[0] == X0, V[0] == V0};
nds = NDSolve[Join[odeSystem, inits], {X, V}, {t, 0, tMax}]


Here is a plot of the height, speed and acceleration as a function of time:

plotXVAT[nds_] :=   With[{T = nds[[1, 1, 2, 1, 1, 2]]/60,     opts = Sequence[ImageSize -> 220, PlotRange -> All]},   GraphicsRow[{     Plot[(Norm[X[60 t]] - R)/1000 /. nds[[1]], {t, 0, T}, opts,       AxesLabel -> { "time (min)", "height (km)"}],     Plot[Norm[V[60 t]]/1000 /. nds[[1]], {t, 0, T}, opts,       AxesLabel -> { "time (min)", "speed (km/s)"}],      Plot[      Norm[Ftotal[X[60 t], V[60 t], 60 t]]/(9.81 mCapsule) /.        nds[[1]], {t, 0, T}, opts,       AxesLabel -> { "time (min)", "acceleration (g)"}]},    Spacings -> 0]]



Plotting as a function of height instead of time shows that the exponential increase of air density is responsible for the high deceleration, not the parachute, which opens at a relatively low altitude. The peak deceleration occurs at a very high altitude, where the capsule goes from vacuum to an atmospheric environment very quickly:

With[{T = nds[[1, 1, 2, 1, 1, 2]]/60,    opts = Sequence[ImageSize -> 220, PlotRange -> All]},   ParametricPlot[   Evaluate[{(Norm[X[60 t]] - R)/1000,       Norm[Ftotal[X[60 t], V[60 t], 60 t]]/(9.81 mCapsule)} /.      nds[[1]]], {t, 0, T}, opts,    AxesLabel -> {"height (km)", "acceleration (g)"},    AspectRatio -> 0.6]]

And here is a plot of the vertical and tangential speed of the capsule in the reentry process:

With[{T = nds[[1, 1, 2, 1, 1, 2]],    opts = Sequence[ImageSize -> 300, PlotRange -> All]},   {Plot[V[t].Normalize[X[t]]/1000 /. nds[[1]], {t, 0, T}, opts,     AxesLabel -> { "time (s)", "vertical speed (km/s)"}],   Plot[(Norm[V[t] - V[t].Normalize[X[t]]  Normalize[V[t]]])/1000 /.      nds[[1]], {t, 0, T}, opts,     AxesLabel -> { "time (s)", "tangential speed (km/s)"}]}]

Now I repeat the numerical solution with a fixed-step Euler method:

ndsEuler =   NDSolve[Join[odeSystem, inits], {X, V}, {t, 0, tMax},    Method -> {"FixedStep", Method -> "ExplicitEuler"},    StartingStepSize -> 0.05]

Qualitatively, the solution looks the same as the previous one:



For the step size used in the time integration, the accumulated error is on the order of a few percent. Smaller step sizes would reduce the error (see the previous Wolfram|Alpha output):

With[{T = nds[[1, 1, 2, 1, 1, 2]]},  Plot[100 ((Norm[X[t]] - R /. nds[[1]])/       (Norm[X[t]] - R /. ndsEuler[[1]]) - 1), {t, 0, T},                AxesLabel -> { "time (s)", "height error (%)"}]]

Note that the landing time predicted by the Euler method deviates only 0.11% from the previous time. (For comparison, if I were to solve the equation with two modern methods, say "BDF" vs. "Adams", the error would be smaller by a few orders of magnitude.)

Now, the reentry process generates a lot of heat; this is where the heat shield is needed. At which height is the most heat per unit area generated? Without a detailed derivation, I can conjecture on purely dimensional grounds that the heat flux scales as q\[CenterDot] ~ \[Rho] v^3:

DimensionalCombinations[{"Speed", "MassDensity"}, "HeatFlux"]

{QuantityVariable["MassDensity","MassDensity"] QuantityVariable[   "Speed","Speed"]^3}
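The dimensional argument can be checked by hand with a few lines of exponent bookkeeping. A Python sketch, with SI base dimensions written as (kg, m, s) exponent tuples:

```python
# Dimension bookkeeping: each quantity is a tuple of exponents (kg, m, s)
def mul(d1, d2):
    # multiplying quantities adds their dimension exponents
    return tuple(a + b for a, b in zip(d1, d2))

def power(d, n):
    # raising a quantity to a power scales its dimension exponents
    return tuple(a * n for a in d)

density = (1, -3, 0)    # kg m^-3
speed = (0, 1, -1)      # m s^-1
heat_flux = (1, 0, -3)  # W m^-2 = kg m^2 s^-3 / (s^0 m^2) = kg s^-3

combo = mul(density, power(speed, 3))   # dimensions of rho * v^3
```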

With[{T = nds[[1, 1, 2, 1, 1, 2]]},  ParametricPlot[   Evaluate[{(Norm[X[t]] - R)/1000, airDensity[X[t]] Norm[V[t]]^3} /.      nds[[1]]], {t, 0, T}, PlotRange -> All, Ticks -> {True, False},               AxesLabel -> { "height (km)", "heat generated (a.u.)"},    AspectRatio -> 0.6]]

Many more interesting things could be calculated (Hicks 2009), but just like the movie had to fit everything into two hours and seven minutes, I will now end my blog for the sake of time. I hope I can be pardoned for the statement that, with the Wolfram Language, the sky’s the limit.

To download this post as a Computable Document Format (CDF) file, click here. New to CDF? Get your copy for free with this one-time download.

How Many Animals and Arp-imals Can One Find in a Random 3D Image? http://blog.wolfram.com/2017/02/23/how-many-animals-and-arp-imals-can-one-find-in-a-random-3d-image/ Thu, 23 Feb 2017 15:16:56 +0000 Michael Trott

And How Many Animals, Animal Heads, Human Faces, Aliens and Ghosts in Their 2D Projections?


In my recent Wolfram Community post, “How many animals can one find in a random image?,” I looked into the pareidolia phenomenon from the viewpoint of pixel clusters in random (2D) black-and-white images. Here are some of the shapes I found, extracted, rotated, smoothed and colored from the connected black pixel clusters of a single 800×800 image of randomly chosen, uncorrelated black-and-white pixels.


For an animation of such shapes arising, changing and disappearing in a random gray-level image with slowly time-dependent pixel values, see here. By looking carefully at a selected region of the image, at the slowly changing, appearing and disappearing shapes, one frequently can “see” animals and faces.

The human mind quickly sees faces, animals, animal heads and ghosts in these shapes. Human evolution has optimized our vision system to recognize predators and identify food. Our recognition of an eye (or a pair of eyes) in the above shapes is striking. For the neuropsychological basis of seeing faces in a variety of situations where actual faces are absent, see Martinez-Conde2016.

A natural question: is this feature of our vision specific to 2D silhouette shapes, or does the same thing happen for 3D shapes? So here, I will look at random shapes in 3D images and at the 2D projections of those 3D shapes. Several of the region-related functions added in recent versions of the Wolfram Language make this task possible, straightforward and fun.

I should explain the word Arp-imals from the title. With the term “Arp-imals” I refer to objects in the style of the sculptures by Jean Arp, meaning smooth, round, randomly curved biomorphic forms. Here are some examples.

personOverview[person_] :=   With[{props = {"Entity", EntityProperty["Person", "Image"],       EntityProperty["Person", "BirthDate"],       EntityProperty["Person", "BirthPlace"],       EntityProperty["Person", "DeathDate"]}},   TextGrid[DeleteMissing[Transpose[{props, person[props]}], 1, 2],                      Dividers -> All, Background -> GrayLevel[0.9]]]

artworkOverview[art_] :=   With[{props = {"Entity", EntityProperty["Artwork", "Image"],       EntityProperty["Artwork", "Artist"],       EntityProperty["Artwork", "StartDate"],       EntityProperty["Artwork", "Owner"]}},   TextGrid[    DeleteMissing[     Transpose[{props, Item[#, ItemSize -> 15] & /@ art[props]}], 1, 2],                      Dividers -> All, Background -> GrayLevel[0.9]]]

Forms such as these hide frequently in 3D images made from random black-and-white voxels. Here is a quick preview of shapes we will extract from random images.

Quick Preview of Shapes

We will also encounter what I call Moore-iens, in the sense of the sculptures by the slightly later artist Henry Moore.

personOverview[Entity["Person", "HenryMoore::96psy"]]

artworkOverview /@ {Entity["Artwork",     "LargeInteriorForm::HenryMoore"],    Entity["Artwork", "KnifeEdgeTwoPiece::HenryMoore"],    Entity["Artwork", "OvalWithPointsPrinceton::HenryMoore"]}

With some imagination, one can also see forms of possible aliens in some of the following 2D shapes. (See Domagal-Goldman2016 for a discussion of possible features of alien life forms.)

As in the 2D case, we start with a random image: this time, a 3D image of voxels of values 0 and 1. For reproducibility, we will seed the random number generator. The Arp-imals are so common that virtually any seed produces them. And we start with a relatively small image. Larger images will contain many more Arp-imals.

Shapes from Random 3D Images

SeedRandom[1]; randomImage =   Image3D[Table[RandomChoice[{6, 1} -> {0, 1}], {20}, {20}, {20}]]

Hard to believe at first, but the blueprints of the above-shown 3D shapes are in the last 3D cube. In the following, we will extract them and make them more visible.

As in the 2D case, we again use ImageMesh to extract connected regions of white cells. The regions still look like a random set of connected polyhedra. After smoothing the boundaries, nicer shapes will arise.

Show[imesh = ImageMesh[randomImage, Method -> "MarchingSquares"],   ImageSize -> 400]

Here are the regions, separated into non-touching ones, using the function ConnectedMeshComponents. The function makeShapes3D combines the image creation, the finding of connected voxel regions, and the region separation.

makeShapes3D[{dimz_, dimy_, dimx_}, {black_, white_}] :=  Module[{randomImage, imesh},   randomImage =     Image3D[Table[      RandomChoice[{black, white} -> {0, 1}], {dimx}, {dimy}, {dimz}]];    imesh = ImageMesh[randomImage, Method -> "MarchingSquares"];                Select[ConnectedMeshComponents@imesh, 10 < Volume[#] < 200 &]]
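The voxel-clustering step itself (random binary 3D image → connected white components) is language independent. Here is a dependency-free Python sketch using a plain breadth-first flood fill; in practice a library routine such as scipy.ndimage's component labeling would do the same job. The white-voxel probability 1/7 mirrors the 6:1 black-to-white weights used above:

```python
import random
from collections import deque

random.seed(1)
n = 20
# Random 3D binary image: white voxels with probability 1/7
img = [[[1 if random.random() < 1/7 else 0 for _ in range(n)]
        for _ in range(n)] for _ in range(n)]

def component_sizes(img):
    """Sizes of 6-connected components of white voxels, via BFS flood fill."""
    n = len(img)
    seen = set()
    sizes = []
    for sx in range(n):
        for sy in range(n):
            for sz in range(n):
                if img[sx][sy][sz] and (sx, sy, sz) not in seen:
                    queue, size = deque([(sx, sy, sz)]), 0
                    seen.add((sx, sy, sz))
                    while queue:
                        x, y, z = queue.popleft()
                        size += 1
                        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            p = (x + dx, y + dy, z + dz)
                            if (all(0 <= c < n for c in p) and p not in seen
                                    and img[p[0]][p[1]][p[2]]):
                                seen.add(p)
                                queue.append(p)
                    sizes.append(size)
    return sizes

sizes = component_sizes(img)
medium = [s for s in sizes if 10 < s < 200]  # analogue of the Volume filter above
```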

For demonstration purposes, in the next example, we use a relatively low density of white voxels to avoid the buildup of a single large connected region that spans the whole cube.

SeedRandom[333]; shapes = makeShapes3D[{20, 20, 20}, {7, 1}]

Here are the found regions individually colored in their original positions in the 3D image.

Show[HighlightMesh[#, Style[2, RandomColor[]]] & /@ shapes,   Boxed -> True]

To smooth the outer boundaries, thereby making the shapes more animal-, Arp-imal- and alien-like, the function smooth3D (defined in the accompanying notebook) is a quick-and-dirty implementation of the Loop subdivision algorithm. (As the 3D shapes might have a higher genus, we cannot use BSplineSurface directly, which would have been the direct equivalent to the 2D case.) Here are successive smoothings of the third of the above-extracted regions.

{sampleRegion,
 Graphics3D[{EdgeForm[], sampleRegionSmooth1 = smooth3D[sampleRegion, 1]}, ImageSize -> {{320}, {320}}]}

{Graphics3D[{EdgeForm[], sampleRegionSmooth2 = smooth3D[sampleRegion, 2]}, ImageSize -> {{320}, {320}}],
 Graphics3D[{EdgeForm[], sampleRegionSmooth3 = smooth3D[sampleRegion, 3]}, ImageSize -> {{320}, {320}}]}

Using the region plot theme "SmoothShading" of the function BoundaryMeshRegion, we can add normals to get the feeling of a genuinely smooth boundary.

shapeF = With[{sr = sampleRegionSmooth3},   BoundaryMeshRegion[sr[[1]],     Style[sr[[2, 1]] ,      Directive[GrayLevel[0.4],       Specularity[RGBColor[0.71, 0.65, 0.26], 12]]],     PlotTheme -> "SmoothShading"]]

And for less than $320 one can obtain this Arp-inspired piece in brass. A perfect, unique, stunning post-Valentine’s gift. For hundreds of alternative shapes to print, see below. We use ShellRegion to reduce the price and save some internal material by building a hollow region.

thinreg = ShellRegion[shapeF]; Printout3D[thinreg, "IMaterialise",   RegionSize -> Quantity[10, "Centimeters"]]

Here is the smoothing procedure shown for another of the above regions.

{sampleRegion2,
 Graphics3D[{EdgeForm[], sampleRegionSmooth21 = smooth3D[sampleRegion2, 1]}, ImageSize -> {{360}, {360}}],
 Graphics3D[{EdgeForm[], sampleRegionSmooth22 = smooth3D[sampleRegion2, 2]}, ImageSize -> {{360}, {360}}]}

And for three more.

With[{sf = Directive[#, Specularity[ColorNegate[#], 10]] &},  Row[{Graphics3D[{EdgeForm[], sf[Red], smooth3D[shapes[[4]], 3]},      ImageSize -> {{360}, {360}},                                 ViewPoint -> {0.08, -3.31, 0.67},      ViewVertical -> {0.00, -0.85, 0.90}],    Graphics3D[{EdgeForm[], sf[Blue], smooth3D[shapes[[8]], 3]},      ImageSize -> {{360}, {360}},                    ViewPoint -> {2.99, 0.66, 1.43},      ViewVertical -> {1.07, 0.90, 0.23}],    Graphics3D[{EdgeForm[], sf[Green], smooth3D[shapes[[13]], 3]},      ImageSize -> {{360}, {360}},                    ViewPoint -> {-2.53, 2.18, 0.49},      ViewVertical -> {-0.93, 0.598, 0.76}]}]]

Many 3D shapes can now be extracted from random and nonrandom 3D images. The next input calculates the region corresponding to lattice points with coprime coordinates.

Graphics3D[{EdgeForm[], Directive[Gray, Specularity[Pink, 12]],
  smooth3D[ConnectedMeshComponents[ImageMesh[
      Image3D[Table[Boole@CoprimeQ[x, y, z],
        {x, -6, 6}, {y, -6, 6}, {z, -6, 6}]],
      Method -> "MarchingSquares"]][[1]], 2]},
 ViewPoint -> {2, -3, 2}, ViewVertical -> {1, 0, 1}, Boxed -> False]
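If you want to play with the coprime-lattice voxel data outside of Mathematica, the same 0/1 array can be sketched in a few lines of Python. This is an illustrative re-implementation only; the function name coprime_voxels is mine, and the pairwise-coprimality test mirrors the behavior of CoprimeQ with several arguments.

```python
from math import gcd
from itertools import combinations

def coprime_voxels(r=6):
    """0/1 voxel array over lattice points (x, y, z) with -r <= x, y, z <= r,
    marking points whose coordinates are pairwise coprime
    (cf. Boole@CoprimeQ[x, y, z] in the Wolfram Language input)."""
    rng = range(-r, r + 1)
    return [[[int(all(gcd(a, b) == 1 for a, b in combinations((x, y, z), 2)))
              for z in rng]
             for y in rng]
            for x in rng]

vox = coprime_voxels(6)
# the pattern is symmetric under sign flips, since gcd ignores signs
print(sum(v for plane in vox for row in plane for v in row), "white voxels")
```

Feeding such an array into Image3D and ImageMesh then reproduces the region above.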

The Importance of Coarse Rasterization and Smoothing

In the above example, we start with a coarse 3D region, which feels polyhedral due to the obvious triangular boundary faces. It is only after the smoothing procedure that we obtain “interesting-looking” 3D shapes. The details of the applied smoothing procedure do not matter, as long as sharp edges and corners are softened.

Human perception is optimized for smooth shapes, and most plants and animals have smooth boundaries. This is why we don’t see anything interesting in the collection of regions returned from ImageMesh applied to a 3D image. This is quite similar to the 2D case. In the following visualization of the 2D case, we start with a set of randomly selected points. Then we connect these points through a curve. Filling the curve yields a deformed checkerboard-like pattern that does not remind us of a living being. Rasterizing the filled curve in a coarse-grained manner still does not remind us of organic shapes. The connected region, and especially the smoothed region, do remind most humans of living beings.

Smoothed Region

The following Manipulate (available in the notebook) allows us to explore the steps and parameters involved in an interactive session.

smooth2D[reg_, col_, d_] :=
 Graphics[{col, (ToExpression[ToString[InputForm@reg], StandardForm, Hold] /.
      HoldPattern[BoundaryMeshRegion[v_, b__, ___Rule]] :>
       GraphicsComplex[v,
        FilledCurve[{b} /. Line[l_] :>
           BSplineCurve[DeleteDuplicates[Flatten[l, 1]],
            SplineClosed -> True, SplineDegree -> d]]])[[1]]}]

Manipulate[  Module[{randomFunction, f1, f2, filledPolygon, ras, im, imesh,     shapes, toShow, map},   Block[{$PerformanceGoal = "Quality"},    randomFunction[m_] :=      Interpolation[      MapIndexed[{(#2[[1]] - 1)/(m + 1), #} &,        Join[#, Take[#, 2]] &@ RandomReal[{0, 1}, {m, 2}]],       InterpolationOrder -> 3];    SeedRandom[seed]; f1 = randomFunction[deg];       f2 = randomFunction[deg];    pp = ParametricPlot[Evaluate[(1 - s) f1[t] + s f2[t]], {t, 0, 1},       PlotStyle -> Directive[Opacity[1], Black], Axes -> False,       PlotRange -> {{-0, 1}, {-0, 1}}] ;    filledPolygon = pp /. Line :> Polygon;    ras = Rasterize[filledPolygon, RasterSize -> {rs, rs},       ImageSize -> {rs, rs}];      im = Image[ras];                   imesh = ImageMesh[ColorNegate[im], Method -> m];     II = imesh;     shapes =      Reverse[SortBy[ConnectedMeshComponents@imesh,        Length[MeshCells[#, 1]] &]];    map[{x_, y_}] := rs {x, y} + {1/2, -1/2};    toShow = {If[sI, Graphics[ras], {}],      If[sP,        Graphics[{Opacity[0.8],          filledPolygon[[1]] /.           Polygon[l_] :> Polygon[Map[map, l, {-2}]]}], {}],      If[sO, Graphics[{Opacity[0.8], Blue, Show[imesh][[1]]}], {}],      If[sR,        Table[smooth2D[shapes[[k]], Directive[Opacity[0.7], rC],          d], {k, Length[shapes]}], {}],      If[sC,        pp /. 
Line[l_] :> {ColorNegate[rC],           Line[Map[map, l, {-2}]]}, {}],      If[sIP,        Graphics[{ Gray, PointSize[Medium],          Point[map /@ ((1 - s) f1[[4, All, 1]] +              s  f2[[4, All, 1]])]}], {}]};    If[toShow === {{}, {}, {}, {}, {}}, Text["nothing to show" ] ,         Graphics[ Rotate[First /@ Flatten[toShow], \[CurlyPhi]],      PlotRangePadding -> 0, ImagePadding -> 0,       PlotRange -> {{-0.05 rs, 1.05 rs}, {-0.05 rs, 1.05 rs}},       ImageSize -> 400]]]],   {{seed, 595}, 1, 10000, 1},  {{deg, 24, "curve degree"}, 2, 36, 1},  {{s, 0.961, "transition"}, 0, 1},  Delimiter,  {{rs, 24, "raster size"}, 10, 60, 1},  Row[{"show: ",     Control[{{sR, True,        "smoothed region" <> FromCharacterCode[62340]}, {True,        False}}], "|  ",                       Control[{{sO, False, "region" <> FromCharacterCode[62340]}, {True,        False}}], "|  ",                      Control[{{sI, False, "raster" <> FromCharacterCode[62340]}, {True,        False}}], "|\n          ",                         Control[{{sP, False,        "polygon" <> FromCharacterCode[62340]}, {True, False}}],     "|  ",                        Control[{{sC, False, "curve" <> FromCharacterCode[62340]}, {True,        False}}], "|  ",                        Control[{{sIP, False,        "points" <> FromCharacterCode[62340]}, {True, False}}]}],  Delimiter,  {{d, 3, "smoothness"}, 0, 8, 1, SetterBar},   {{m, "DualMarchingSquares", "method"}, {"MarchingSquares",     "DualMarchingSquares", "Exact"}},   {{rC, Darker[Green, 0.6], "region color"}, Red, ImageSize -> Small},  Delimiter,  {{\[CurlyPhi], -2.06, "rotation"}, -Pi, Pi},  Delimiter,  Button["random shape", seed = RandomInteger[{1, 1000}];                                                         deg = RandomInteger[{2, 36}];                                                        s = RandomReal[{0, 1}]],  ControlPlacement -> Left,  TrackedSymbols :> True,   SaveDefinitions -> True]
3D Manipulate

And here is a corresponding 3D example.

SeedRandom[1];
Module[{deg = 3, pp = 16, L = 3, \[Delta], p, pts, sol, p1, cp,
  pointsGraphic3D, pointsAndSurface, im2, imesh, sm, ccs, bmr},
 \[Delta] = 2 L/pp;
 p[x_, y_, z_] = (x^2 + y^2 + z^2)^(2 deg) +
   Sum[c[i, j, k] x^i y^j z^k, {i, 0, deg}, {j, 0, deg}, {k, 0, deg}];
 pts = RandomReal[{-1, 1}, {Length@Cases[p[x, y, z], _c, \[Infinity]], 3}];
 sol = Solve[(p @@@ pts) == 0, Cases[p[x, y, z], _c, \[Infinity]]];
 p1 = p[x, y, z] /. sol[[1]];
 cp = ContourPlot3D[Evaluate[p1], {x, -L, L}, {y, -L, L}, {z, -L, L},
   Contours -> {0}];
 L = Ceiling[Max[Abs[Transpose[
      Cases[cp, _GraphicsComplex, \[Infinity]][[1, 1]]]]], 0.2];
 pointsGraphic3D =
  Graphics3D[{Red, Sphere[#, 0.05] & /@ pts}, PlotRange -> L];
 pointsAndSurface =
  Show[{cp = ContourPlot3D[Evaluate[p1], {x, -L, L}, {y, -L, L}, {z, -L, L},
      Contours -> {0}, ContourStyle -> Gray, Lighting -> "Neutral",
      MeshFunctions -> {Norm[{#1, #2, #3}] &}], pointsGraphic3D},
   Axes -> False];
 im2 = Graphics3D[
   Table[If[p1 < 0, {Opacity[0.3], EdgeForm[Blue], Gray, Opacity[0.3],
      Cuboid[{x, y, z}/\[Delta] + pp/2, {x, y, z}/\[Delta] + pp/2 + 1]}, {}],
    {x, -L, L, 2 L/pp}, {y, L, -L, -2 L/pp}, {z, L, -L, -2 L/pp}],
   Lighting -> "Neutral", Axes -> False];
 imesh = ImageMesh[Image3D[Table[Boole[p1 < 0],
     {x, -L, L, 2 L/pp}, {y, L, -L, -2 L/pp}, {z, L, -L, -2 L/pp}]],
   Method -> "MarchingCubes"];
 ccs = Reverse[
   SortBy[ConnectedMeshComponents[imesh], Length[MeshCells[#, 2]] &]];
 sm = smooth3D[ccs[[1]], 2];
 bmr = BoundaryMeshRegion[sm[[1]],
   Style[Cases[sm, _Polygon, \[Infinity]],
    Directive[Opacity[0.5], Darker[Green]]]];
 Column[{Row[{pointsGraphic3D, " \[DoubleLongRightArrow] ",
      pointsAndSurface, " \[DoubleLongRightArrow] "}],
    Row[{im2, " \[DoubleLongRightArrow] ",
      Show[{im2, imesh}, Boxed -> True], " \[DoubleLongRightArrow] "}],
    Row[{Show[{im2, bmr}, Boxed -> True], " \[DoubleLongRightArrow] ",
      Show[bmr, Boxed -> True]}]} /.
   gr_Graphics3D :> Show[gr, ImageSize -> 200]]]
3D Example

Shadows of the 3D Shapes

In her reply to my community post, Marina Shchitova showed some examples of faces and animals in shadows of hands and fingers. Some classic examples from the Cassel1896 book are shown here.

Hand shadows

So, what do projections/shadows of the above two 3D shapes look like? (For a good overview of the use of shadows in art at the time and place of the young Arp, see Forgione1999.)

The projections of these 3D shapes are exactly the types of shapes I encountered in the connected smoothed components of 2D images. The function projectTo2D takes a 3D graphics complex and projects it into thin slices parallel to the three coordinate planes. The result is still a Graphics3D object.

projectTo2D[GraphicsComplex[vs_, r__]] :=
 Module[{f = 0.2, \[CurlyEpsilon] = 10^-2, t = Developer`ToPackedArray,
   xMin, xMax, yMin, yMax, zMin, zMax, \[Delta]x, \[Delta]y, \[Delta]z},
  {{xMin, xMax}, {yMin, yMax}, {zMin, zMax}} = MinMax /@ Transpose[vs];
  {\[Delta]x, \[Delta]y, \[Delta]z} = {xMax - xMin, yMax - yMin, zMax - zMin};
  {EdgeForm[],
   {Darker[Red],
    GraphicsComplex[
     t[{xMin - f \[Delta]x + \[CurlyEpsilon] (#1 - xMin)/\[Delta]x,
         #2, #3} & @@@ vs], r]},
   {Darker[Blue],
    GraphicsComplex[
     t[{#1, yMax + f \[Delta]y + \[CurlyEpsilon] (#2 - yMin)/\[Delta]y,
         #3} & @@@ vs], r]},
   {Darker[Green, 0.6],
    GraphicsComplex[
     t[{#1, #2, zMin - f \[Delta]z +
          \[CurlyEpsilon] (#3 - zMin)/\[Delta]z} & @@@ vs], r]}}]

These are the 2×3 projections of the above two 3D shapes. Most people recognize animal shapes in the projections.

We get exactly these projections if we just look at the 3D shape from a larger distance with a viewpoint and direction parallel to the coordinate axes.

{Graphics3D[{Darker[Blue], EdgeForm[], sampleRegionSmooth2},
  ViewPoint -> {1, -20, 0}],
 Graphics3D[{Darker[Green, 0.6], EdgeForm[], sampleRegionSmooth2},
  ViewPoint -> {1, 0, 20}, ViewVertical -> {0, 1, 0}],
 Graphics3D[{Darker[Red, 0.6], EdgeForm[], sampleRegionSmooth2},
  ViewPoint -> {20, 0, 1}]}

For comparison, here are three views of the first object from very far away, effectively showing the projections.

By rotating the 3D shapes, we can generate a large variety of different shapes in the 2D projections. The following Manipulate allows us to explore the space of projections’ shapes interactively. Because we need the actual rotated coordinates, we define a function rotate, rather than using the built-in function Rotate.

rotationMatrix3D[{\[Alpha]1_, \[Alpha]2_, \[Alpha]3_}] :=
 Module[{c1, s1, c2, s2, c3, s3},
  {c3, s3, c2, s2, c1, s1} =
   N@{Cos[\[Alpha]3], Sin[\[Alpha]3], Cos[\[Alpha]2], Sin[\[Alpha]2],
     Cos[\[Alpha]1], Sin[\[Alpha]1]};
  {{c3, s3, 0}, {-s3, c3, 0}, {0, 0, 1}}.
   {{c2, 0, s2}, {0, 1, 0}, {-s2, 0, c2}}.
   {{1, 0, 0}, {0, c1, s1}, {0, -s1, c1}}]
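The structure of rotationMatrix3D, a product of three planar rotations in the order Rz.Ry.Rx, is easy to check numerically. Here is a minimal Python sketch of the same matrix product (rotation_matrix_3d is a stand-in name of mine, not part of the notebook):

```python
import math

def rotation_matrix_3d(a1, a2, a3):
    """Rz(a3) . Ry(a2) . Rx(a1), following the factor order of rotationMatrix3D."""
    c1, s1 = math.cos(a1), math.sin(a1)
    c2, s2 = math.cos(a2), math.sin(a2)
    c3, s3 = math.cos(a3), math.sin(a3)
    rz = [[c3, s3, 0], [-s3, c3, 0], [0, 0, 1]]
    ry = [[c2, 0, s2], [0, 1, 0], [-s2, 0, c2]]
    rx = [[1, 0, 0], [0, c1, s1], [0, -s1, c1]]
    mul = lambda a, b: [[sum(a[i][k] * b[k][j] for k in range(3))
                         for j in range(3)] for i in range(3)]
    return mul(mul(rz, ry), rx)

# a rotation matrix is orthogonal: R . R^T should be the identity
R = rotation_matrix_3d(0.3, -1.1, 2.0)
err = max(abs(sum(R[i][k] * R[j][k] for k in range(3)) - (i == j))
          for i in range(3) for j in range(3))
print("max |R.R^T - I| =", err)
```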

Here is an array of 16 projections into the x-z plane for random orientations of the 3D shape.

projectToXZImage[GraphicsComplex[vs_, r__]] :=
 Module[{f = 0.2, \[CurlyEpsilon] = 10^-2, t = Developer`ToPackedArray,
   yMin, yMax, \[Delta]y},
  {yMin, yMax} = MinMax@Transpose[vs][[2]]; \[Delta]y = yMax - yMin;
  ImageCrop@Image[Rasterize[
     Graphics3D[{EdgeForm[], Darker[Blue],
       GraphicsComplex[
        t[{#1, yMax + f \[Delta]y + \[CurlyEpsilon] (#2 - yMin)/\[Delta]y,
            #3} & @@@ vs], r]},
      ViewPoint -> {0, -5, 0}, Boxed -> False]]]]

GraphicsGrid[Partition[Show[#, ImageSize -> 120] & /@
   Table[projectToXZImage[
     rotate[sampleRegionSmooth2, RandomReal[{-Pi, Pi}, 3]]], 16], 4],
 Spacings -> {0, 0}]

The initial 3D image does not have to be completely random. In the next example, we randomly place circles in 3D and color a voxel white if a circle intersects it. As a result, the 3D shapes corresponding to the connected voxel regions have a more network-like appearance.

randomCircle[l : {{xmin_, xmax_}, {ymin_, ymax_}, {zmin_, zmax_}}] :=
 Module[{mp = RandomReal /@ l, \[Delta] = Mean[Abs[Subtract @@@ l]],
   dir1, dir2, \[Rho]1, \[Rho]2},
  {dir1, dir2} = Orthogonalize[RandomReal[{-1, 1}, {2, 3}]];
  {\[Rho]1, \[Rho]2} = RandomReal[\[Delta]/2 {0, 1}, 2];
  Circle3D[mp, {\[Rho]1, \[Rho]2}, {dir1, dir2}]]

3D Shapes with Bilateral Symmetry

2D projection shapes of 3D animals typically have no symmetry. Even if an animal is symmetric, the shape visible from a given viewpoint and for a given posture generally is not. But most animals do have a bilateral symmetry, so I will now use random images that have a bilateral symmetry. As a result, many (though not all) of the resulting shapes will also have a bilateral symmetry; some regions do not intersect the symmetry plane. Bilateral symmetry is important for the classic Rorschach inkblot test: “The mid-line appears to attract the patient’s attention with a sort of magical power,” noted Rorschach (Schott2013). The function makeSymmetricShapes3D generates regions with bilateral symmetry.

makeSymmetricShapes3D[{dimz_, dimy_, dimx_}, {black_, white_}] :=
 Module[{ii, randomImage, imesh},
  ii[x_, y_, z_] := (ii[x, y, z] = ii[x, 1 + dimy - y, z] =
     RandomChoice[{black, white} -> {0, 1}]);
  randomImage = Image3D[Table[ii[x, y, z], {x, dimx}, {y, dimy}, {z, dimz}]];
  imesh = ImageMesh[randomImage, Method -> "MarchingCubes",
    CornerNeighbors -> False];
  Select[ConnectedMeshComponents@imesh, 10 < Volume[#] &]]
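The mirroring trick inside makeSymmetricShapes3D, memoizing ii[x, y, z] together with its mirror image ii[x, 1 + dimy - y, z], is language independent. Here is a Python sketch of the same idea (the function name symmetric_voxels is mine); it fills one half of the array and copies the values to the mirror layer:

```python
import random

def symmetric_voxels(dimx, dimy, dimz, p_white=0.5, seed=0):
    """Random 0/1 voxels, mirror-symmetric in y: each random value is written
    both to layer y and to its mirror layer dimy - 1 - y (cf. the memoized
    ii[x, y, z] = ii[x, 1 + dimy - y, z] in makeSymmetricShapes3D)."""
    rng = random.Random(seed)
    vox = [[[0] * dimz for _ in range(dimy)] for _ in range(dimx)]
    for x in range(dimx):
        for y in range((dimy + 1) // 2):          # fill one half only...
            for z in range(dimz):
                v = int(rng.random() < p_white)
                vox[x][y][z] = v
                vox[x][dimy - 1 - y][z] = v       # ...and mirror it
    return vox

vox = symmetric_voxels(4, 5, 3, seed=7)
print(vox[0])   # the y layers of the first x slice read the same up and down
```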

Here are some examples.

SeedRandom[888];
symmShapes = Table[makeSymmetricShapes3D[{d, d, d}, {3, 1}], {d, 5, 8}]

And here are smoothed and colored versions of these regions. The viewpoint is selected in such a way as to make the bilateral symmetry most obvious.

displaySmoothedRegion[reg_BoundaryMeshRegion, color_Directive, opts___] :=
 With[{sm = smooth3D[reg, 2]},
  Show[BoundaryMeshRegion[sm[[1]], Style[sm[[2, 1]], color],
    PlotTheme -> "SmoothShading"], opts]]

To get a better feeling for the connection between the pixel values of the 3D image and the resulting smoothed shape, the next Manipulate allows us to specify each pixel value for a small-sized 3D image. The grids/matrices of checkboxes represent the voxel values of one-half of a 3D image with bilateral symmetry.

Manipulate[  DynamicModule[{v = v0, T, imesh, sb, reg, gList},    Column[{Column[{Text[Style["voxel values", Gray, Italic]],        Row[Join[Riffle[          Table[           With[{j = j},             Underscript[Grid[Table[With[{iL = i, jL = j, kL = k},                Checkbox[Dynamic[v[[iL, jL, kL]]]]], {k, kz}, {i, ix}],               Spacings -> 0],                                                                  Text[Style[Row[{"y", "=", j, If[j == jy + 1 - j, "",                                                         Row[{" | y", "=", jy + 1 - j}]]}], Gray,                Italic]]]], {j, Ceiling[jy/2]}],           "\[VerticalSeparator]"], {" "},         {Dynamic[           If[imesh =!= EmptyRegion[3],             Show[reg, ImageSize -> {{140}, {140}},              ViewPoint -> {-3, 1, 1}], ""],           TrackedSymbols :> {reg, imesh}]}]]}],                      Dynamic[T =        Table[Boole@v[[i, Min[j, jy + 1 - j], k]], {k, kz}, {j, jy}, {i,          ix}];                    imesh = ImageMesh[Image3D[T], Method -> "MarchingSquares"];                 If[imesh =!= EmptyRegion[3],        sb = SortBy[ConnectedMeshComponents@imesh, Volume];            Column[{reg = sb[[-1]];         Graphics3D[smooth3D[reg, sm], ImageSize -> 400,           ViewPoint -> {-3, 1, 1},                               Ticks -> None, Axes -> True,           AxesLabel -> {"x", "y", "z"}]}], "empty region"],      TrackedSymbols :> {v}]}, Dividers -> All]],  Row[{Underscript[Control[{{ix, 5, ""}, 3, 10, 1, SetterBar}],      Style["(x)", Gray]], "\[Times]",          Underscript[Control[{{jy, 5, ""}, 3, 10, 1, SetterBar}],      Style["(y)", Gray]], "\[Times]",               Underscript[Control[{{kz, 6, ""}, 3, 10, 1, SetterBar}],      Style["(z)", Gray]]}],      Delimiter,  {{sm, 1, "smoothness"}, 1, 3, 1, SetterBar},  {{v0, MapAt[True &, Table[False, {10}, {10}, {10}],     {{1, 1, 2}, {2, 1, 2}, {3, 1, 2}, {3, 2, 2}, {3, 3, 2}, {3, 4,        2}, {1, 5, 2},      {2, 5, 2}, {3, 5, 2}, {3, 2, 
3}, {3, 4, 3}, {3, 1, 4}, {3, 3,        4}, {3, 5, 4},      {2, 1, 5}, {3, 3, 5}, {2, 5, 5}, {1, 1, 6}, {1, 3, 6}, {2, 3,        6}, {3, 3, 6}, {1, 5, 6}} ]}, None},  TrackedSymbols :> {ix, jy, kz, sm}, SaveDefinitions -> True]


Randomly and independently selecting the voxel values of a 3D image makes it improbable that very large connected components with few holes form. Using random continuous functions to derive the voxel values instead yields different-looking types of 3D shapes with a larger uniformity over the voxel range; effectively, the voxel values are no longer totally uncorrelated.

makeSymmetricShapes3DFunctionBased[{dimz_, dimy_, dimx_}, G_] :=
 Module[{fun, randomImage, imesh, M = 2 Max[{dimx, dimy, dimz}], x, y, z},
  fun[x_, y_, z_] =
   Sum[Cos[RandomReal[{-M, M}] (y - (dimy + 1)/2)]
     Cos[RandomReal[{-M, M}] x + 2 Pi RandomReal[]]
     Cos[RandomReal[{-M, M}] z + 2 Pi RandomReal[]], {4}];
  randomImage =
   Image3D[Table[If[fun[x, y, z] > G, 0, 1],
     {x, dimx}, {y, dimy}, {z, dimz}]];
  imesh = ImageMesh[randomImage, Method -> "MarchingSquares"];
  Select[ConnectedMeshComponents@imesh, 10 < Volume[#] &]]

Here are some examples of the resulting regions, as well as their smoothed versions.

SeedRandom[55];
symmFunctionShapes =
 Table[makeSymmetricShapes3DFunctionBased[{d, d, d}, -0.3], {d, 5, 8}]

symmFunctionShapes /. bmr_BoundaryMeshRegion :>
  displaySmoothedRegion[bmr,
   Directive[Blend[{GrayLevel[0.5], Orange}, 0.1], Specularity[Purple, 10]],
   ViewPoint -> {-3, -0.5, 1.2}]

Selected Examples of 3D Shapes

The initialization section of our notebook contains more than 400 selected regions of “interesting” shapes, classified (mostly arbitrarily, but based on human feedback) into seven types.

types = <|"asymmetric general shapes" -> aymmetricGeneralShapes,
   "asymmetric animal shapes" -> asymmetricAnimalShapes,
   "symmetric general shapes" -> symmetricGeneralShapes,
   "symmetric animal shapes" -> symmetricAnimalShapes,
   "symmetric alien shapes" -> symmetricAlienShapes,
   "asymmetric function animal shapes" -> asymmetricFunctionAnimalShapes,
   "symmetric function animal shapes" -> symmetricFunctionAnimalShapes|>;

Let’s look at some examples of these regions. Here is a list of some selected ones. Many of these shapes found in random 3D images could be candidates for Generation 8 Pokémon or even some new creatures, tentatively dubbed Mathtubbies.

selections = <|
   "asymmetric general shapes" ->
    {1, 4, 7, 8, 9, 10, 11, 13, 18, 20, 32, 35, 39, 43, 48, 49},
   "asymmetric animal shapes" ->
    {3, 4, 5, 6, 7, 10, 11, 13, 14, 15, 16, 17, 18, 24, 25, 28},
   "symmetric general shapes" ->
    {1, 4, 7, 12, 15, 16, 18, 20, 22, 25, 26, 27, 28, 29, 33, 35, 36, 39, 41, 42},
   "symmetric animal shapes" ->
    {2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 20, 22, 23, 25, 26, 31, 32, 35},
   "symmetric alien shapes" ->
    {2, 4, 5, 6, 8, 9, 13, 15, 17, 18, 19, 20, 26, 30, 38, 39},
   "asymmetric function animal shapes" ->
    {4, 5, 6, 9, 10, 11, 13, 15, 18, 22, 29, 30, 34, 39, 41, 54, 58, 66, 69, 76},
   "symmetric function animal shapes" ->
    {1, 4, 5, 6, 10, 13, 16, 20, 21, 26, 29, 32, 34, 35, 36, 41, 78, 88, 90, 92}|>;

Many of the shapes are reminiscent of animals, even if the number of legs and heads is not always the expected number.

Do[Print[Framed[Style[t, Bold, Gray], FrameStyle -> Gray]];
 Print /@ Partition[
   Show[Rasterize[#], ImageSize -> {{200}, {200}}] & /@
    (makeRegion /@ types[t][[selections[[t]]]]), 4],
 {t, Keys[types]}]

asymmetric general shapes

asymmetric general shapes 1

asymmetric general shapes 2

asymmetric general shapes 3

asymmetric general shapes 4

asymmetric animal shapes

asymmetric animal shapes 1

asymmetric animal shapes 2

asymmetric animal shapes 3

asymmetric animal shapes 4

symmetric general shapes

symmetric general shapes 1

symmetric general shapes 2

symmetric general shapes 3

symmetric general shapes 4

symmetric general shapes 5

symmetric animal shapes

symmetric animal shapes 1

symmetric animal shapes 2

symmetric animal shapes 3

symmetric animal shapes 4

symmetric animal shapes 5

symmetric alien shapes

symmetric alien shapes 1

symmetric alien shapes 2

symmetric alien shapes 3

symmetric alien shapes 4

asymmetric function animal shapes

asymmetric function animal shapes 1

asymmetric function animal shapes 2

asymmetric function animal shapes 3

asymmetric function animal shapes 4

asymmetric function animal shapes 5

symmetric function animal shapes

symmetric function animal shapes 1

symmetric function animal shapes 2

symmetric function animal shapes 3

symmetric function animal shapes 4

symmetric function animal shapes 5

To see all of the 400+ shapes from the initialization cells, one can evaluate the following.

Do[Print[Framed[Style[t, Bold, Gray], FrameStyle -> Gray]];
Do[Print[Rasterize @ makeRegion @ r], {r, types[t]}], {t,Keys[types]}]

The shapes in the list above were manually selected. One could now go ahead and partially automate the finding of interesting animal-looking shapes and “natural” orientations using machine learning techniques. In the simplest case, we could just use ImageIdentify.

ImageIdentify[ , "animal", 5, "Probability"]

This seems to be a stegosaurus-poodle crossbreed. We will not pursue this direction here, but rather return to the 2D projections. (For using software to find faces in architecture and general equipment, see Hong2014.)

Modifying the 3D Shapes

Before returning to the 2D projections, we will play for a moment with the 3D shapes generated and modify them for a different visual appearance.

For instance, we could tetrahedralize the regions and fill the tetrahedra with spheres.

makeRegion[reg_, n_] :=   With[{sr = smooth3D[reg[[1]], n]},    BoundaryMeshRegion[sr[[1]], sr[[2, 1]]]]

Or with smaller tetrahedra.

dualTetrahedron[Tetrahedron[l_]] :=   Tetrahedron[ Mean /@ Subsets[l, {3}]]

Or add some spikes.

addPrickle[Polygon[{p1_, p2_, p3_}], \[Alpha]_: 1 ] :=   Module[{mp = Mean[{p1, p2, p3}], normal, \[Lambda]},   normal = Normalize[Cross[p1 - mp, p2 - mp]];   \[Lambda] = Mean[EuclideanDistance[#, mp] & /@ {p1, p2, p3}];   Tetrahedron[{p1, p2, p3, mp + \[Alpha] \[Lambda] normal}] ]
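The geometry inside addPrickle (face centroid, unit normal via a cross product, and a spike length proportional to the mean vertex-to-centroid distance) can be checked with a short Python sketch. Only the apex computation is reproduced here; prickle_apex is a hypothetical name of mine:

```python
import math

def prickle_apex(p1, p2, p3, alpha=1.0):
    """Apex of the spike erected over a triangular face, as in addPrickle:
    centroid + alpha * (mean vertex-to-centroid distance) * unit normal."""
    mp = tuple(sum(c) / 3 for c in zip(p1, p2, p3))
    u = tuple(a - b for a, b in zip(p1, mp))
    v = tuple(a - b for a, b in zip(p2, mp))
    n = (u[1] * v[2] - u[2] * v[1],          # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    nlen = math.sqrt(sum(c * c for c in n))
    lam = sum(math.dist(p, mp) for p in (p1, p2, p3)) / 3
    return tuple(m + alpha * lam * c / nlen for m, c in zip(mp, n))

# the spike over the unit triangle in the xy plane points straight up
print(prickle_apex((0, 0, 0), (1, 0, 0), (0, 1, 0)))
```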

Or fill the shapes with cubes.

makeRandomPoints[d_, n_] := RandomPoint[makeRegion[d, 2], n]

Or thicken or thin the shapes.

thickenThinnen[gr_, d_] :=   Show[gr] /.    GraphicsComplex[vs_, b_, VertexNormals -> ns_] :>     GraphicsComplex[ vs + d Normalize /@ ns, b, VertexNormals -> ns]

Or thicken and add thin bands.

Module[{ob = symmetricAlienShapes[[43]], dr, dd},
 dr = SignedRegionDistance[ob[[1]]];
 dd[{x_Real, y_Real, z_Real}] := dr[{x, y, z}];
 Row[{Show[makeRegion[ob], ImageSize -> 240],
   ContourPlot3D[dd[{x, y, z}], {x, 0, 9}, {y, -1, 9}, {z, -1, 8},
    Contours -> {0.33}, PlotPoints -> 80, MaxRecursion -> 0,
    MeshFunctions -> {#3 &}, Mesh -> 40, MeshShading -> {ob[[2]], None},
    Evaluate[makeOptions[ob]], Boxed -> False, Axes -> False,
    ImageSize -> 320]}]]

Or just add a few stripes as camouflage.

tigerize[{reg_, col_, {vp_, vd_}}, {col1_, col2_}, {stripes_, xyz_}] :=
 Module[{sm = smooth3D[reg, 3], g, size},
  g = Show[BoundaryMeshRegion[sm[[1]], sm[[2, 1]],
     PlotTheme -> "SmoothShading"], ViewPoint -> vp, ViewVertical -> vd];
  size = Abs[Subtract @@ MinMax[Transpose[sm[[1]]][[xyz]]]];
  g /. GraphicsComplex[vs_, rest__] :> GraphicsComplex[vs, rest,
     VertexColors ->
      (Blend[{col1, col2}, Sin[2 Pi stripes #[[xyz]]/size]^2] & /@ vs)]]

Or model the inside through a wireframe of cylinders.

makeCylinders[pts_, m_, \[Rho]_] := Module[{nf = Nearest[pts]},     {Union[Flatten[      Function[p,         Cylinder[Sort@{#, p}, \[Rho]] & /@  Rest[ nf[p, m + 1]]] /@        pts]],     Sphere[#, \[Rho]] & /@ pts} ]
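The nearest-neighbor wiring in makeCylinders (connect every point to its m nearest other points, then deduplicate edges the way Union[Sort@...] does) translates directly to other languages. Here is a brute-force Python sketch (wireframe_edges is my name; a real implementation would use a spatial index, as Nearest does):

```python
def wireframe_edges(pts, m):
    """Connect each point to its m nearest other points, deduplicating each
    undirected pair (brute force, fine for small point sets)."""
    edges = set()
    for i, p in enumerate(pts):
        by_dist = sorted(range(len(pts)),
                         key=lambda j: sum((a - b) ** 2
                                           for a, b in zip(p, pts[j])))
        for j in by_dist[1:m + 1]:           # by_dist[0] is the point itself
            edges.add(tuple(sorted((i, j))))
    return sorted(edges)

pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]
print(wireframe_edges(pts, 1))
```

Each resulting index pair would then become one Cylinder of the wireframe.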

Or build a stick figure.

toStickFigure[ob_, \[Delta]_] :=
 Module[{pts, nf, gr, ccs, modCol,
   f = RandomChoice[{Lighter, Darker}][#, RandomReal[{0, 0.2}]] &},
  nf = Nearest[
    pts = Cases[makeRegion[ob], _GraphicsComplex, \[Infinity]][[1, 1]]];
  gr = Graph[
    UndirectedEdge[#, nf[#, {Infinity, \[Delta]}][[-1]]] & /@ pts];
  ccs = WeaklyConnectedGraphComponents[gr];
  modCol[] := ob[[2]] /. Directive[col1_, Specularity[col2_, e2_]] :>
     Directive[f[col1], Specularity[f[col2], RandomReal[{0.75, 1.25}] e2]];
  Graphics3D[{EdgeForm[], CapForm[None],
    {modCol[], Cylinder[Union[Sort /@ List @@@ EdgeList[#]], 0.05]} & /@
     Take[ccs, All],
    ob[[2]], Sphere[#, 0.05] & /@ pts}, makeOptions[ob], Boxed -> False,
   Method -> {"TubePoints" -> 6, "SpherePoints" -> 6}]]

Or fill the surface with a tube.

makeTube[ob_, n_, \[Rho]_] :=
 Module[{dr = makeRegion[ob, 1], pairs, neighbors, nl, mcs},
  pairs = {#[[1, 1]], Last /@ #} & /@ Split[Sort[Flatten[{First[#],
         Reverse[First[#]]} & /@ MeshCells[dr, 1], 1]], #1[[1]] == #2[[1]] &];
  (neighbors[#1] = #2) & @@@ pairs;
  nl = NestList[RandomChoice[DeleteCases[neighbors[#], #]] &, 1, n];
  mcs = MeshCoordinates[dr];
  Tube[BSplineCurve[mcs[[nl]]], \[Rho]]]

Or a Kelvin inversion.

With[{g = With[{o = aymmetricGeneralShapes[[50]]},
    With[{sm = smooth3D[o[[1]], 3]},
     Show[BoundaryMeshRegion[sm[[1]], Style[sm[[2, 1]], o[[2]]],
       PlotTheme -> "SmoothShading"]]]]},
 {Row[{Show[g, ImageSize -> 240],
    Show[invert3D[g, {4, 4, 4}], ViewPoint -> {2.62, -2.06, -0.52},
     ViewVertical -> {-0.04, -0.92, -0.42}, ImageSize -> 280]}]}]

Shadows of the Selected Examples

If we look at the 2D projections of some of these 3D shapes, we can see again (with some imagination) a fair number of faces, witches, kobolds, birds and other animals. Here are some selected examples. We show the 3D shape in the original orientation, a randomly oriented version of the 3D shape, and the three coordinate-plane projections of the randomly rotated 3D shape.

projectionPair[{{type_, n_}, angles_}] :=
 Module[{opts, col, sr},
  opts = Sequence[ImageSize -> {{220}, {220}}, BoxRatios -> {1, 1, 1},
    ViewPoint -> {3, -3, 3}, Axes -> False, Boxed -> False];
  col = types[type][[n, 2]];
  sr = smooth3D[types[type][[n, 1]], 3];
  Row[Riffle[Framed /@ Rasterize /@
     {Graphics3D[{EdgeForm[], col, sr},
       ViewPoint -> types[type][[n]][[3, 1]],
       ViewVertical -> types[type][[n]][[3, 2]],
       ImageSize -> {{220}, {220}}, Axes -> False, Boxed -> False],
      Graphics3D[{EdgeForm[], col, rotate[sr, angles]}, opts],
      Graphics3D[projectTo2D[rotate[sr, angles]], opts]}, " "]]]

Unsurprisingly, some projections are immediately recognizable, like these that look like bird heads.

projectionPair[{{"asymmetric animal shapes", 15}, {-2.8, 3.05, 2.35}}]

Others are much more surprising, like the two heads in the projections of the two-legged-two-finned frog-dolphin.

projectionPair[{{"symmetric general shapes", 34}, {2.8, -1.4, 1.4}}]

Different orientations of the 3D shape can yield quite different projections.

projectionPair[{{"asymmetric general shapes",     49}, {-3.05, -0.75, -1.3}}]

For the reader’s amusement, here are some more projections.

projectionPair[{{"symmetric alien shapes", 3}, {-0.4, -0.25, 0.85}}]

projectionPair[{{"symmetric alien shapes", 7}, {0., 2.55, 0.6}}]

projectionPair[{{"asymmetric general shapes", 11}, {-1.25,     0.05, -1.6}}]

projectionPair[{{"asymmetric general shapes",     9}, {-0.15, -0.85, -0.55}}]

projectionPair[{{"symmetric general shapes", 26}, {1.8, -2.6, -2.3}}]

projectionPair[{{"asymmetric animal shapes", 5}, {2.65, 2.1, -2.85}}]

projectionPair[{{"asymmetric general shapes",     34}, {-3.1, -2.95, -1.}}]

Shapes from 4D Images

Now that we have looked at 2D projections of 3D shapes, the next natural step is to look at 3D projections of 4D shapes. And while there is currently no built-in function Image4D, it is not too difficult to find the connected components of white 4D voxels ourselves. We implement this through the graph theory function ConnectedComponents, considering two 4D voxels connected by an edge if they share a common 3D cube face. As an example, we use a 10×10×10×10-voxel 4D image. makeVoxels4D makes the 4D image data, and whitePositionQ marks the positions of the white voxels for quick lookup.

makeVoxels4D[{dimw_, dimz_, dimy_, dimx_}, {black_, white_}] :=
 Table[RandomChoice[{black, white} -> {0, 1}],
  {dimw}, {dimz}, {dimy}, {dimx}]

The 4D image contains quite a few connected components.

ccs = ConnectedComponents[gr];

Here are the four canonical projections of the 4D complex.

With[{cc = ccs[[1]]},
 {Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#1, #2, #3}) & @@@ cc,
   AxesLabel -> {"x", "y", "z"}, Axes -> True, Ticks -> False],
  Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#1, #2, #4}) & @@@ cc,
   AxesLabel -> {"x", "y", "w"}, Axes -> True, Ticks -> False],
  Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#1, #3, #4}) & @@@ cc,
   AxesLabel -> {"x", "z", "w"}, Axes -> True, Ticks -> False],
  Graphics3D[(Cuboid[# - 1/2, # + 1/2] &@{#2, #3, #4}) & @@@ cc,
   AxesLabel -> {"y", "z", "w"}, Axes -> True, Ticks -> False]}]

We package the finding of the connected components into a function getConnected4DVoxels.

getConnected4DVoxels[Image4D[l_], n_] :=
 Module[{posis, blackPos, edges, gr, v = UnitVector[4, #] &},
  posis = DeleteCases[
    Level[MapIndexed[If[# === 0, #2, Nothing] &, l, {-1}], {-2}], {}];
  (blackPos[#] = True) & /@ posis;
  edges = Union[Flatten[Table[If[TrueQ[blackPos[# + v[j]]],
        Sort@UndirectedEdge[#, # + v[j]], {}] & /@ posis, {j, 4}]]];
  gr = Graph[edges];
  Take[Reverse[SortBy[ConnectedComponents[gr], Length]], UpTo[n]]]
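The same face-adjacency rule can also be sketched without graph machinery. This Python breadth-first search (connected_4d_components is a name of mine) groups 4D voxels that differ by 1 in exactly one coordinate; unlike the Graph-based version above, it also keeps isolated voxels as size-1 components:

```python
from collections import deque

def connected_4d_components(white_voxels):
    """Group white 4D voxels into connected components; two voxels are
    adjacent when they differ by 1 in exactly one coordinate, i.e. they
    share a common 3D cube face."""
    white = set(white_voxels)
    seen, components = set(), []
    for start in white:
        if start in seen:
            continue
        seen.add(start)
        queue, comp = deque([start]), []
        while queue:
            v = queue.popleft()
            comp.append(v)
            for axis in range(4):
                for d in (-1, 1):
                    nb = tuple(c + d if k == axis else c
                               for k, c in enumerate(v))
                    if nb in white and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        components.append(comp)
    return sorted(components, key=len, reverse=True)

print([len(c) for c in connected_4d_components(
    [(0, 0, 0, 0), (1, 0, 0, 0), (0, 0, 1, 1)])])
```

The third voxel differs from the origin in two coordinates, so it forms its own component.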

We also define a function rotationMatrix4D for conveniently carrying out rotations in the six 2D planes of 4D space.

rotationMatrix4D[{\[Omega]xy_, \[Omega]xz_, \[Omega]xw_, \[Omega]yz_, \[Omega]yw_, \[Omega]zw_}] :=
 With[{u = UnitVector[4, #] &, c = Cos, s = Sin},
  Fold[Dot, IdentityMatrix[4],
   {{{c[\[Omega]xy], s[\[Omega]xy], 0, 0}, {-s[\[Omega]xy], c[\[Omega]xy], 0, 0}, u[3], u[4]},
    {{c[\[Omega]xz], 0, s[\[Omega]xz], 0}, u[2], {-s[\[Omega]xz], 0, c[\[Omega]xz], 0}, u[4]},
    {{c[\[Omega]xw], 0, 0, s[\[Omega]xw]}, u[2], u[3], {-s[\[Omega]xw], 0, 0, c[\[Omega]xw]}},
    {u[1], {0, c[\[Omega]yz], s[\[Omega]yz], 0}, {0, -s[\[Omega]yz], c[\[Omega]yz], 0}, u[4]},
    {u[1], {0, c[\[Omega]yw], 0, s[\[Omega]yw]}, u[3], {0, -s[\[Omega]yw], 0, c[\[Omega]yw]}},
    {u[1], u[2], {0, 0, c[\[Omega]zw], s[\[Omega]zw]}, {0, 0, -s[\[Omega]zw], c[\[Omega]zw]}}}]];
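Because each factor acts in a single coordinate plane and leaves the complementary plane fixed, the composed matrix is always orthogonal with determinant 1, i.e. a genuine 4D rotation. A quick numerical cross-check of the orthogonality property, sketched in plain Python with arbitrary angle values:

```python
from math import cos, sin, isclose

def plane_rotation(i, j, angle):
    """4x4 matrix rotating the (i, j) coordinate plane, identity elsewhere."""
    m = [[float(r == c) for c in range(4)] for r in range(4)]
    m[i][i], m[i][j] = cos(angle), sin(angle)
    m[j][i], m[j][j] = -sin(angle), cos(angle)
    return m

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def rotation_4d(angles):
    """Compose rotations in the six planes xy, xz, xw, yz, yw, zw."""
    planes = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    m = [[float(r == c) for c in range(4)] for r in range(4)]
    for (i, j), angle in zip(planes, angles):
        m = matmul(m, plane_rotation(i, j, angle))
    return m

# R^T . R should be the identity for any choice of the six angles.
r = rotation_4d([0.3, -1.1, 0.7, 2.0, -0.4, 1.6])
transpose = [list(row) for row in zip(*r)]
residual = matmul(transpose, r)
print(all(isclose(residual[i][j], float(i == j), abs_tol=1e-12)
          for i in range(4) for j in range(4)))  # True
```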

Once we have the 3D projections, we can again use the above function to smooth the corresponding 3D shapes.

to3DImage[l_] := With[{mins = Min /@ Transpose[l]}, (# - mins) + 1 & /@ l]

In the absence of Tralfamadorian vision, we can visualize a 4D connected voxel complex by rotating it in 4D, projecting into 3D, smoothing the shapes and then projecting into 2D. For a single 4D shape, this yields a large variety of possible 2D projections. The function projectionGrid3DAnd2D projects each of the four canonical 3D projections canonically into 2D, giving 12 projections in all. Depending on the shape of the body, some of them might be identical.
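The count of 12 is simple combinatorics: dropping one of the four axes gives the 4 canonical 3D projections, and each of those drops one of its remaining three axes to give 3 canonical 2D projections, for 4 × 3 = 12. Enumerating them explicitly (a Python sketch for illustration):

```python
from itertools import combinations

axes = ("x", "y", "z", "w")

# Canonical 3D projections: keep three of the four axes.
projections_3d = list(combinations(axes, 3))
print(projections_3d)
# [('x', 'y', 'z'), ('x', 'y', 'w'), ('x', 'z', 'w'), ('y', 'z', 'w')]

# Each 3D projection then projects canonically onto three 2D planes.
projections_2d = [(triple, pair)
                  for triple in projections_3d
                  for pair in combinations(triple, 2)]
print(len(projections_2d))  # 12
```

The four triples match the axis labels of the four Graphics3D panels shown above.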

extractRegion[vs_] := Last[SortBy[
   ConnectedMeshComponents[
    ImageMesh[Image3D[SparseArray[vs -> 1]], Method -> "MarchingCubes"]],
   Volume]]

We show the 3D shape in a separate graphic so as not to cover up the projections. Again, many of the 2D projections, and also some of the 3D projections, remind us of animal shapes.

projectionGrid3DAnd2D[ccs[[1]], {1, 2, 3, 4, 5, 6}, 2,   Directive[GrayLevel[0.4], Specularity[Yellow, 12]]]

The following Manipulate allows us to rotate the 4D shape. The human mind sees many animal shapes and faces.

Manipulate[
 projectionGrid3DAnd2D[ccs[[c]],
  {\[Omega]xy, \[Omega]xz, \[Omega]xw, \[Omega]yz, \[Omega]yw, \[Omega]zw}, 1,
  Directive[GrayLevel[0.4], Specularity[Yellow, 12]]],
 {{c, 2, "component"}, 1, 12, 1, SetterBar}, Delimiter,
 {{s, 1, "smoothness"}, {0, 1, 2}}, Delimiter,
 {{\[Omega]xy, 0}, -Pi, Pi, ImageSize -> Small},
 {{\[Omega]xz, 0}, -Pi, Pi, ImageSize -> Small},
 {{\[Omega]xw, 0}, -Pi, Pi, ImageSize -> Small},
 {{\[Omega]yz, 0}, -Pi, Pi, ImageSize -> Small},
 {{\[Omega]yw, 0}, -Pi, Pi, ImageSize -> Small},
 {{\[Omega]zw, 0}, -Pi, Pi, ImageSize -> Small},
 TrackedSymbols :> True, ControlPlacement -> Left,
 SaveDefinitions -> True]

Here is another example, with some more scary animal heads.

SeedRandom[8];
projectionGrid3DAnd2D[
 getConnected4DVoxels[Image4D[makeVoxels4D[{10, 10, 10, 10}, {4, 1}]], 5][[1]],
 {-1.8, 2.6, 1., 2.2, -2.7, -1.5}, 3,
 Directive[Darker[Yellow, 0.4], Specularity[Red, 10]]]

We could now go on to 5D images, but that would very probably bring no new insights. To summarize the findings: after rotation and smoothing, a few percent of the connected regions of black voxels in random 3D images have an animal-like shape, or at least an artistic rendering of an animal-like shape. A large fraction (~10%) of the 2D projections of these 3D shapes pronouncedly show the pareidolia phenomenon, in the sense that we believe we can recognize animals and faces in them. And because the voxel count increases exponentially with dimension, 4D images yield an even larger number of possible animal and face shapes.

To download this post as a CDF, click here. New to CDF? Get your copy for free with this one-time download.

Analyzing and Translating an Alien Language: Arrival, Logograms and the Wolfram Language http://blog.wolfram.com/2017/01/31/analyzing-and-translating-an-alien-language-arrival-logograms-and-the-wolfram-language/ http://blog.wolfram.com/2017/01/31/analyzing-and-translating-an-alien-language-arrival-logograms-and-the-wolfram-language/#comments Tue, 31 Jan 2017 15:30:32 +0000 Michael Gammon http://blog.internal.wolfram.com/?p=34787 Black and white logogram

If aliens actually visited Earth, world leaders would bring in a scientist to develop a process for understanding their language. So when director Denis Villeneuve began working on the science fiction movie Arrival, he and his team turned to real-life computer scientists Stephen and Christopher Wolfram to bring authentic science to the big screen. Christopher specifically was tasked with analyzing and writing code for a fictional nonlinear visual language. On January 31, he demonstrated the development process he went through in a livecoding event broadcast on LiveEdu.tv.

video link

Color logogram

Scientists and general viewers alike were interested in the story of the Wolframs’ behind-the-scenes contributions to the movie, from Space.com to OuterPlaces.com and others. SlashFilm.com went further, pointing readers to the Science vs. Cinema Arrival episode featuring interviews with the Wolframs, other scientists, Jeremy Renner, Amy Adams and Villeneuve. Wired magazine also interviewed Christopher Wolfram on the subject of the Wolfram Language code he created to lend validity to the computer screens shown in the film. Watch Christopher Wolfram walk you through his development process.

Wolfram Research has a track record of contributing to film and TV. From the puzzles in the television show NUMB3RS to the wormhole experience in Interstellar, Wolfram technology and expertise have enriched some beloved popular art and entertainment. With Arrival, however, Stephen and Christopher consulted more extensively on what Stephen calls “the science texture” of the film.

Science and technology shape our world now more than ever. Science fiction movies are finding a wider audience, and we find these stories are crafted into films by some of the most skilled filmmakers around. If filmmakers such as Villeneuve continue to recognize the importance of getting the science right, science fiction will continue to live up to Arthur C. Clarke’s claim that “science fiction is escape into reality…. [It] concern[s] itself with real issues: the origin of man; our future.”

For more information on the Wolframs’ involvement in Arrival, read Stephen Wolfram’s blog post, “Quick, How Might the Alien Spacecraft Work?”

Meet the Authors of Hands-on Start to Wolfram Mathematica, Second Edition http://blog.wolfram.com/2017/01/24/meet-the-authors-of-hands-on-start-to-wolfram-mathematica-second-edition/ http://blog.wolfram.com/2017/01/24/meet-the-authors-of-hands-on-start-to-wolfram-mathematica-second-edition/#comments Tue, 24 Jan 2017 17:29:49 +0000 Jeremy Sykes http://blog.internal.wolfram.com/?p=34714 Hands-on Start cover

Jeremy Sykes: To celebrate the release of Hands-on Start to Wolfram Mathematica and Programming with the Wolfram Language (HOS2), now in its second edition, I sat down with the authors. Working with Cliff, Kelvin and Michael as the book’s production manager has been an easy and engaging process. I’m thrilled to see the second edition in print, particularly now in its smaller, more conveniently sized format.

Q: Let’s start with Version 11. What’s new for Version 11 in HOS2 that you’d like to talk about?

Michael: As with any major Mathematica release, there are more new things to talk about than can be discussed in the time we have available. But I’m getting a lot of use out of the new graphics capabilities—the new labeling system, the ability to have callouts on a graph, word clouds, enhanced geographical visualizations and even things you don’t think about, such as the removal of discontinuities when plotting things like tan(x). The second edition of the book also includes updates for working with data, like the ability to process audio information, working with linguistic data, new features for date computation and preparing output for 3D printing. (Also new is an index, to make it easier to find specific topics.)

Q: Getting back to basics, I know that HOS existed in some form long before the book came out. Maybe you could fill out some of the history for us.

Cliff: Twenty years ago, I started at Wolfram on the MathMobile, traveling from city to city and visiting organizations (mostly universities, along with some companies and government labs). The MathMobile was a 30-foot trailer (pulled by a truck) with three laptop stations where people would come in to see Mathematica in action in this mobile computer lab. My job was to walk people through how to get started with Mathematica, sometimes answering technical questions for existing users and sometimes giving a first overview for non-users. Afterward, I worked in technical support and then in sales, and through these experiences I got to see many kinds of first-time interactions with Mathematica. That is where my passion for helping people get started with Mathematica began. Several years ago, I came up with the idea for a free video series showing people how to get started. It proved extremely popular, and many requests for a book version followed, so we translated the video series into a book.

Q: Tell me a bit about the partnership behind CKM Media and how you came together for the project.

Cliff: In the late 1990s, Kelvin and I began working closely together on many Wolfram projects relating to academia. We found a lot of shared ideas and approaches to problems, bringing very different strengths to those projects. I tended to look at things from a liberal-arts-college perspective and that of a math student who had strong math skills but not a lot of programming experience. Kelvin often came at things more from the mindset of an engineer at a research university. We found that these different mindsets helped ensure that more members of academia were well represented in those projects. Michael started at Wolfram in the mid-2000s. The three of us worked closely together after his hire. Michael brought a computer science mindset with a focus on data analytics and programmatic solutions to real-world problems. So while we have been good friends for about fifteen years, we also bring such different skill sets to projects and feel we make for a great collaborative team for this book.

Q: What makes HOS a good Mathematica teaching tool?

Kelvin: I think it’s a good teaching tool from two perspectives—one, it’s extremely useful for teaching anyone how to get started with Mathematica. We’ve had lots of great feedback from students, teachers, professors and lots of different types of people in the government and commercial sectors. But also, it’s been a great tool for the classroom. Over the years, we’ve learned a lot from the free Hands-on Start video series. The comments and feedback from educators using the videos for their classes helped shape the philosophy of the book. What we wanted was a slow buildup of material that works well for non-users, and specifically for non-users without any coding experience. As the chapters progress, the examples get more intricate and more interesting, using multiple Mathematica functions. At the same time, we wanted the first few chapters to also show a complete sample project in Mathematica. Then, when syntax conventions are covered, they are framed with a discussion about why that convention is useful for a project. The second thing we wanted to do was show the scope of Mathematica and the Wolfram Language.

There are many good books and tutorials for learning Mathematica, but they often focus on one field or class. Our team wanted to provide a good foundation for how to use Mathematica and the Wolfram Language for a broad range of applications. Even Mathematica users who have focused on a select few functions could learn how to use Mathematica in new types of applications or projects. And it’s been fun to see the results so far.

The first edition has been a recommended or required text in classes like chemistry, economics, physics and mathematics, and in classes specific to teaching Mathematica or the Wolfram Language itself.

Hands-on Start authors

Jeremy: HOS2 is available from our webstore and on Amazon, both as a beautiful, perfect-bound 7×10 paperback and as a fully updated Kindle version. For those who buy the printed book, we have enrolled it in Kindle’s MatchBook program, which lets you buy the ebook at a reduced cost. We also have plans to release on iTunes. For our international users, we plan to release translated versions of HOS2 in Japanese, Chinese and other languages.

Be sure to check out our upcoming series of Hands-on Start to Wolfram Mathematica Training Tutorials. Learn directly from the authors of the book and ask questions during the interactive Q&A. Visit Wolfram Research’s Facebook event page to learn more about upcoming events.

Exploring a Boxing Legend’s Career with the Wolfram Language: Ali at 75 http://blog.wolfram.com/2017/01/17/exploring-a-boxing-legends-career-with-the-wolfram-language-ali-at-75/ http://blog.wolfram.com/2017/01/17/exploring-a-boxing-legends-career-with-the-wolfram-language-ali-at-75/#comments Tue, 17 Jan 2017 17:41:04 +0000 Jofre Espigule-Pons http://blog.internal.wolfram.com/?p=34647 Muhammad Ali (born Cassius Marcellus Clay Jr.; January 17, 1942–June 3, 2016) is considered one of the greatest heavyweight boxers in history, with a record of 56 wins and 5 losses. He remains the only three-time lineal heavyweight champion, so there’s no doubt why he is nicknamed “The Greatest.”

I used the Wolfram Language to create several visualizations to celebrate his work and gain some new insights into his life. Last June, I wrote a Wolfram Community post about Ali’s career. On what would have been The Greatest’s 75th birthday, I wanted to take a minute to explore the larger context of Ali’s career, from late-career boxing stats to poetry.

First, I created a PieChart showing Ali’s record:

bouts = <|"TKO" -> 21, "KO" -> 11, "UD" -> 18, "RTD" -> 5, "SD" -> 1,
   "LUD" -> 2, "LSD" -> 2, "LRTD" -> 1|>;
PieChart[bouts, ChartStyle -> 24,
 ChartLabels -> Placed[
   {Map[Style[#, Bold, FontSize -> 14] &, Values[bouts]],
    Map[Style[#, FontFamily -> "Helvetica Neue", Bold, FontSize -> 16] &,
     Keys[bouts]]}, {"RadialCenter", "RadialCallout"}],
 PlotRange -> All, SectorOrigin -> {Automatic, 1},
 ChartLegends -> {"Technical Knockout", "Knockout", "Unanimous Decision",
   "Retired", "Split-Decision", "Lost - Unanimous Decision",
   "Lost - Split-Decision", "Lost - Retired"},
 PlotLabel -> Style["Ali's Record", Bold, FontFamily -> "Helvetica Neue",
   FontSize -> 22], ImageSize -> 410]
Ali's Record
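As a quick sanity check, the per-outcome counts in the chart add back up to the 56–5 record quoted above; the outcome keys beginning with "L" are the losses. In a few lines of Python:

```python
# Outcome counts from the chart; keys starting with "L" are losses.
bouts = {"TKO": 21, "KO": 11, "UD": 18, "RTD": 5, "SD": 1,
         "LUD": 2, "LSD": 2, "LRTD": 1}

wins = sum(n for outcome, n in bouts.items() if not outcome.startswith("L"))
losses = sum(n for outcome, n in bouts.items() if outcome.startswith("L"))
print(wins, losses)  # 56 5
```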

Ali was dangerous outside the ring as well as inside it, at least for the white establishment in the US. He converted to Islam and changed his name from Cassius Clay, which he called his “slave name,” to Muhammad Ali. Later he refused military service during the Vietnam War, citing his religious beliefs. For this, he was arrested on charges of evading the draft, and he was pulled out of the ring for four years. All this made Ali an icon of racial pride for African Americans and the counterculture generation during the 1960s Civil Rights Movement.

Perhaps a lesser-known fact about Ali is that he played an important role in the emergence of rap, and he was an influential figure in the world of hip-hop music. He earned two Grammy nominations and he wrote several poems, among which is the shortest poem in the English language:


So let’s create a WordCloud of his most popular poems. First, I need to import his poems from a database site like Poetry Soup and do some string processing from the HTML file in order to get the poems as plain strings:

poemsHTML = Import[
   "http://www.poetrysoup.com/famous/poems/best/muhammad_ali", "Source"];
poems = StringReplace[
   StringCases[poemsHTML, Shortest["<pre>" ~~ x__ ~~ "</pre>"] -> x],
   {"\n" -> " ", Shortest["<" ~~ __ ~~ ">"] -> " "}];

Here are the first three poems:

Take[poems, 3]

Then I get a list of the important words with TextWords and delete the stopwords with DeleteStopwords. Next, I style the word cloud with a boxing glove shape:

WordCloud[
 StringDelete[DeleteStopwords@Flatten[TextWords@poems],
  {"\[CloseCurlyQuote]", "ve"}],
 gloveMask (* a raster image of a boxing glove; the embedded image data is omitted here *),
 ColorFunction -> "SolarColors", ImageSize -> 500]

With just a glimpse, I can see that he mainly wrote about his opponents, himself and boxing.

In my Community post from last June, I showed how to create the following DateListPlot that shows his victories over time. Note that his suspension period happened just as his performance was rising steeply:

Number of victories over time

I imported the other data from his Wikipedia page, which allowed me to visualize where these fights took place with GeoGraphics and who his opponents were:

Ali's career

Now, as a continuation of that previous post, I would like to analyze Ali’s opponents further. For this, I’m going to take the data from the BoxRec.com site, where one can find a record of all of Ali’s opponents. I’m going to skip the parsing of the relevant data out of the imported HTML and will directly use a dataset that I created for this purpose (see the attached file at the end of this post).

First, let’s create a CommunityGraphPlot with all of Ali’s opponents. I want the vertices of the graph to represent the boxers, with an edge indicating that two boxers met in the ring. Each community here will represent a group of boxers who are more connected to each other than to the rest of the boxers, and each community will be shown in a different color. For this, I need the list of opponents of each of Ali’s opponents:

dataset = Import["datasetBoxers.m"];
boxers = Normal@Normal[dataset[All, "opponents"]];
boxersID = Normal@Keys[dataset];

In addition, I can make the diameter of each vertex proportional to the number of bouts that boxer fought, and mark the losses that Ali had during his career with red edges, using VertexSize and EdgeStyle, respectively (see the complete code in the attached notebook):

CommunityGraphPlot[
 Map[#[[1]] <-> #[[2]] &,
  DeleteDuplicates@Map[Sort,
    Flatten[Table[Map[{boxers[[i, 1]], #} &,
       Intersection[boxers[[i, 2]], boxersID]], {i, Length[boxers]}], 1]]],
 VertexLabels -> vertexlabels, VertexSize -> vertexsizes,
 EdgeStyle -> rededges, ImageSize -> 620]
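The Map/Sort/DeleteDuplicates chain is just undirected-edge deduplication: pair each boxer with every opponent who also appears in the dataset, sort each pair, and drop repeats. The same step can be sketched in Python on a made-up miniature of the opponents table (the names and matchups here are purely illustrative):

```python
def fight_edges(opponents):
    """Undirected edges between boxers who met in the ring.

    `opponents` maps each boxer to the list of people they fought;
    only pairs where both boxers appear in the table are kept,
    mirroring the Intersection[...] step above.
    """
    edges = set()
    for boxer, foes in opponents.items():
        for foe in foes:
            if foe in opponents:
                edges.add(tuple(sorted((boxer, foe))))
    return sorted(edges)

# Hypothetical miniature of the BoxRec-derived dataset; "Liston" is
# dropped because he has no entry of his own in this toy table.
toy = {
    "Ali": ["Frazier", "Foreman", "Liston"],
    "Frazier": ["Ali", "Foreman"],
    "Foreman": ["Ali", "Frazier"],
}
print(fight_edges(toy))
# [('Ali', 'Foreman'), ('Ali', 'Frazier'), ('Foreman', 'Frazier')]
```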

We can observe that Moore had the largest number of bouts. But was he better than Ali in terms of victories over losses?

One way to compare the boxers is by calculating the following ratio for each one:

(wins − losses) / (total fights)
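Plugging in Ali’s own 56–5 record over 61 bouts, for example, the ratio comes out around 0.84 (this snippet just illustrates the formula in Python; the post applies it to every boxer in the dataset):

```python
def win_loss_ratio(wins, losses, total_fights):
    """(wins - losses) / total fights, the score used for clustering."""
    return (wins - losses) / total_fights

print(round(win_loss_ratio(56, 5, 61), 3))  # 0.836
```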
I can then use a machine learning function such as FindClusters to classify the opponents into different categories, visualized here with a Histogram:

wins = Values@Normal@dataset[All, "wins"];
losses = Values@Normal@dataset[All, "losses"];
draws = Values@Normal@dataset[All, "draws"];
totalfights = wins + losses + draws;

Histogram[FindClusters[(wins - losses)/totalfights], {0.038},
 AxesLabel -> Map[Style[#, FontFamily -> "Helvetica Neue", FontSize -> 14] &,
   {"Wins-Losses Ratio", "Boxers"}],
 ChartLegends -> {"Great Boxers", "Good Boxers", "\"Bad\" Boxers"},
 ImageSize -> 500]

Another way to compare the opponents’ records is by plotting a BubbleChart:

bubbles = MapThread[
   Labeled[{#1, #2, #3},
     Style[#4, Bold, FontFamily -> "Helvetica Neue", FontSize -> 10,
      FontColor -> Black, Directive[Opacity[0.7]]],
     RandomChoice[{Top, Bottom, Left, Right, Center}]] &,
   {losses, wins, totalfights, namesFamily}];
bubblesClusters = Table[bubbles[[Flatten@Position[clusters, i]]], {i, 3}];
BubbleChart[bubblesClusters, PlotTheme -> "Detailed",
 AspectRatio -> 1/GoldenRatio, BubbleScale -> "Diameter",
 ChartBaseStyle -> Directive[Opacity[0.7]],
 ChartLegends -> Placed[{"Great Boxers", "Good Boxers", "\"Bad\" Boxers"}, Bottom],
 PlotLabel -> Style["Wins vs. Losses", Bold, FontFamily -> "Helvetica Neue",
   FontSize -> 18],
 FrameLabel -> {Style["Losses", Bold, FontFamily -> "Helvetica Neue",
    FontSize -> 18],
   Style["Wins", Bold, FontFamily -> "Helvetica Neue", FontSize -> 18]},
 ImageSize -> 610]
Wins vs. Losses

Under such a classification method, Ali is one of the greatest (as I expected), but Moore is just a “good” boxer, even though he holds the record number of wins. Although this is a nice way to compare boxers, one should be cautious—for example, I noticed that Spinks is classified as a “bad” boxer even though he beat Ali once.

Before concluding the opponents analysis, I will plot Ali’s weight over his career and compare it with those of his rivals using DateListPlot:

Fight Dates

As one should expect, Ali gained weight over the course of his career. And he had one really heavy opponent, Buster Mathis, who weighed over 250 pounds at the end of his career.

Finally, I would like to point out a fun fact that I discovered thanks to the amazing amount of knowledge built into the Wolfram Language. After winning his first world heavyweight title in 1964, there was a little boom of babies named Cassius, who are now around 52 years old. There would probably be even more people called Cassius now if he hadn’t changed his name to Muhammad Ali:

ListLinePlot[
 Partition[
  Flatten@Entity["GivenName", {"Cassius", "UnitedStates", "male"}][
    EntityProperty["GivenName", "GivenNameDistribution"]], 2],
 AxesLabel -> {Style["Years", FontFamily -> "Helvetica Neue", FontSize -> 16],
   Style["Percentage", FontFamily -> "Helvetica Neue", FontSize -> 16]},
 ImageSize -> 500]

The Wolfram Language offers so many possibilities to keep exploring Ali’s life. But I will stop here and encourage you to create your own visualizations and share your ideas on Wolfram Community’s Ali thread.

Download this post as a Computable Document Format (CDF) file along with the accompanying dataset. (Note that you should save the dataset file in the same folder as the notebook in order to load the data needed for the visualizations.) New to CDF? Get your copy for free here.

Automotive Reliability in the Wolfram Language http://blog.wolfram.com/2017/01/13/automotive-reliability-in-the-wolfram-language/ http://blog.wolfram.com/2017/01/13/automotive-reliability-in-the-wolfram-language/#comments Fri, 13 Jan 2017 17:39:38 +0000 Nick Lariviere http://blog.internal.wolfram.com/?p=34593 This post originally appeared on Wolfram Community, where the conversation about reliable cars continues. Be sure to check out that conversation and more—we can’t wait to see what you come up with!

For the past couple of years, I’ve been playing with, collecting and analyzing data from used car auctions in my free time with an automotive journalist named Steve Lang to try and get an idea of what the used car market looks like in terms of long-term vehicle reliability. I figured it was about time that I showed off some of the ways that the Wolfram Language has allowed us to parse through information on over one million vehicles (and counting).

Vehicle Class Quality Index Rating

I’ll start off by saying that there isn’t anything terribly elaborate about the process we’re using to collect and analyze the information on these vehicles; it’s mostly a process of reading in reports from our data provider (and cleaning up the data), and then cross-referencing that data with various automotive APIs to get additional information. This data then gets dumped into a database that we use for our analysis, but having all of the tools we need built into the Wolfram Language makes the entire operation something that can be scripted—which greatly streamlines the process. I’ll have to skip over some of the details or this will be a very long post, but I’ll try to cover most of the key elements.

The data we get comes in from a third-party provider that manages used car auctions around the country (unfortunately, our licensing agreement doesn’t allow me to share the data right now), but it’s not very computable at first (the data comes in as a text file report once a week):

text = "01/02/2017 Schaumburg 128 1999 Acura CL 3.0 2D Coupe 131612 19UYA2256XL014922 Green A,L,R,Y 9:00 AM Illinois Announcements: Major Transmission Defect, Miles Exempt
01/02/2017 Hickory 33 1997 Acura CL 2.2 2D Coupe 217449 19UYA1255VL011890 Blue A,L,R,Y 2:00 PM North Carolina Announcements: Major Transmission Defect
01/02/2017 Ft. Bend 46 1995 Acura Integra LS 4D Sedan 98124 JH4DB7654SS013119 Green A,R 9:30 AM Texas Announcements: Miles Exempt
01/03/2017 Kansas City 57 1992 Acura Integra LS 4D Sedan 174537 JH4DB1653NS000122 T/A Yellow A,Y 2:00 PM Kansas Announcements: Structural Damage, Title Absent
";

Fortunately, parsing this sort of log-like data into individual records is easy in the Wolfram Language using basic string patterns:

vinPattern = RegularExpression["[A-Z\\d]{17}"];
recordPattern = DatePattern[{"Month", "Day", "Year"}] ~~ __ ~~ vinPattern ~~ __ ~~ "Announcements:" ~~ __ ~~ "\n";
StringCases[text, Shortest[recordPattern]]

Then it’s mostly a matter of cleaning up the individual records into something more standardized (I’ll spare you some of the hacky details due to artifacts in the data feed). You’ll end up with something like the following:

record = <|
  "Date" -> "2017-01-02", "ModelYear" -> 1999, "Make" -> "Acura",
  "Model" -> "CL", "TransmissionIssue" -> True, "EngineIssue" -> False,
  "Miles" -> 131612, "VIN" -> "19UYA2256XL014922"
|>;
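As a rough sketch of that cleanup step (the field positions and the `toRecord` helper here are my own assumptions based on the sample feed above, not the actual parsing code), one raw record can be turned into an association like so:

```wolfram
(* Hypothetical cleanup sketch: split one raw record on whitespace and
   pick out date, model year, make, model, mileage and VIN.
   Field positions are assumed from the sample feed shown earlier. *)
toRecord[raw_String] := Module[{fields = StringSplit[raw]},
  <|
    "Date" -> DateString[
       DateList[{fields[[1]], {"Month", "Day", "Year"}}], "ISODate"],
    "ModelYear" -> ToExpression[fields[[4]]],
    "Make" -> fields[[5]],
    "Model" -> fields[[6]],
    "VIN" -> First[StringCases[raw, RegularExpression["[A-Z\\d]{17}"]]],
    "TransmissionIssue" -> StringContainsQ[raw, "Transmission Defect"]
  |>]
```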

From there, we use the handy Edmunds vehicle API to get more information on an individual vehicle using their VIN decoder:

lookupVIN[vin_String] := ImportString[
  URLFetch["https://api.edmunds.com/api/vehicle/v2/vins/" <> vin <>
    "?fmt=json&api_key=" <> apikey],
  "JSON"]
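Since the same VIN can show up across multiple auctions and API calls are rate-limited, it's worth avoiding repeat fetches. A minimal memoization wrapper (my own sketch, assuming the `lookupVIN` and `apikey` definitions above) does the trick:

```wolfram
(* Cache decoded VINs in memory so repeat auctions of the same
   vehicle don't trigger another API call. *)
cachedLookupVIN[vin_String] := cachedLookupVIN[vin] = lookupVIN[vin]
```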

We then insert the records into an HSQL database (conveniently included with Mathematica), resulting in an easy way to search for the records we want:

SQLSelect[$DataBase, $Table, {"Year", "Miles", "Transmission"},    And[SQLColumn["Make"] == "Nissan", SQLColumn["Model"] == "Cube",     SQLColumn["Year"] <= 2010]] // Short
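Getting records into that table is the mirror image of the query above. A sketch (the column names are assumptions mirroring the record association shown earlier, not the real schema):

```wolfram
Needs["DatabaseLink`"];

(* Hypothetical insert: one row per cleaned-up record association.
   Column names are assumed to mirror the record keys shown above. *)
insertRecord[conn_, table_, rec_Association] := SQLInsert[conn, table,
  {"Date", "Year", "Make", "Model", "Miles", "Transmission", "VIN"},
  {rec["Date"], rec["ModelYear"], rec["Make"], rec["Model"],
   rec["Miles"], rec["TransmissionIssue"], rec["VIN"]}]
```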

From there, we can take a quick look at metrics using larger datasets, such as the number of transmission issues for a given set of vehicles for different model years:

Number of transmission issues
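A chart like that can be produced directly from a list of record associations. This is a sketch of my own (assuming `records` is a list of associations like the one shown earlier), not the exact code behind the plot:

```wolfram
(* Tally transmission issues per model year, then chart them. *)
issuesByYear[records_List] := KeySort[Counts[
   #["ModelYear"] & /@ Select[records, #["TransmissionIssue"] &]]]

BarChart[issuesByYear[records],
  ChartLabels -> Automatic, AxesLabel -> {"Model year", "Issues"}]
```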

Or a histogram of those issues broken down by vehicle mileage:

Issues by vehicle mileage
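The mileage breakdown is a one-liner with Histogram. Again a sketch under the same assumption about `records`; the bin width is invented for illustration:

```wolfram
(* Distribution of odometer readings among vehicles reported with a
   transmission defect, in 20,000-mile bins. *)
Histogram[
  #["Miles"] & /@ Select[records, #["TransmissionIssue"] &],
  {20000}, AxesLabel -> {"Miles", "Vehicles"}]
```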

It also lets us look at industry-wide trends, so we can develop a baseline for what the expected rate of defects for an average vehicle (or vehicle of a certain class) should be:

Yearly defect ratio

lm = LinearModelFit[modeldata, {date, modelyear}, {date, modelyear}]

We can then compare a given vehicle to that model:

Powertrain issue rate
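The comparison itself amounts to evaluating the fitted model at a vehicle's date and model year and dividing the observed rate by that baseline. A sketch (assuming the `lm` fit above; the helper names are mine):

```wolfram
(* Expected industry-baseline defect rate from the fitted model. *)
expectedRate[date_, modelyear_] := lm[date, modelyear]

(* Ratio above 1 means the vehicle is worse than the baseline. *)
relativeRate[observed_, date_, modelyear_] :=
  observed/expectedRate[date, modelyear]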

We then use that model, as well as other information, to generate a statistical index. We use that index to give vehicles an overall quality rating based on their historical reliability, which ranges from a score of 0 (chronic reliability issues) to 100 (exceptional reliability), with the industry average hovering right around 50:
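The post doesn't give the actual index formula, but to make the idea concrete, one hypothetical way to map a relative defect rate (1 = industry average) onto a 0-100 scale centered at 50 is:

```wolfram
(* Invented scoring sketch, not the real index: a rate equal to the
   industry average (1) maps to 50, lower rates approach 100, and
   higher rates approach 0. *)
qualityIndex[relativeRate_] := 100/(1 + relativeRate^2)
```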


We also use various gauges to put together informative visualizations of defect rates and the overall quality:

MileageGauge[mileage_, opts___] := With[
  {color = Which[
     mileage <= 100000, Lighter[Red],
     100000 <= mileage <= 120000, Lighter[Yellow],
     120000 <= mileage <= 130000, Lighter[Blue],
     True, Lighter[Green]]},
  HorizontalGauge[{mileage, $IndustryAverageMileage}, {50000, 200000},
    ScalePadding -> {.08, .1},
    GaugeLabels -> {
      Placed[Style[Row[{"Model average: ",
          AccountingForm[mileage, DigitBlock -> 3], " miles"}],
        FontSize -> 20], Above],
      Placed[Style[Row[{"Industry average: ",
          AccountingForm[$IndustryAverageMileage, DigitBlock -> 3],
          " miles"}], FontSize -> 16], Below]},
    ScaleRanges -> {If[mileage < $IndustryAverageMileage,
       {mileage, $IndustryAverageMileage},
       {$IndustryAverageMileage, mileage}]},
    ScaleRangeStyle -> color, GaugeStyle -> {Darker[Red], Black},
    ImageSize -> 500, ScaleDivisions -> {7, 7},
    GaugeFaceStyle -> Lighter[color, .8],
    opts]]

announcementGauge[value_] := AngularGauge[value, {0, .3},
  GaugeLabels -> Style[ToString[N[value, 3]*100] <> "%", 15],
  PlotLabel -> Style["Transmission Issues", 15],
  ScaleRanges -> {
    {0, $IndustryAverageIssueRates - .01} -> Lighter[Green],
    {{$IndustryAverageIssueRates - .01,
      $IndustryAverageIssueRates + .01}, {0, .2}},
    {$IndustryAverageIssueRates + .01,
      1.5*$IndustryAverageIssueRates} -> Lighter[Yellow],
    {1.5*$IndustryAverageIssueRates, 1} -> Lighter[Red]},
  GaugeStyle -> {RGBColor[{.15, .4, .6}], RGBColor[{.5, .5, .5}]}]

There is a lot more we do to pull all of this together (like the Wolfram Language templating we use to generate the HTML pages and reports), and honestly, there is a whole lot more we could do (my background in statistics is pretty limited, so most of this is pretty rudimentary, and I’m sure others here may already have ideas for improvements in presentation for some of this data). If you’d like to take a look at the site, it’s freely available (Steve has a nice introduction to the site here, and he also writes articles for the page related to practical uses for our findings).
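For a flavor of that templating step, a minimal sketch using the Wolfram Language's StringTemplate/TemplateApply machinery (the template string and keys here are invented for illustration, not our production templates):

```wolfram
(* Fill a small HTML fragment with a vehicle's stats. *)
reportTemplate = StringTemplate[
  "<h2>`Make` `Model`</h2><p>Quality index: `Index` \
(industry average: 50)</p>"];

TemplateApply[reportTemplate,
  <|"Make" -> "Acura", "Model" -> "CL", "Index" -> 62|>]
```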

Our original site was called the Long-Term Quality Index, which is still live but showed off my lack of experience in HTML development, so we recently rolled out our newer, WordPress-based venture Dashboard Light, which also includes insights from our auto journalist on his experiences running an independent, used car dealership.

This is essentially a two-man project that Steve and I handle in our (limited) free time, and we’re still getting a handle on presenting the data in a useful way, so if anyone has any suggestions or questions about our methodology, feel free to reach out to us.


Continue the conversation at Wolfram Community.

http://blog.wolfram.com/2017/01/13/automotive-reliability-in-the-wolfram-language/feed/ 1