According to a LinkedIn report published last week, the most promising job in the US in 2019 is data scientist. And if you search for the top “hard skills” needed for 2019, data science is often in the top 10.
Data science, applied computation, predictive analytics… no matter what you call it, in a nutshell it’s gathering insight from data through analysis and knowing what questions to ask to get the right answers. As technology continues to advance, the career landscape also continues to evolve with a greater emphasis on data—so data science has quickly become an essential skill that’s popping up in all sorts of careers, including engineering, business, astronomy, athletics, marketing, economics, farming, meteorology, urban planning, sociology and nursing.
Face it: much of our world is now driven by data. So what does this mean for teachers? We need to equip our children with the tools they’ll need to navigate this changing world, and Wolfram is here to help!
Computational thinking is an integral part of data science: before we can use a computer to analyze data and solve problems, we need to understand the problem we’re trying to solve, decide which questions to ask and work out the steps required to answer them. Only then can we tell the computer what to do.
Back in 2016, Stephen Wolfram wrote, “Computational thinking is going to be a defining feature of the future—and it’s an incredibly important thing to be teaching to kids today.” And boy, was he right!
But where do you start? Here are some resources we’ve put together that can help.
Wolfram Challenges — These bite-sized Challenges are a great way for students to practice their computational thinking skills and start thinking outside the box. While knowledge of the Wolfram Language is beneficial, each Challenge includes suggestions for functions to use and examples to help students get started.
Interested in more resources? Here are a few others.
While students don’t need a PhD in math to start learning data science, if they want to do data science they’re going to have to deal with at least some math. And what better way to do that than with Mathematica and the Wolfram Language, central tools for math and science education for over three decades? Not sure if you have access to Mathematica at your campus? Find out if you do.
Here are a few resources your students can use to enhance their math (and Mathematica) skills.
Fast Introduction for Math Students — Students can use this online tutorial to learn about solving math problems in the Wolfram Language—from basic arithmetic to integral calculus and beyond.
Wolfram Problem Generator — This free, online resource is a great way for students to brush up on their math skills. Wolfram Problem Generator provides an unlimited number of AI-generated practice problems for mathematics and statistics, with built-in hints and step-by-step solutions.
Hands-on Start to Wolfram Mathematica — This tutorial helps you get started with Mathematica—learn how to create your first notebook, run calculations, generate visualizations, create interactive models, analyze data and more.
Here are more ways to get started with Mathematica and the Wolfram Language.
Teachers and students have been using Mathematica and the Wolfram Language for over three decades, and they’ve created a remarkable amount of classroom content—from interactive Manipulate examples to student projects.
Here are just a few of the places where you can access these materials to use them in your own courses.
Wolfram Demonstrations Project — Creating visualizations and interactive models can help students spot patterns and gain a deeper understanding of the methodology behind the algorithms used in analyzing the data. Download pre-built, open-code examples from a growing collection of interactive visualizations that span a remarkable range of topics, including non-STEM subjects like fine arts, social sciences or sports. There are even hundreds of examples that explore ways to analyze and visualize data.
Here are links to more teaching materials and other resources to enhance your classroom. Best wishes for the new semester.
Starting with the structured data from the previous post, we can create a data resource to stash our results. This makes it easy to reference the content quickly for later use. It also saves computational resources—rather than reevaluating all those web-scraping computations, we can retrieve them immediately in a convenient, well-defined manner.
To start creating a data resource in Version 11.3 of Mathematica, go to . This will open a resource object template with fillable slots for describing the dataset, tweaking its structure and content and customizing how it can be accessed. First we’ll fill in a bit of information about the data being submitted. Add a title and description to explain the purpose of the resource:
For private resources, this is all the information needed. But we can also add other metadata, such as info about the contributor, the original data source (in this case http://uselectionatlas.org), related resources and the type and scope of the resource:
Moving down in the template notebook, we can add our existing web-scraping code (i.e. the ElectionAtlasData function) directly under Construction Area:
In the Content Element Initialization area, we add functions to help define some of the elements we want to add. Along with a modified version of the VoteMap code from the previous post, let’s add a simple function for computing the date of each election:
ElectionDate[year_] :=
 Interpreter["ComputedDate"]["first Tuesday after Nov 1 " <> ToString[year]]
We can also augment our dataset with a quick summary of each election, extracted using WikipediaData (conveniently, Wikipedia has very consistent page titles):
ElectionSummary[year_] :=
 First@TextCases[
   WikipediaData["U.S. presidential election, " <> ToString[year], "SummaryPlaintext"],
   "Line"]
Next, we define an Association that represents the full data to include for each entry:
ElectionDataElements[year_] :=
 With[{results = ElectionAtlasData[year]},
  year -> <|
    "Date" -> ElectionDate[year],
    "Summary" -> ElectionSummary[year],
    "Candidates" -> Rest@Normal@Keys[results[[1]]],
    "CandidateTotals" -> Total@results[[All, 2 ;;]],
    "VoteMap" -> VoteMap[results, year],
    "VoteCounts" -> Normal[First[#]["StateAbbreviation"] -> # & /@ results]
    |>];
Since we’re working with a fairly large dataset, we can speed up the final deployment by using CloudExport to generate a serialized CloudObject:
CloudExport[
 <|Table[ElectionDataElements[year], {year, 1824, 2016, 4}]|>,
 "MX", "ElectionData", Permissions -> "Public"]
The Content Elements section contains the code for building the full resource—in this case, everything contained in the previously shown CloudObject:
$$Object["FullContent"] =
  DataResource`$$ContentConversion@<|"Data" -> CloudObject["ElectionData"]|>;
Under Default Element Specification, we can tell the system what element to use when ResourceData is called on the object. In this case, the only top-level element is "Data":
$$Object["DefaultContentElement"] = "Data";
Jumping down to the Create Resource Object section, execute the following code to generate the resource object:
$$ResourceObject = ResourceObject[EvaluationNotebook[]]
The subheadings in the Deploy Resource Object section represent various ways of deploying a data resource; in this case we’ll deploy publicly to the Wolfram Cloud so we can connect our resource to a web deployment:
CloudDeploy[$$ResourceObject, "ElectionResource", Permissions -> "Public"]
Now that the resource has been deployed, it can be accessed directly using ResourceObject:
ResourceObject["https://www.wolframcloud.com/objects/bwood/ElectionResource"]
To access the full data (i.e., the default content element defined earlier), use ResourceData—as mentioned in the previous post, Dataset provides a convenient structure for viewing an entry:
ResourceData["https://www.wolframcloud.com/objects/bwood/ElectionResource"][[-1]] // Dataset
Next, let’s make a nice, clean layout for displaying the information from a given election. Using DateString, we can customize the display format for showing election dates:
MDYFormat[d_] := DateString[d, {"MonthName", " ", "DayShort", ", ", "Year"}]
Our summary text can be displayed neatly in a Panel:
Panel[Style[data[[-1]]["Summary"], "Text", LineIndent -> 0], ImageSize -> 500]
NumberForm is useful for formatting large numbers; we’ll set DigitBlock to 3 to insert comma delimiters:
FormatVoteTotal[total_] := Style[ToString@NumberForm[total, DigitBlock -> 3], "Text"]
We can then pass everything into a Grid with custom Style settings for optimal display:
ElectionResultsGrid[data_] :=
 Grid[
  Join[
   {Join[{""}, Style[#, "Subsection"] & /@ data["Candidates"]],
    Join[{Style["National", Bold, "Text"]},
     FormatVoteTotal /@ Values@data["CandidateTotals"]]},
   Flatten[{Style[Keys@#, Bold],
       FormatVoteTotal /@ Values[#[[2, 2 ;; All]]]}] & /@ data["VoteCounts"]],
  BaseStyle -> "Text"]
Lastly, we stack the results vertically using Column:
ElectionDataGrid[totals_] := Column[{
   Style[MDYFormat[totals["Date"]], "Title"],
   totals["VoteMap"],
   Panel[Style[totals["Summary"], "Text", LineIndent -> 0], ImageSize -> 500],
   Style["Vote Totals", "Section"],
   ElectionResultsGrid[totals]}]
The result is a clean summary of a given Election Atlas entry:
ElectionDataGrid[
 Last@ResourceData["https://www.wolframcloud.com/objects/bwood/ElectionResource"]]
Finally, it’s time to create an interactive browser for sharing our results. Using FormPage, we can make a dynamic form that will import the data resource and display our information grid (adding a title using AppearanceRules):
fp = FormPage[
   <|{"Year", "Select a Year"} -> AutoSubmitting@<|
       "Interpreter" -> ResourceData["https://www.wolframcloud.com/objects/bwood/ElectionResource"],
       "Control" -> PopupMenu|>|>,
   ElectionDataGrid[#Year] &,
   AppearanceRules -> <|"Title" -> "US Presidential Election Results"|>];
For viewing and testing within a desktop notebook, a scrollable Pane is a useful way to display this form:
Pane[fp, Scrollbars -> {False, True}, Alignment -> {Center, Top}, ImageSize -> {530, 530}]
Using CloudDeploy, we can create a web version of the form that is accessible to anyone:
CloudDeploy[fp, "ElectionDataBrowser", Permissions -> "Public"]
Once this webpage is live, it provides continuous access to the new data resource. Any time the resource is updated, the deployment will pick up the new data as well.
Throughout this series, we’ve covered a full data science workflow—importing and exploring, cleaning and structuring data and finally creating a permanent cloud resource with an interactive web interface. Notably, everything here was done within the Wolfram ecosystem from start to finish, using built-in functionality and remarkably little code.
Although the result in this case is a simple display of historical information, it’s easy to apply the same strategies toward financial dashboards, image processing, linguistic analyses and other advanced deployments. With the breadth of algorithms and visualizations in the Wolfram Language, the possibilities are endless!
For more detail on the functions you read about here, see the Set Up a Personal Data Resource, Make a Grid of Output Data and Set Up a Repeated-Use Form Page workflows.
Download the data resource as a Wolfram Notebook.
Chicken Scratch is an academic trivia game that I originally coded about 20 years ago. At the time I was the Academic Decathlon coach of a large urban high school, and I needed a fun way for my students to remember thousands of factoids for the Academic Decathlon competitions. The game turned out to be beneficial to our team, and so popular that other teams asked to buy it from us. I refreshed the questions each year and continued holding Chicken Scratch tournaments at the next two schools I worked in.
When I retired a couple of years ago, I assumed that Chicken Scratch would fade into the shadows of the past. Then I began tinkering with the Wolfram Language, with its functional syntax, awesome visualization tools and curated data, and realized its potential to take the game to the next level. I started developing a new version to leverage many of the Wolfram Language’s powerful features. I’m still adding to and refining the game, but Chicken Scratch is now a fully functional desktop application that generates billions of questions in 15 academic categories. I play it with friends at parties and occasionally still in school settings. I also share the code on GitHub.
When I first sat down to remake Chicken Scratch, it wasn’t obvious how to build an application in the Wolfram Language. The options for deployment seemed to be geared toward creating interactive documents such as .cdf files, notebooks, presentations and demonstrations. Chicken Scratch has the structure of a more traditional standalone program or web app, with buttons that trigger actions in a rectangular part of the screen. Here’s how I was able to update the game using the Wolfram Language.
The previous version of Chicken Scratch was contained within a fixed rectangle. Wolfram has two functions that define rectangular areas: Panel and Pane. The main difference between them is that Panel has an opaque background, while Pane has a transparent one.
{Panel["Hello!", ImageSize -> {100, 50}], Pane["Hello!", {100, 50}]}
I contained the whole game inside one framed Panel (shown 1/4-size here).
Framed[Panel[coverPic, Alignment -> {Center, Top}, ImageSize -> {1000, 730},
  Background -> White, FrameMargins -> None]]
To display images, text and other content inside the game panel, I used Pane.
Framed[Panel[
  Row[{
    Pane[Style["$220", 48], 197, Alignment -> Center],
    Pane[hand1, 180, Alignment -> Center],
    Pane[Style["text\nexample", 24, Blue, TextAlignment -> Center], 198, Alignment -> Center],
    Pane[hand2, 180, Alignment -> Center],
    Pane[Style["$104", 48], 197, Alignment -> Center]},
   Alignment -> Center],
  Alignment -> {Center, Top}, ImageSize -> {1000, 730},
  Background -> White, FrameMargins -> None]]
Now that I could control the appearance of Chicken Scratch, the next step was to control its behavior. In the previous version, the game had a cover image that would change to the main game interface when clicked. Clicking various parts of the interface would cause different reactions from the game. For instance, clicking a correct answer would cause a joyous sound to play and the game to display the categories.
The Wolfram Language has a dizzying array of interface objects that allow a program to react to the user’s actions, including Button, Slider, Checkbox, ListPicker and many more. Read the guide Control Objects in the documentation for a comprehensive list. Chicken Scratch only needs Button, so that’s all I’m going to discuss here. If you understand how to use Button, you can extend that to the other types of control objects.
In Chicken Scratch, clicking a button usually changes the content of a pane.
img = hand1;
pane = Pane[img];
Print[pane]
button = Button["do something", img = hand2; pane = Pane[img]]
If you try that code, though, it doesn’t work. The content of the pane needs to change in a visible manner. To accomplish this, wrap the content of the pane in the function Dynamic and then make the button reassign the symbol that’s inside the pane.
img = hand1;
pane = Pane[Dynamic[img]];
button = Button["Do Something", img = If[img == hand1, hand2, hand1]];
{pane, button}
With Panel and Pane as building blocks, and Dynamic and Button for the interactions, I was able to make a fully functional version of the game Chicken Scratch.
Chicken Scratch is a trivia game, so it needs lots of questions. In the Wolfram Language, I can write pods that generate new questions, often millions of unique ones, from a single block of a few dozen lines of code. Currently, the game has 260 such question pods. At first, the pods were part of the main notebook. However, this got unwieldy as the code grew in size. The desktop environment became sluggish, and it took forever to find any particular line of code. I decided to store the question pods in the cloud and have the main interface call them when needed. This worked extremely well.
The function that calls a question pod is URLExecute[podName]. Each question pod is designed to return the following pieces of information: the question itself, the position of the correct answer within the choices, the shuffled list of answer choices and, for some pods, a graphic.
The main interface file processes these and presents them to the players. Here is an example of the code in one of the question pods. To see the type of output it produces, just execute it as is. To deploy this in the cloud, remove the two commented parts of the code and change TraditionalForm[…] to InputForm[…].
(*CloudDeploy[Delayed[APIFunction[{},*)
mat = Partition[RandomChoice[Range[-12, 12], 4], 2];
det = Det[mat];
choices = {det};
While[Length[choices] < 4,
 try = Round[RandomVariate[NormalDistribution[0, 120]]];
 If[Not[MemberQ[choices, try]], choices = Append[choices, try]]];
q = HoldComplete[
    StringForm["If `1` and `2`=|`3`|, then what is the value of `2`?",
     A == h1, d, A]] /. {h1 -> matrix[mat]};
mixed = RandomSample[choices];
ans = Position[mixed, choices[[1]]][[1, 1]];
TraditionalForm[{q, ans, mixed}]
(*&]], "CS_pack_Alge6", Permissions -> "Public"]*)
Here is the code from a question pod that returns a graphic.
(*CloudDeploy[Delayed[APIFunction[{},*)
tot = RandomInteger[{100, 1000}];
cums = Sort[RandomSample[Range[1, tot], 3]];
quant = {cums[[1]], cums[[2]] - cums[[1]], cums[[3]] - cums[[2]], tot - cums[[3]]};
pic = PieChart[quant, SectorOrigin -> {90°, -1},
   ChartLegends -> {Style["neither A nor B", 18], Style["only A", 18],
     Style["both A and B", 18], Style["only B", 18]}];
qOp = RandomChoice[{"against both measures" -> quant[[1]],
    "for A but against B" -> quant[[2]], "for both measures" -> quant[[3]],
    "for B but against A" -> quant[[4]], "for A" -> quant[[2]] + quant[[3]],
    "for B" -> quant[[3]] + quant[[4]],
    "for any measure" -> quant[[2]] + quant[[3]] + quant[[4]],
    "for only one measure" -> quant[[2]] + quant[[4]]}];
q = StringForm["If `1` people voted on measures A and B, how many people voted `2`?",
   tot, Keys[qOp]];
choices = Take[DeleteDuplicates[Prepend[RandomSample[Range[1, tot - 1], 4], Values[qOp]]], 4];
mixed = RandomSample[choices];
ans = Position[mixed, choices[[1]]][[1, 1]];
TraditionalForm[{q, ans, mixed, pic}]
(*&]], "CS_pack_Grap9", Permissions -> "Public"]*)
This system works beautifully, but I must add a couple of cautionary notes. Because the question pods are cloud objects, the data that they return must be web friendly. In other words, any characters that HTML wants in entity form will cause an error. For the question pods that require unusual characters, ToCharacterCode and FromCharacterCode get them safely through the web environment.
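A minimal sketch of that round trip (the string here is just an illustration):

```wolfram
(* Encode a string containing special characters as plain integer codes
   so it passes safely through the web environment... *)
encoded = ToCharacterCode["π ≈ 3.14159"];

(* ...then decode it on the other side to recover the original string *)
decoded = FromCharacterCode[encoded]
```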
Likewise, if a question pod returns an image, it has to be handled in a different way. In this case, I use Hold to pass the commands that produce the image to the interface code where they can be safely executed. Questions can be slow to appear, on rare occasions up to 20 seconds, because the game retrieves them from the cloud. I have it on good authority that there is a better way for me to deploy Chicken Scratch that does not involve the cloud. I believe it, and that would surely speed up the display of questions. However, the advantages of generating questions from one cloud location currently outweigh the slight gain in download speed I’d get from generating them locally.
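The image-passing pattern might be sketched like this (a simplified illustration, not the game’s actual pod code):

```wolfram
(* The pod returns the graphics-producing command unevaluated... *)
podResult = Hold[PieChart[{30, 20, 50}]];

(* ...and the interface code evaluates it locally, where the
   graphic can be rendered safely *)
ReleaseHold[podResult]
```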
Feel free to try Chicken Scratch and use it as you see fit. The game is now geared toward general education and the high-school level, but adults enjoy playing as well. If you have any comments or questions, you may contact me.
We call it “Spikey”, and in my life today, it’s everywhere:
It comes from a 3D object—a polyhedron that’s called a rhombic hexecontahedron:
But what is its story, and how did we come to adopt it as our symbol?
Back in 1987, when we were developing the first version of Mathematica, one of our innovations was being able to generate resolution-independent 3D graphics from symbolic descriptions. In our early demos, this let us create wonderfully crisp images of Platonic solids. But as we approached the release of Mathematica 1.0, we wanted a more impressive example. So we decided to take the last of the Platonic solids—the icosahedron—and then make something more complex by a certain amount of stellation (or, more correctly, cumulation). (Yes, that’s what the original notebook interface looked like, 30 years ago…)
At first this was just a nice demo that happened to run fast enough on the computers we were using back then. But quite soon the 3D object it generated began to emerge as the de facto logo for Mathematica. And by the time Mathematica 1.0 was released in 1988, the stellated icosahedron was everywhere:
In time, tributes to our particular stellation started appearing—in various materials and sizes:
But just a year after we released Mathematica 1.0, we were getting ready to release Mathematica 1.2, and to communicate its greater sophistication, we wanted a more sophisticated logo. One of our developers, Igor Rivin, had done his PhD on polyhedra in hyperbolic space—and through his efforts a hyperbolic icosahedron adorned our Version 1.2 materials:
My staff gave me an up-to-date-Spikey T-shirt for my 30th birthday in 1989, with a quote that I guess even after all these years I’d still say:
After Mathematica 1.2, our marketing materials had a whole collection of hyperbolic Platonic solids, but by the time Version 2.0 arrived in 1991 we’d decided our favorite was the hyperbolic dodecahedron:
Still, we continued to explore other “Spikeyforms”. Inspired by the “wood model” style of Leonardo da Vinci’s stellated icosahedron drawing (with amazingly good perspective) for Luca Pacioli’s book De divina proportione, we commissioned a Version 2.0 poster (by Scott Kim) showing five intersecting tetrahedra arranged so that their outermost vertices form a dodecahedron:
Looking through my 1991 archives today, I find some “explanatory” code (by Ilan Vardi)—and it’s nice to see that it all just runs in our latest Wolfram Language (though now it can be written a bit more elegantly):
Over the years, it became a strange ritual that when we were getting ready to launch a new integer version of Mathematica, we’d have very earnest meetings to “pick our new Spikey”. Sometimes there would be hundreds to choose from, generated (most often by Michael Trott) using all kinds of different algorithms:
But though the color palettes evolved, and the Spikeys often reflected (though perhaps in some subtle way) new features in the system, we’ve now had a 30-year tradition of variations on the hyperbolic dodecahedron:
In more recent times, it’s become a bit more streamlined to explore the parameter space—though by now we’ve accumulated hundreds of parameters:
A hyperbolic dodecahedron has 20 points—ideal for celebrating the 20th anniversary of Mathematica in 2008. But when we wanted something similar for the 25th anniversary in 2013 we ran into the problem that there’s no regular polyhedron with 25 vertices. But (essentially using SpherePoints[25]) we managed to create an approximate one—and made a 3D printout of it for everyone in our company, sized according to how long they’d been with us:
In 2009, we were getting ready to launch Wolfram|Alpha—and it needed a logo. There were all sorts of concepts:
We really wanted to emphasize that Wolfram|Alpha works by doing computation (rather than just, say, searching). And for a while we were keen on indicating this with some kind of gear-like motif. But we also wanted the logo to be reminiscent of our longtime Mathematica logo. So this led to one of those classic “the-CEO-must-be-crazy” projects: make a gear mechanism out of Spikey-like forms.
Longtime Mathematica and Wolfram Language user (and Hungarian mechanical engineer) Sándor Kabai helped out, suggesting a “Spikey Gear”:
And then, in a throwback to the Version 2 intersecting tetrahedra, he came up with this:
In 2009, 3D printing was becoming very popular, and we thought it would be nice for Wolfram|Alpha to have a logo that was readily 3D printable. Hyperbolic polyhedra were out: their spikes would break off, and could be dangerous. (And something like the Mathematica Version 4 Spikey, with “safety spikes”, lacked elegance.)
For a while we fixated on the gears idea. But eventually we decided it’d be worth taking another look at ordinary polyhedra. But if we were going to adopt a polyhedron, which one should it be?
There are of course an infinite number of possible polyhedra. But to make a nice logo, we wanted a symmetrical and somehow “regular” one. The five Platonic solids—all of whose faces are identical regular polygons—are in effect the “most regular” of all polyhedra:
Then there are the 13 Archimedean solids, all of whose vertices are identical, and whose faces are regular polygons but of more than one kind:
One can come up with all sorts of categories of “regular” polyhedra. One example is the “uniform polyhedra”, as depicted in a poster for The Mathematica Journal in 1993:
Over the years that Eric Weisstein was assembling what in 1999 became MathWorld, he made an effort to include articles on as many notable polyhedra as possible. And in 2006, as part of putting every kind of systematic data into Mathematica and the Wolfram Language, we started including polyhedron data from MathWorld. The result was that when Version 6.0 was released in 2007, it included the function PolyhedronData that contained extensive data on 187 notable polyhedra:
It had always been possible to generate regular polyhedra in Mathematica and the Wolfram Language, but now it became easy. With the release of Version 6.0 we also started the Wolfram Demonstrations Project, which quickly began accumulating all sorts of polyhedron-related Demonstrations.
One created by my then-10-year-old daughter Catherine (who happens to have continued in geometry-related directions) was “Polyhedral Koalas”—featuring a pull-down for all polyhedra in PolyhedronData[]:
So this was the background when in early 2009 we wanted to “pick a polyhedron” for Wolfram|Alpha. It all came to a head on the evening of Friday, February 6, when I decided to just take a look at things myself.
I still have the notebook I used, and it shows that at first I tried out the rather dubious idea of putting spheres at the vertices of polyhedra:
But (as the Notebook History system recorded) just under two minutes later I’d generated pure polyhedron images—all in the orange we thought we were going to use for the logo:
The polyhedra were arranged in alphabetical order by name, and on line 28, there it was—the rhombic hexecontahedron:
A couple of minutes later, I had homed in on the rhombic hexecontahedron, and at exactly 12:24:24am on February 7, 2009, I rotated it into essentially the symmetrical orientation we now use:
I wondered what it would look like in gray scale or in silhouette, and four minutes later I used ColorSeparate to find out:
I immediately started writing an email—which I fired off at 12:32am:
“I [...] rather like the RhombicHexecontahedron ….
It’s an interesting shape … very symmetrical … I think it might have
about the right complexity … and its silhouette is quite reasonable.”
I’d obviously just copied “RhombicHexecontahedron” from the label in the notebook (and I doubt I could have spelled “hexecontahedron” correctly yet). And indeed from my archives I know that this was the very first time I’d ever written the name of what was destined to become my all-time-favorite polyhedron.
It was dead easy in the Wolfram Language to get a picture of a rhombic hexecontahedron to play with:
PolyhedronData["RhombicHexecontahedron"]
And by Monday it was clear that the rhombic hexecontahedron was a winner—and our art department set about rendering it as the Wolfram|Alpha logo. We tried some different orientations, but soon settled on the symmetrical “head-on” one that I’d picked. (We also had to figure out the best “focal length”, giving the best foreshortening.)
Like our Version 1.0 stellated icosahedron, the rhombic hexecontahedron has 60 faces. But somehow, with its flower-like five-fold “petal” arrangements, it felt much more elegant. It took a fair amount of effort to find the best facet shading in a 2D rendering to reflect the 3D form. But soon we had the first official version of our logo:
It quickly started to show up everywhere, and in a nod to our earlier ideas, it often appeared on a “geared background”:
A few years later, we tweaked the facet shading slightly, giving what is still today the logo of Wolfram|Alpha:
What is a rhombic hexecontahedron? It’s called a “hexecontahedron” because it has 60 faces, and ἑξηκοντα (hexeconta) is the Greek word for 60. (Yes, the correct spelling is with an “e”, not an “a”.) It’s called “rhombic” because each of its faces is a rhombus. Actually, its faces are golden rhombuses, so named because their diagonals are in the golden ratio ≃1.618:
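The vertex angles of a golden rhombus follow directly from this diagonal ratio: the half-diagonals and a side form a right triangle, so a rhombus whose diagonals are in the ratio 1:ϕ has

```latex
\theta_{\mathrm{acute}} = 2\tan^{-1}\!\left(\frac{1}{\phi}\right) \approx 63.43^{\circ},
\qquad
\theta_{\mathrm{obtuse}} = 2\tan^{-1}(\phi) \approx 116.57^{\circ},
\qquad
\phi = \frac{1+\sqrt{5}}{2}.
```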
The rhombic hexecontahedron is a curious interpolation between an icosahedron and a dodecahedron (with an icosidodecahedron in the middle). The 12 innermost points of a rhombic hexecontahedron form a regular icosahedron, while the 20 outermost points form a regular dodecahedron. The 30 “middle points” form an icosidodecahedron, which has 32 faces (20 “icosahedron-like” triangular faces, and 12 “dodecahedron-like” pentagonal faces):
Altogether, the rhombic hexecontahedron has 62 vertices and 120 edges (as well as 120−62+2=60 faces). There are 3 kinds of vertices (“inner”, “middle” and “outer”), corresponding to the 12+30+20 vertices of the icosahedron, icosidodecahedron and dodecahedron. These types of vertices have respectively 3, 4 and 5 edges meeting at them. Each golden rhombus face of the rhombic hexecontahedron has one “inner” vertex where 5 edges meet, one “outer” vertex where 3 edges meet and two “middle” vertices where 4 edges meet. The inner and outer vertices are the acute vertices of the golden rhombuses; the middle ones are the obtuse vertices.
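These counts can be cross-checked from the faces alone: each of the 60 rhombic faces has 4 edges (each shared between 2 faces), one inner vertex (shared among 5 faces), two middle vertices (each shared among 4) and one outer vertex (shared among 3):

```latex
E = \frac{60 \times 4}{2} = 120,
\qquad
V = \frac{60}{5} + \frac{60 \times 2}{4} + \frac{60}{3} = 12 + 30 + 20 = 62.
```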
The acute vertices of the golden rhombuses have angle 2 tan^{−1}(ϕ^{−1}) ≈ 63.43°, and the obtuse ones 2 tan^{−1}(ϕ) ≈ 116.57°. The angles allow the rhombic hexecontahedron to be assembled from Zometool using only red struts (the same as for a dodecahedron):
Across the 120 edges of the rhombic hexecontahedron, the 60 “inward-facing hinges” have dihedral angle 4𝜋/5=144°, and the 60 “outward-facing” ones have dihedral angle 2𝜋/5=72°. The solid angles subtended by the inner and outer vertices are 𝜋/5 and 3𝜋/5.
To actually draw a rhombic hexecontahedron, one needs to know 3D coordinates for its vertices. A convenient way to get these is to use the fact that the rhombic hexecontahedron is invariant under the icosahedral group, so that one can start with a single golden rhombus and just apply the 60 matrices that form a 3D representation of the icosahedral group. This gives for example final vertex coordinates {±ϕ,±1,0}, {±1,±ϕ,±(1+ϕ)}, {±2ϕ,0,0}, {±ϕ,±(1+2ϕ),0}, {±(1+ϕ),±(1+ϕ),±(1+ϕ)}, and cyclic permutations of these, with each possible sign being taken.
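In practice one doesn’t have to construct the group representation by hand; the curated data already contains the result (a quick check using the "VertexCoordinates" property of PolyhedronData):

```wolfram
(* Retrieve the exact vertex coordinates of the rhombic hexecontahedron;
   there should be 62 of them *)
verts = PolyhedronData["RhombicHexecontahedron", "VertexCoordinates"];
Length[verts]
```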
In addition to having faces that are golden rhombuses, the rhombic hexecontahedron can be constructed out of 20 golden rhombohedra (whose 6 faces are all golden rhombuses):
There are other ways to build rhombic hexecontahedra out of other polyhedra. Five intersecting cubes can do it, as can 182 dodecahedra with touching faces:
Rhombic hexecontahedra don’t tessellate space. But they do interlock in a satisfying way (and, yes, I’ve seen tens of paper ones stacked up this way):
There are also all sorts of ring and other configurations that can be made with them:
Closely related to the rhombic hexecontahedron (“RH”) is the rhombic triacontahedron (“RT”). Both the RH and the RT have faces that are golden rhombuses. But the RH has 60, while the RT has 30. Here’s what a single RT looks like:
RTs fit beautifully into the “pockets” in RHs, leading to forms like this:
The aforementioned Sándor Kabai got enthusiastic about the RH and RT around 2002. And after the Wolfram Demonstrations Project was started, he and Slovenian mathematician Izidor Hafner ended up contributing over a hundred Demonstrations about RH, RT and their many properties:
As soon as we’d settled on a rhombic hexecontahedron Spikey, we started making 3D printouts of it. (It’s now very straightforward to do this with Printout3D[PolyhedronData[...]], and there are also precomputed models available at outside services.)
At our Wolfram|Alpha launch event in May 2009, we had lots of 3D Spikeys to throw around:
But as we prepared for the first post-Wolfram|Alpha holiday season, we wanted to give everyone a way to make their own 3D Spikey. At first we explored using sets of 20 plastic-covered golden rhombohedral magnets. But they were expensive, and had a habit of not sticking together well enough at “Spikey scale”.
So that led us to the idea of making a Spikey out of paper, or thin cardboard. Our first thought was then to create a net that could be folded up to make a Spikey:
My daughter Catherine was our test folder (and still has the object that was created), but it was clear that there were a lot of awkward hard-to-get-there-from-here situations during the folding process. There are a huge number of possible nets (there are already 43,380 even for the dodecahedron and icosahedron)—and we thought that perhaps one could be found that would work better:
But after failing to find any such net, we then had a new (if obvious) idea: since the final structure would be held together by tabs anyway, why not just make it out of multiple pieces? We quickly realized that the pieces could be 12 identical copies of this:
And with this we were able to create our “Paper Sculpture Kits”:
Making the instructions easy to understand was an interesting challenge, but after a few iterations they’re now well debugged, and easy for anyone to follow:
And with paper Spikeys in circulation, our users started sending us all sorts of pictures of Spikeys “on location”:
It’s not clear who first identified the Platonic solids. Perhaps it was the Pythagoreans (who lived near deposits of polyhedrally shaped pyrite crystals). Perhaps it was someone long before them. Or perhaps it was a contemporary of Plato’s named Theaetetus. But in any case, by the time of Plato (≈400 BC), it was known that there are five Platonic solids. And when Euclid wrote his Elements (around 300 BC), perhaps the pinnacle of it was the proof that these five are all there can be. (This proof is notably the one that takes the most steps—32—from the original axioms of the Elements.)
Platonic solids were used for dice and ornaments. But they were also given a central role in thinking about nature, with Plato for example suggesting that perhaps everything could in some sense be made of them: earth of cubes, air of octahedra, water of icosahedra, fire of tetrahedra, and the heavens (“ether”) of dodecahedra.
But what about other polyhedra? In the 4th century AD, Pappus wrote that a couple of centuries earlier, Archimedes had discovered 13 other “regular polyhedra”—presumably what are now called the Archimedean solids—though the details were lost. And for a thousand years little more seems to have been done with polyhedra. But in the 1400s, with the Renaissance starting up, polyhedra were suddenly in vogue again. People like Leonardo da Vinci and Albrecht Dürer routinely used them in art and design, rediscovering some of the Archimedean solids—as well as finding some entirely new polyhedra, like the icosidodecahedron.
But the biggest step forward for polyhedra came with Johannes Kepler at the beginning of the 1600s. It all started with an elegant, if utterly wrong, theory. Theologically convinced that the universe must be constructed with mathematical perfection, Kepler suggested that the six planets known at the time might move on nested spheres geometrically arranged so as to just fit the suitably ordered five Platonic solids between them:
In his 1619 book Harmonices mundi (“Harmony of the World”) Kepler argued that many features of music, planets and souls operate according to similar geometric ratios and principles. And to provide raw material for his arguments, Kepler studied polygons and polyhedra, being particularly interested in finding objects that somehow formed complete sets, like the Platonic solids.
He studied possible “sociable polygons”, that together could tile the plane—finding, for example, his “monster tiling” (with pentagons, pentagrams and decagons). He studied “star polyhedra” and found various stellations of the Platonic solids (and in effect the Kepler–Poinsot polyhedra). In 1611 he had published a small book about the hexagonal structure of snowflakes, written as a New Year’s gift for a sometime patron of his. And in this book he discussed 3D packings of spheres (and spherical atoms), suggesting that what’s now called the Kepler packing (and routinely seen in the packing of fruit in grocery stores) is the densest possible packing (a fact that wasn’t formally proved until the 2000s—as it happens, with the help of Mathematica).
There are polyhedra lurking in Kepler’s various packings. Start from any sphere, then look at its neighbors, and join their centers to make the vertices of a polyhedron. For Kepler’s densest packing, there are 12 spheres touching any given sphere, and the polyhedron one gets is the cuboctahedron, with 12 vertices and 14 faces. But Kepler also discussed another packing, 8% less dense, in which 8 spheres touch a given sphere, and 6 are close to doing so. Joining the centers of these spheres gives a polyhedron called the rhombic dodecahedron, with 14 vertices and 12 faces:
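The neighbor geometry in Kepler’s densest packing can be checked directly. One standard coordinatization (an assumption here, not Kepler’s own construction) places face-centered-cubic sphere centers at integer points with even coordinate sum; a short Python sketch then confirms that a given sphere has 12 nearest neighbors, whose centers form the 12 vertices and 24 edges of a cuboctahedron:

```python
import itertools

# FCC sphere centers: integer points with even coordinate sum.
# The nearest neighbors of the origin lie at squared distance 2,
# i.e. the 12 permutations of (+/-1, +/-1, 0).
neighbors = [p for p in itertools.product([-1, 0, 1], repeat=3)
             if sum(c * c for c in p) == 2]
print(len(neighbors))  # 12

# Joining nearest-neighbor centers gives the cuboctahedron's edges:
edges = [(a, b) for i, a in enumerate(neighbors) for b in neighbors[i + 1:]
         if sum((x - y) ** 2 for x, y in zip(a, b)) == 2]
print(len(edges))  # 24, matching the cuboctahedron's 24 edges
```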
Having discovered this, Kepler started looking for other “rhombic polyhedra”. The rhombic dodecahedron he found has rhombuses composed of pairs of equilateral triangles. But by 1619 Kepler had also looked at golden rhombuses—and had found the rhombic triacontahedron, and drew a nice picture of it in his book, right next to the rhombic dodecahedron:
Kepler actually had an immediate application for these rhombic polyhedra: he wanted to use them, along with the cube, to make a nested-spheres model that would fit the orbital periods of the four moons of Jupiter that Galileo had discovered in 1610.
Why didn’t Kepler discover the rhombic hexecontahedron? I think he was quite close. He looked at non-convex “star” polyhedra. He looked at rhombic polyhedra. But I guess for his astronomical theories he was satisfied with the rhombic triacontahedron, and looked no further.
In the end, of course, it was Kepler’s laws—which have nothing to do with polyhedra—that were Kepler’s main surviving contribution to astronomy. But Kepler’s work on polyhedra—albeit done in the service of a misguided physical theory—stands as a timeless contribution to mathematics.
Over the next three centuries, more polyhedra, with various forms of regularity, were gradually found—and by the early 1900s there were many known to mathematicians:
But, so far as I can tell, the rhombic hexecontahedron was not among them. And instead its discovery had to await the work of a certain Helmut Unkelbach. Born in 1910, he got a PhD in math at the University of Munich in 1937 (after initially studying physics). He wrote several papers about conformal mapping, and—perhaps through studying mappings of polyhedral domains—was led in 1940 to publish a paper (in German) about “The Edge-Symmetric Polyhedra”.
His goal, he explains, is to exhaustively study all possible polyhedra that satisfy a specific, though new, definition of regularity: that their edges are all the same length, and these edges all lie in some symmetry plane of the polyhedron. The main result of his paper is a table containing 20 distinct polyhedra with that property:
Most of these polyhedra Unkelbach knew to be already known. But he singles out three types that he thinks are new: two hexakisoctahedra (or disdyakis dodecahedra), two hexakisicosahedra (or disdyakis triacontahedra), and what he calls the Rhombenhexekontaeder, or in English, the rhombic hexecontahedron. He clearly considers the rhombic hexecontahedron his prize specimen, including a photograph of a model he made of it:
How did he actually “derive” the rhombic hexecontahedron? Basically, he started from a dodecahedron, and identified its two types of symmetry planes:
Then he subdivided each face of the dodecahedron:
Then he essentially considered pushing the centers of each face in or out to a specified multiple α of their usual distance from the center of the dodecahedron:
For α < 1, the resulting faces don’t intersect. But for most values of α, they don’t have equal-length sides. That happens only for one specific value of α, and in that case the resulting polyhedron is exactly the rhombic hexecontahedron.
Unkelbach actually viewed his 1940 paper as a kind of warmup for a study of more general “k-symmetric polyhedra” with looser symmetry requirements. But it was already remarkable enough that a mathematics journal was being published at all in Germany after the beginning of World War II, and soon after the paper, Unkelbach was pulled into the war effort, spending the next few years designing acoustic-homing torpedoes for the German navy.
Unkelbach never published on polyhedra again, and died in 1968. After the war he returned to conformal mapping, but also started publishing on the idea that mathematical voting theory was the key to setting up a well-functioning democracy, and that mathematicians had a responsibility to make sure it was used.
But even though the rhombic hexecontahedron appeared in Unkelbach’s 1940 paper, it might well have languished there forever, were it not for the fact that in 1946 a certain H. S. M. (“Donald”) Coxeter wrote a short review of the paper for the (fairly new) American Mathematical Reviews. His review catalogs the polyhedra mentioned in the paper, much as a naturalist might catalog new species seen on an expedition. The high point is what he describes as “a remarkable rhombic hexecontahedron”, for which he reports that “its faces have the same shape as those of the triacontahedron, of which it is actually a stellation”.
Polyhedra were not exactly a hot topic in the mathematics of the mid-1900s, but Coxeter was their leading proponent—and was connected in one way or another to pretty much everyone who was working on them. In 1948 he published his book Regular Polytopes. It describes in a systematic way a variety of families of regular polyhedra, in particular showing the great stellated triacontahedron (or great rhombic triacontahedron)—which effectively contains a rhombic hexecontahedron:
But Coxeter didn’t explicitly mention the rhombic hexecontahedron in his book, and while it picked up a few mentions from polyhedron aficionados, the rhombic hexecontahedron remained a basically obscure (and sometimes misspelled) polyhedron.
Crystals had always provided important examples of polyhedra. But by the 1800s, with atomic theory increasingly established, there began to be serious investigation of crystallography, and of how atoms are arranged in crystals. Polyhedra made a frequent appearance, in particular in representing the geometries of repeating blocks of atoms (“unit cells”) in crystals.
By 1850 it was known that there were basically only 14 possible such geometries; among them is one based on the rhombic dodecahedron. A notable feature of these geometries is that they all have specific two-, three-, four- or six-fold symmetries—essentially a consequence of the fact that only certain polyhedra can tessellate space, much as in 2D the only regular polygons that can tile the plane are squares, triangles and hexagons.
But what about for non-crystalline materials, like liquids or glasses? People had wondered since before the 1930s whether at least approximate five-fold symmetries could exist there. You can’t tessellate space with regular icosahedra (which have five-fold symmetry), but maybe you could at least have icosahedral regions with little gaps in between.
None of this was settled when in the early 1980s electron diffraction crystallography on a rapidly cooled aluminum-manganese material effectively showed five-fold symmetry. There were already theories about how this could be achieved, and within a few years there were also electron microscope pictures of grains that were shaped like rhombic triacontahedra:
And as people imagined how these triacontahedra could pack together, the rhombic hexecontahedron soon made its appearance—as a “hole” in a cluster of 12 rhombic triacontahedra:
At first it was referred to as a “20-branched star”. But soon the connection with the polyhedron literature was made, and it was identified as a rhombic hexecontahedron.
Meanwhile, the whole idea of making things out of rhombic elements was gaining attention. Michael Longuet-Higgins, longtime oceanographer and expert on how wind makes water waves, jumped on the bandwagon, filing a patent in 1987 for a toy based on magnetic rhombohedral blocks that could make a “Kepler Star” (rhombic hexecontahedron) or a “Kepler Ball” (rhombic triacontahedron):
And—although I only just found this out—the rhombohedral blocks that we considered in 2009 for widespread “Spikey making” were actually produced by Dextro Mathematical Toys (aka Rhombo.com), operating out of Longuet-Higgins’s house in San Diego.
The whole question of what can successfully tessellate space—or even tile the plane—is a complicated one. In fact, the general problem of whether a particular set of shapes can be arranged to tile the plane has been known since the early 1960s to be formally undecidable. (One might verify that 1000 of these shapes can fit together, but it can take arbitrarily more computational effort to figure out the answer for more and more of the shapes.)
People like Kepler presumably assumed that if a set of shapes was going to tile the plane, they must be able to do so in a purely repetitive pattern. But following the realization that the general tiling problem is undecidable, Roger Penrose in 1974 came up with two shapes that could successfully tile the plane, but not in a repetitive way. By 1976 Penrose (as well as Robert Ammann) had come up with a slightly simpler version:
And, yes, the shapes here are rhombuses, though not golden rhombuses. But with angles 36°, 144° and 72°, 108°, they arrange with 5- and 10-fold symmetry.
By construction, these rhombuses (or, more strictly, shapes made from them) can’t form a repetitive pattern. But it turns out they can form a pattern that can be built up in a systematic, nested way:
And, yes, the middle of step 3 in this sequence looks rather like our flattened Spikey. But it’s not exactly right; the aspect ratios of the outer rhombuses are off.
But actually, there is still a close connection. Instead of operating in the plane, imagine starting from half a rhombic triacontahedron, made from golden rhombuses in 3D:
Looking at it from above, it looks exactly like the beginning of the nested construction of the Penrose tiling. If one keeps going, one gets the Penrose tiling:
Looked at “from the side” in 3D, one can tell it’s still just identical golden rhombuses:
Putting four of these “Wieringa roofs” together, one can form exactly the rhombic hexecontahedron:
But what’s the relation between these nested constructions and the actual way physical quasicrystals form? It’s not yet clear. But it’s still neat to see even hints of rhombic hexecontahedra showing up in nature.
And historically it was through their discussion in quasicrystals that Sándor Kabai came to start studying rhombic hexecontahedra with Mathematica, which in turn led Eric Weisstein to find out about them, which in turn led them to be in Mathematica and the Wolfram Language, which in turn led me to pick one for our logo. And in recognition of this, we print the nestedly constructed Penrose tiling on the inside of our paper Spikey:
Our Wolfram|Alpha Spikey burst onto the scene in 2009 with the release of Wolfram|Alpha. But we still had our long-running and progressively evolving Mathematica Spikey too. So when we built a new European headquarters in 2011 we had not just one, but two Spikeys vying to be on it.
Our longtime art director Jeremy Davis came up with a solution: take one Spikey, but “idealize” it, using just its “skeleton”. It wasn’t hard to decide to start from the rhombic hexecontahedron. But then we flattened it (with the best ratios, of course)—and finally ended up with the first implementation of our now-familiar logo:
When I started writing this piece, I thought the story would basically end here. After all, I’ve now described how we picked the rhombic hexecontahedron, and how mathematicians came up with it in the first place. But before finishing the piece, I thought, “I’d better look through all the correspondence I’ve received about Spikey over the years, just to make sure I’m not missing anything.”
And that’s when I noticed an email from June 2009, from an artist in Brazil named Yolanda Cipriano. She said she’d seen an article about Wolfram|Alpha in a Brazilian news magazine—and had noticed the Spikey—and wanted to point me to her website. It was now more than nine years later, but I followed the link anyway, and was amazed to find this:
I read more of her email: “Here in Brazil this object is called ‘Giramundo’ or ‘Flor Mandacarú’ (Mandacaru Flower) and it is an artistic ornament made with [tissue paper]”.
What?! There was a Spikey tradition in Brazil, and all these years we’d never heard about it? I soon found other pictures on the web. Only a few of the Spikeys were made with paper; most were fabric—but there were lots of them:
I emailed a Brazilian friend who’d worked on the original development of Wolfram|Alpha. He quickly responded “These are indeed familiar objects… and to my shame I was never inquisitive enough to connect the dots”—then sent me pictures from a local arts and crafts catalog:
But now the hunt was on: what were these things, and where had they come from? Someone at our company volunteered that actually her great-grandmother in Chile had made such things out of crochet—and always with a tail. We started contacting people who had put up pictures of “folk Spikeys” on the web. Quite often all they knew was that they got theirs from a thrift shop. But sometimes people would say that they knew how to make them. And the story always seemed to be the same: they’d learned how to do it from their grandmothers.
The typical way to build a folk Spikey—at least in modern times—seems to be to start off by cutting out 60 cardboard rhombuses. The next step is to wrap each rhombus in fabric—and finally to stitch them all together:
OK, but there’s an immediate math issue here. Are these people really correctly measuring out 63° golden rhombuses? The answer is typically no. Instead, they’re making 60° rhombuses out of pairs of equilateral triangles—just like the standard diamond shapes used in quilts. So how then does the Spikey fit together? Well, 60° is not far from 63°, and if you’re sewing the faces together, there’s enough wiggle room that it’s easy to make the polyhedron close even without the angles being precisely right. (There are also “quasi-Spikeys” that—as in Unkelbach’s construction—don’t have rhombuses for faces, but instead have pointier “outside triangles”.)
Folk Spikeys on the web are labeled in all sorts of ways. The most common is as “Giramundos”. But quite often they are called “Estrelas da Felicidade” (“stars of happiness”). Confusingly, some of them are also labeled “Moravian stars”—but actually, Moravian stars are different and much pointier polyhedra (most often heavily augmented rhombicuboctahedra) that happen to have recently become popular, particularly for light fixtures.
Despite quite a bit of investigation, I still don’t know what the full history of the “folk Spikey” is. But here’s what I’ve found out so far. First, at least what survives of the folk Spikey tradition is centered around Brazil (even though we have a few stories of other appearances). Second, the tradition seems to be fairly old, definitely dating from well before 1900 and quite possibly several centuries earlier. So far as I can tell—as is common with folk art—it’s a purely oral tradition, and so far I haven’t found any real historical documentation about it.
My best information has come from a certain Paula Guerra, who sold folk Spikeys at a tourist-oriented cafe she operated a decade ago in the historic town of São Luiz do Paraitinga. She said people would come into her cafe from all over Brazil, see the folk Spikeys and say, “I haven’t seen one of those in 50 years…”
Paula herself learned about folk Spikeys (she calls them “stars”) from an older woman living on a multigenerational local family farm, who’d been making them since she was a little girl, and had been taught how to do it by her mother. Her procedure—which seems to have been typical—was to get cardboard from anywhere (originally, things like hat boxes), then to cover it with fabric scraps, usually from clothes, then to sew the whole perhaps-6″-across object together.
How old is the folk Spikey? Well, we only have oral tradition to go by. But we’ve tracked down several people who saw folk Spikeys being made by relatives who were born around 1900. Paula said that a decade ago she’d met an 80-year-old woman who told her that when she was growing up on a 200-year-old coffee farm there was a shelf of folk Spikeys from four generations of women.
At least part of the folk Spikey story seems to center around a mother-daughter tradition. Mothers, it is said, often made folk Spikeys as wedding presents when their daughters went off to get married. Typically the Spikeys were made from scraps of clothes and other things that would remind the daughters of their childhood—a bit like how quilts are sometimes made for modern kids going to college.
But for folk Spikeys there was apparently another twist: it was common that before a Spikey was sewn up, a mother would put money inside it, for her daughter’s use in an emergency. The daughter would then keep her Spikey with her sewing supplies, where her husband would be unlikely to pick it up. (Some Spikeys seem to have been used as pincushions—perhaps providing an additional disincentive for them to be picked up.)
What kinds of families had the folk Spikey tradition? Starting around 1750 there were many coffee and sugar plantations in rural Brazil, far from towns. And until perhaps 1900 it was common for farmers from these plantations to get brides—often as young as 13—from distant towns. And perhaps these brides—who were typically from well-off families of Portuguese descent, and were often comparatively well educated—came with folk Spikeys.
In time the tradition seems to have spread to poorer families, and to have been preserved mainly there. But around the 1950s—presumably with the advent of roads and urbanization and the move away from living on remote farms—the tradition seems to have all but died out. (In rural schools in southern Brazil there were however apparently girls in the 1950s being taught in art classes how to make folk Spikeys with openings in them—to serve as piggy banks.)
Folk Spikeys seem to have shown up with different stories in different places around Brazil. In the southern border region (near Argentina and Uruguay) there’s apparently a tradition that the “Star of St. Miguel” (aka folk Spikey) was made in villages by healer women (aka “witches”), who were supposed to think about the health of the person being healed while they were sewing their Spikeys.
In other parts of Brazil, folk Spikeys sometimes seem to be referred to by the names of flowers and fruits that look vaguely similar. In the northeast, “Flor Mandacarú” (after flowers on a cactus). In tropical wetland areas, “Carambola” (after star fruit). And in central forest areas “Pindaíva” (after a spiky red fruit).
But the most common current name for a folk Spikey seems to be “Giramundo”—an apparently not-very-recent Portuguese constructed word meaning essentially “whirling world”. The folk Spikey, it seems, was used like a charm, and was supposed to bring good luck as it twirled in the wind. The addition of tails seems to be recent, but apparently it was common to hang up folk Spikeys in houses, perhaps particularly on festive occasions.
It’s often not clear what’s original, and what’s a more recent tradition that happens to have “entrained” folk Spikeys. In the Three Kings’ Day parade (as in the three kings from the Bible) in São Luiz do Paraitinga, folk Spikeys are apparently used to signify the Star of Bethlehem—but this seems to just be a recent thing, definitely not indicative of some ancient religious connection.
We’ve found a couple of examples of folk Spikeys showing up in art exhibitions. One was in a 1963 exhibition about folk art from northeastern Brazil organized by architect Lina Bo Bardi. The other, which happens to be the largest 3D Spikey I’ve ever seen, was in a 1997 exhibition of work by architect and set designer Flávio Império:
So… where did the folk Spikey come from? I still don’t know. It may have originated in Brazil; it may have come from Portugal or elsewhere in Europe. The central use of fabrics and sewing needed to make a “60° Spikey” work might argue against an Amerindian or African origin.
One modern Spikey artisan did say that her great-grandmother—who made folk Spikeys and was born in the late 1800s—came from the Romagna region of Italy. (Another said she learned about folk Spikeys from her French-Canadian grandmother.) And I suppose it’s conceivable that at one time there were folk Spikeys all over Europe, but they died out enough generations ago that no oral tradition about them survives. Still, while a decent number of polyhedra appear, for example, in European paintings from earlier centuries, I don’t know of a single Spikey among them. (I also don’t know of any Spikeys in historical Islamic art.)
But ultimately I’m pretty sure that somewhere there’s a single origin for the folk Spikey. It’s not something that I suspect was invented more than once.
I have to say that I’ve gone on “art origin hunts” before. One of the more successful was looking for the first nested (Sierpiński) pattern—which eventually led me to a crypt in a church in Italy, where I could see the pattern being progressively discovered, in signed stone mosaics from just after the year 1200.
So far the Spikey has proved more elusive—and it certainly doesn’t help that the primary medium in which it appears to have been explored involved fabric, which doesn’t keep the way stone does.
Whatever its ultimate origins, Spikey serves us very well as a strong and dignified icon. But sometimes it’s fun to have Spikey “come to life”—and over the years we’ve made various “personified Spikeys” for various purposes:
When you use Wolfram|Alpha, it’ll usually show its normal, geometrical Spikey. But just sometimes your query will make the Spikey “come to life”—as it does for pi queries on Pi Day:
Polyhedra are timeless. You see a polyhedron in a picture from 500 years ago and it’ll look just as clean and modern as a polyhedron from my computer today.
I’ve spent a fair fraction of my life finding abstract, computational things (think cellular automaton patterns). And they too have a timelessness to them. But—try as I might—I have not found much of a thread of history for them. As abstract objects they could have been created at any time. But in fact they are modern, created because of the conceptual framework we now have, and with the tools we have today—and never seen before.
Polyhedra have both timelessness and a rich history that goes back thousands of years. In their appearance, polyhedra remind us of gems. And finding a certain kind of regular polyhedron is a bit like finding a gem out in the geometrical universe of all possible shapes.
The rhombic hexecontahedron is a wonderful such gem, and as I have explored its properties, I have come to have even more appreciation for it. But it is also a gem with a human story—and it is so interesting to see how something as abstract as a polyhedron can connect people across the world with such diverse backgrounds and objectives.
Who first came up with the rhombic hexecontahedron? We don’t know, and perhaps we never will. But now that it is here, it’s forever. My favorite polyhedron.
To comment, please visit the copy of this post at the Stephen Wolfram blog »
Inspired by a sculpture featuring anamorphic deformation by reflection in a spherical mirror, Erik Mahieu finds the math behind the project and builds his own reflective sphere out of a Christmas ball ornament. This is an excellent project for the artistically inclined mathematician.
The November 26 NASA landing of the InSight lander was described as “seven minutes of terror.” Jeff Bryant uses GeoGraphics to visualize the landing, offering a computational viewpoint of the details of what it takes to successfully land a robotic lander on Mars.
The Wolfram Language is all you need to create a Christmas tree with a conducting branch that holds a candle as a baton, performing any of your favorite Christmas songs! In this version, the tree’s branches move in synchrony to the different instruments playing O Tannenbaum. Add ornaments and a light snowfall to complete a computational white Christmas.
Where is the oldest known intact shipwreck located? This 2,400-year-old archeological gem was recently discovered two kilometers deep at the bottom of the Black Sea, with the exact location undisclosed. Embark on some computational detective work with Vitaliy Kaurov using a wide scope of Wolfram Language tools, from curated data to geography, geometry and more.
Bill Gosper, renowned mathematician and programmer, writes a computational essay on the continued logarithm of π. A continued logarithm is an arbitrarily long bit string, approximating a real number arbitrarily well. Bill does an excellent job making math interesting and accessible, so share this with your math-curious friends!
Using several past Christmas-related posts on Wolfram Community together makes for a charming final product, as Vitaliy demonstrates in this post. The code used for this animation was taken from postcards and other projects created as far back as six years ago. Enjoy the combined talents of users of the Wolfram Language who may have never heard of each other—a true Christmas miracle.
If you haven’t joined Wolfram Community yet, please do so! You can chime in on discussions like the ones featured in this post, show off your own work in groups of your interest and browse the complete list of Staff Picks.
Classical treatments of real algebraic curves rely on rational numbers, making analysis possible only in the theoretical sense. But as author Barry H. Dayton shows in this book, the Wolfram Language’s machine-precision number capabilities enable accurate analysis of extremely complicated curves that often lack rational points. The book is written for those with some understanding of partial derivatives and calculus, as well as a basic knowledge of the Wolfram Language.
A Numerical Approach to Real Algebraic Curves with the Wolfram Language was published through Wolfram Media, the publishing unit of the Wolfram Group, and typeset with our book-publishing template. Dayton is the first Wolfram Media–published author not affiliated with Wolfram Research, with many more such authors to come.
More books from Wolfram Media are in the works, so watch this space! If you’re an author looking to publish your own Wolfram technology–related manuscript, please reach out to Wolfram Media.
Author Eugene Don adds this Mathematica-focused textbook to Schaum’s Outlines, a series of supplementary textbooks trusted by over 40 million students since the 1930s. In this book, Don introduces the reader to the Wolfram Language, gives a reference index of essential Mathematica functions and provides 750 exercises (with answers included). Schaum’s Outline of Mathematica and the Wolfram Language can be used as a support text for all major textbooks required for courses using Mathematica.
Fifty years ago, an approach to reaction kinetics based on mathematical models (formal reaction kinetics) emerged. In the years since, there has been accelerated development in deterministic and stochastic kinetics. Based on recent papers, authors János Tóth, Attila László Nagy and Dávid Papp present the most important concepts and results in reaction kinetics in an effort to make the material accessible to a wider audience. The book is accompanied by the authors’ Mathematica package ReactionKinetics.
Understanding quantitative aspects of finance and policy analysis is important for the education of students in various disciplines. However, those students often have little quantitative background. Nicholas L. Georgakopoulos discusses these topics and explains how to illustrate them using Mathematica’s powerful visual capabilities. This book introduces the reader to Mathematica, shows how to use Mathematica to produce illustrations and emphasizes discussion of finance and policy.
With their popular book now available in Chinese (as well as English and Japanese), authors Cliff Hastings, Kelvin Mischo and Michael Morrison have expanded their catch-all guide to learning and understanding Mathematica. Updates include new content for 3D printing, graphics capabilities and working with audio, data, dates and linguistic data.
Starting from a collection of simple computer experiments—illustrated in the book by striking computer graphics—Stephen Wolfram shows how their unexpected results force a whole new way of looking at the operation of our universe. Wolfram uses his approach to tackle a remarkable array of fundamental problems in science: from the origin of the Second Law of Thermodynamics to the development of complexity in biology, the computational limitations of mathematics, the possibility of a truly fundamental theory of physics and the interplay between free will and determinism.
Free shipping
Get free ground shipping on A New Kind of Science (paperback) through December 31, 2018.
Sometimes a syllabus is set in stone. You’ve got to cover X, Y and Z, and no amount of reworking or shifting assignments around can change that. Other factors can play a role too: limited time, limited resources or even a bit of nervousness at trying something new.
But what if you’d like to introduce some new ideas into your lessons—ideas like digital citizenship or computational thinking? Introducing computational thinking to fields that are not traditionally part of STEM can sometimes be a challenge, so feel free to share this journey with your children’s teachers, friends and colleagues.
Computational thinking is a mindset that is complemented by technology, not necessarily bound by it. In fact, some concepts can be as simple as adding a reflective assessment to the end of your lessons, allowing students a chance to uncover their thought processes and engage in metacognitive thought.
While computational thinking most often relates to coding (unsurprising given its connection to computer science), it’s really a way of looking at problems. This means that computational thinking can be introduced into all sorts of classrooms—not just in STEM classes, but in art and music classes, and even in physical education.
Like digital citizenship, computational thinking is a useful skill for students to master before they enter the “real world.” Practicing computational thought enables them to pick up new technologies, utilizing them for work and play. Computational thinking is a transferable skill, and it can act as a lens through which students can view problems outside the classroom.
In a way, practicing computational thinking through such tasks as deconstruction and experimentation can help to abolish the fear of failure that stymies a growth mindset. After all, if failure is a necessary component of success, what’s there to fear? Students who feel comfortable with limited information and unknown variables are more prepared to tackle new ideas.
Given that computational thinking can provide real value for your students, how can you add it to your lessons?
One component of computational thinking is pattern recognition. Pattern recognition can help to determine the structure of a system as well as find inefficiencies, which is perfect for cultivating an engineering mindset. It can also help to identify the “variables” of a given problem.
One way to practice pattern recognition is to have students look at data and see where they can find repeating data points. The typical thematic analysis assignment found in many English language arts (ELA) classes relates well to the concept of pattern recognition. When looking for symbols in a text, students are searching for repeating patterns.
Going beyond symbols, some digital humanists use computers to analyze punctuation or the overall sentiment of a book or corpus that the human eye might not catch. For example, this teacher has his students perform “distant reading” with Wolfram Mathematica, leading to insights on everything from speeches to rap lyrics:
textRaw = Import["http://www.gutenberg.org/cache/epub/608/pg608.txt"];

StringPosition[textRaw, {"A SPEECH FOR", "End of the Project"}]

areo = StringTake[textRaw, {635, 102383}];

Row[WordCloud /@ {textRaw, areo}]
If doing such a deep dive isn’t possible, you can still have students be more thoughtful about their theses. Perhaps you can challenge them to look for unusual patterns in their texts. What if every student drew a noun from a hat prior to reading a novel, and they were responsible for noting when that noun appeared?
Take the noun “food,” for example. Primed to notice every incident in which a character eats a meal, a student could begin to see how food is used in a particular novel—as an abstracted symbol, or an incitement of plot or even a tool for characterization.
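If a class has computer access, that noun-tracking exercise takes only a couple of lines of Wolfram Language. In this sketch, the Project Gutenberg URL and the noun “food” are illustrative choices:

```wolfram
(* Import a public-domain novel; this particular URL is illustrative *)
text = Import["http://www.gutenberg.org/files/1342/1342-0.txt"];

(* How many times does the chosen noun appear as a whole word? *)
StringCount[text, WordBoundary ~~ "food" ~~ WordBoundary, IgnoreCase -> True]

(* Character positions of each occurrence, e.g. to see where they cluster *)
StringPosition[text, "food", IgnoreCase -> True][[All, 1]]
```

Plotting those positions with ListPlot gives a quick visual of where in the novel the motif concentrates.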
Just as pattern recognition can be a helpful tool in discovering the possible inner workings of a system, deconstruction and reconstruction allow students to demolish and rebuild the systems they discover. Systems can be found in set formulas, interconnected biological processes or even historical structures.
Changing built-in systems and tinkering with “variables” is the basis of coming up with new algorithms for solving problems. Understanding systems is also inherently valuable, even in the humanities—grammar and syntax underpin language, for example, while “soft skills” like communication are wrapped up in social mores.
Going back to the ELA classroom, perhaps looking at a broad overview of a certain genre could help students see the commonalities of that genre’s books. For a fun question, you could ask, “What makes a graphic novel?” This could be a good way to introduce the idea of critical lenses.
You could also use charts or graphs to deconstruct a genre into its core components. If one component changes, is the book still a part of that genre? You could connect this idea to generators and bots, showing how traits and qualities can be remixed into new forms. (YA readers may enjoy this John Green plot generator!)
Students could add new “members” to the systems they uncover. For example, they could pitch a new graphic novel, or brainstorm alien biology, or try their hand at world-building for a fictional country. Deconstructing a math problem using Wolfram|Alpha could lead to insights on the whys and hows of a particular formula.
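The remix idea above can be sketched in a few lines of Wolfram Language; the trait lists here are invented purely for illustration:

```wolfram
(* Hypothetical trait lists to remix into random story pitches *)
protagonists = {"a reluctant astronaut", "a retired detective", "a talking crow"};
settings = {"a flooded city", "a generation ship", "rural Illinois"};
conflicts = {"a stolen memory", "an impossible election", "a vanishing forest"};

(* Each call assembles one random pitch from the three lists *)
pitch[] := StringRiffle[{"A story about", RandomChoice[protagonists],
    "in", RandomChoice[settings], "facing", RandomChoice[conflicts]}]

pitch[]
```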
Part of computational thinking relies on students’ lack of fear in getting an answer “wrong,” particularly during the early exploratory stages of a project. Fear of failure is common, and it persists well into adulthood. These feelings can stand in the way of making real progress, especially in a classroom where peer pressure is paramount.
To a certain extent, the only way to get a classroom to feel like a true learning environment is to make every student feel that failure is okay. Doing so relies on understanding your students as people, as dealing with personality clashes and students’ individual backgrounds is a huge part of classroom management. Still, one way to banish a fear of failure is to consider ways of incorporating small wins into lessons.
In some fields such as writing or art, professionals must fight through rejection on a near-daily basis. To counter that feeling of failure, some people have created games in order to get past their initial knee-jerk reaction of despair. Some writers and artists hold “100 rejections” challenges, aiming to collect rejection letters. Others engage in “rejection therapy,” in which failure is the end goal, not an unfortunate “game over” end state.
Why not gamify failure in the classroom as well? One example in higher ed comes from an anecdote found in the book Art and Fear. In it, a professor divided a pottery class into two groups. While the first group had to submit one pot for a final grade, the second group had to submit a specific poundage of pots. In the end, members of the second group had the highest grades, as they were unburdened by the stress of perfection. They were able to fail over and over.
To alleviate this stress for your students, you could try emphasizing process over perfection. Rather than having students submit a long-form story as a capstone assignment, perhaps they could be graded on the amount of flash fiction they produce. The very process of iterating story after story imparts useful writing skills. In fact, there might be some unconscious pattern recognition as they go along, wherein the students notice their preferred tropes or storytelling techniques.
You could also emphasize reflection in STEM classes. More and more, educators are stressing the value of writing in math class. For a math assignment, students could notate their problem-solving processes through a program like the notebook-based Mathematica. By reflecting on their decisions through inline comments, students will use metacognition, a powerful learning tool.
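In a notebook, those reflective annotations can sit right next to the computation. A minimal sketch of what a student’s annotated solution might look like:

```wolfram
(* Step 1: I solved the quadratic to find its roots *)
roots = Solve[x^2 - 5 x + 6 == 0, x]

(* Step 2: I checked my factorization by expanding it back out *)
Expand[(x - 2) (x - 3)]

(* Reflection: I chose factoring over the quadratic formula because the
   coefficients were small integers *)
```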
Similar to reconstruction, experimentation results in new ideas being extrapolated from recognized patterns. Students can create experiments to figure out what makes systems tick—and without a fear of failure, tinkering becomes play.
Obviously in STEM fields, experimentation is not only expected, but celebrated. But even in the humanities, students can intuit how cause and effect works. In music, changing between a minor key and a major key can shift the perceived mood of a piece, at least to Western ears. Color swaps in a piece of art can have emotive effects.
One idea for experimental writing assignments could be to have students create choose-your-own-adventure stories with branching paths. This type of writing emphasizes the “if this, then that” thought process that’s so vital to creating algorithms or step-by-step instructions. There are several game development–based tools available for creating these stories, but even pen and paper can work.
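Even without a dedicated tool, a branching story maps naturally onto conditional logic. A minimal Wolfram Language sketch (the scenes and choices are invented):

```wolfram
(* Each choice is an "if this, then that" rule, the same logic behind algorithms *)
scene[choice_] := Switch[choice,
   "open the door", "You step into a moonlit library.",
   "climb the stairs", "The stairs creak; a light flickers above.",
   _, "You hesitate, and the moment passes."]

scene["open the door"]
```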
Some of these ideas might seem a bit simple. And that’s because they are! Computational thinking doesn’t have to be complicated to be useful.
Even with the rote standby of analyzing a text for themes and characters, you can cement the idea of recognizing patterns or breaking down a system. As you become more comfortable with computational thinking, and if the IT resources are available, you can then begin to introduce technology into your lessons. For example, using the Wolfram Language to dig deep into problems using code could vastly aid in analysis and experimentation.
As more people recognize the value of computational thinking, educators are publishing their own lesson plans and ideas online. This online book, for example, offers a treasure-trove of ideas for incorporating computational thinking into lessons by subject. Examples range from robotics to data analysis and more.
Another useful resource is Computational Thinking Initiatives (CTI), a nonprofit group devoted to sharing tools and resources for educators looking to introduce computational thinking into their classrooms. They share pre-developed lesson plans built using the Wolfram Language, offer an AI League and coding challenges, and help engage in community efforts to spread the word about computational thinking. If you want more personalized advice, you can reach out to them with questions.
If you’re interested in exploring the Wolfram Language, you can check out An Elementary Introduction to the Wolfram Language. Otherwise, take a look around this blog under the Education tag to see how other educators are using Wolfram Research tools in their lessons.
Julian Francis, a longtime user of the Wolfram Language, contacted us with a potential submission for the Wolfram Neural Net Repository. The repository consists of models that researchers at Wolfram have either trained in house or converted from the original source, then curated, thoroughly tested and rendered in a rich computable format. Julian was our very first user to go through the process of converting and testing the nets.
We thought it would be interesting to interview him on the entire process of converting the models for the repository so that he could share his experiences and future plans to inspire others.
As a child, I was given a ZX81 (an early British home computer). Inspired by sci-fi television programs, I became fascinated by the idea of endowing the ZX81 with artificial intelligence. This was a somewhat ambitious goal for a computer with 1 KB of RAM! By the time I was at university, I felt that general AI was too hard and ill-defined to make good progress on, so I turned my attention to computer vision. I took the view that by studying computer vision, a field with a more clearly defined objective, we might learn some principles along the way that would be relevant to artificial intelligence. At that time, I was interested in what would now be called deformable part models.
After university I was busy developing my career in IT, and my interest in AI and computer vision waned a little until around 2006, when I stumbled on a book by David MacKay on inference theory and pattern recognition. The book dealt extensively with probabilistic graphical models, which I thought might have strong applications in computer vision (particularly placing deformable part models on a more rigorous mathematical basis). However, in practice I found it was still difficult to build good models, and defining probability distributions over pixels seemed exceptionally challenging. I did keep up my interest in the field, but around 2015 I became aware that major progress in this area was being made by deep learning models (the modern terminology for describing neural networks, with a particular emphasis on having many layers in the network), so I was intrigued by this new approach. In 2016, I’d written a small deep learning library in Mathematica (now retired) to validate those ideas. It would be considered relatively simple by modern standards, but it was good enough to train models such as MNIST, CIFAR-10, basic face detection, etc.
I first came across the repository in a blog by Stephen Wolfram earlier this year. I am a regular reader of his blogs, and find them helpful for keeping up with the latest developments and understanding how they fit in with the overall framework of the Wolfram Language.
The Wolfram Neural Net Repository has a wide range of high-quality models available covering topics such as speech recognition, language modeling and computer vision. The computer vision models (my particular interest) are extensive and include classification, object detection, keypoint detection, mask detection and style transfer models.
I find the Wolfram Neural Net Repository to be very well organized, and it’s straightforward to find relevant models. The models are very user friendly; a model can be loaded in a single line of code. The documentation is also very helpful with straightforward examples showing you how to use the models. From the time you identify a model in the relevant repository, you can be up, running and using that model against your own data/images within a matter of minutes.
Other neural net frameworks, in contrast to the Wolfram Neural Net Repository, can be time-consuming to install and set up. In many frameworks, the architecture is separate from the trained parameters of the model, so you have to manually install each of them and then configure them to work together. The files are not necessarily directly usable, but may require installed tools to unpack and decompress them. Example code can also come with its own set of complex dependencies, all of which will need to be downloaded, installed and configured. Additionally, the deep learning framework itself may not be available on your platform in a convenient form—you may be expected to download, compile and build it yourself. And that process itself can require its own toolchain, which will need to be installed. These processes are not always well documented, and there are many things that can go wrong, requiring a trawl around internet forums to see how other people have resolved these problems. While my experience is that these things can be done, it requires considerable systems knowledge and is time-consuming to resolve.
I’d read several research papers on arXiv and other academic websites. My experience often was that the papers could be difficult to follow, details of the algorithms were missing and it was hard to successfully implement them from scratch. I would search GitHub for reference implementations with source code. There are a number of deep learning frameworks out there, and it was becoming clear that several people were translating models from one framework to another. Additionally, I had converted a face-detection model from a deep learning framework I had developed in Mathematica in 2016 to the Mathematica neural network framework in 2017, so I had some experience in doing this.
A difficulty in deep learning is the immense amount of computation required in order to train up models. Transfer learning is the idea of using one trained network in order to initialize a new neural network for a different task, where some of the knowledge needed for the original task will be helpful for this new task. The idea is that this should at least initialize the network in a better starting point, as compared with a completely random initialization. This has proved crucial to enabling researchers to experiment with different architectures in a reasonable time frame, and to enable the field to make good progress.
For example, object detectors are typically organized in two stages. The first stage (the “base” network) is concerned with transforming the raw pixels into a more abstract representation. Then a second stage is concerned with converting that into representations defining which objects are present in the image and where they are. This enables researchers to break down the question of what is a good neural network for object detection into two separate questions: what is a good “base” network for high-level neural activity descriptions of images, and what is a good architecture for converting these to a semantic output representation, e.g. a list of bounding boxes?
Researchers would typically not attempt to train the whole network from a random initialization, but would pick a standard “base” network and use the weights from that trained model to initialize their new model. This has two advantages. First, it can cut training time from weeks to days or even hours. Second, the datasets for image classification are much larger than those currently available for object detection, so the base network benefits from having been trained on millions of images, whereas an object-detection dataset might offer only tens of thousands of training examples. This approach is a good example of transfer learning.
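In the Wolfram Language, this reuse pattern might look roughly like the following sketch. The model name is a real repository entry, but the layer name "fc7" and the ten-class head are assumptions for illustration:

```wolfram
(* Load a pretrained image classifier to serve as the "base" network *)
base = NetModel["VGG-16 Trained on ImageNet Competition Data"];

(* Keep the feature-extraction layers; "fc7" is an assumed layer name *)
features = NetTake[base, "fc7"];

(* Attach a fresh head for a hypothetical 10-class task; only these
   new layers start from random weights *)
newNet = NetChain[{features, LinearLayer[10], SoftmaxLayer[]}]
```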
I have converted the SSD-VGG-300 Pascal VOC, the SSD-VGG-512 Pascal VOC and the SSD-VGG-512 COCO models. The first two detect objects from the Pascal VOC dataset, which contains twenty object classes (such as cars, horses and people). There is a trade-off between these two models: the second is slower but more accurate.
NetModel["SSD-VGG-300 Trained on PASCAL VOC Data"]

NetModel["SSD-VGG-512 Trained on PASCAL VOC2007, PASCAL VOC2012 and MS-COCO Data"]
The third model detects objects from the Microsoft COCO dataset, which contains eighty different object classes (including the Pascal VOC classes).
NetModel["SSD-VGG-512 Trained on MS-COCO Data"]
These detectors are designed to detect which objects are present in an image and where they are. My main objective was to understand in detail how these models work and to make them available to the Wolfram community in an easy, accessible form. They are a Mathematica implementation of the family of models described in “SSD: Single Shot MultiBox Detector” by Wei Liu et al., a widely cited paper in the field.
I’d envisage these models being used as the object-detection component in a larger system. You could use the model to do a content-based image search in a photo collection, for example. Or it could be used as a component in an object-tracking system. I could imagine it having applications in intruder detection or traffic management. Object detection is a very new technology, and I am sure there can be many applications that haven’t even been considered yet.
Currently, popular neural network–based object detectors can be grouped into two classes: two-stage detectors and single-stage detectors.
The two-stage detectors have two separate networks. The first is an object proposal network, whose task is to determine the location of possible objects in the image. It is not concerned with what type of object it is, just with drawing a bounding box around it; it can produce thousands of bounding boxes for one image. Each of those region proposals is then fed into a second neural network that tries to determine whether it is an actual object and, if so, what type of object it is. R-CNN, Fast/Faster R-CNN and Region Proposal Networks fall into this category.
The single-stage detectors work by passing the image through a single neural network whose output directly contains information on which objects are in the image and where they are. The YOLO family and the Single Shot Detectors (SSD) family fall into this category.
Generally, the two-stage detectors achieve greater accuracy, while the single-stage detectors are much faster. The models I converted are all based on the Single Shot Detector family with a VGG-type base network; their closest relatives are the YOLO detectors. The Wolfram Neural Net Repository includes a YOLO version 2 model; by comparison, the most accurate model I converted is slower but achieves higher accuracy.
I have been a Mathematica user since the summer of 1991, so I have a long familiarity with the language. I find that I can write code that expresses my thoughts at exactly the right level of abstraction. I appreciate the multiparadigm approach whereby you can decide for yourself what works best for your particular problem. By using the Wolfram Language, you gain access to all the functionalities available in the extensive range of packages. I find the code I write in the Wolfram Language is typically shorter and clearer than what I write in other languages.
For people new to deep learning, I recommend a mixture of reading blogs and following a video lecture–based course. Medium hosts a number of blogs that you can search for deep learning topics. Google Plus has a deep learning group that can be a good source for keeping up to date on news in the field. I’d also recommend Andrew Ng’s very popular course on machine learning at Coursera. In 2015, Nando de Freitas gave a course at Oxford University, which I found to be thorough but also very accessible. Andrej Karpathy’s CS231n Winter 2016 course is also very good for beginners. The last two courses can be found on YouTube. After following any of these courses, you should have a reasonable overview of the field. They are not overly mathematical, but a basic knowledge of linear algebra is assumed, and some understanding of the concept of partial differentiation is helpful.
For people new to the Wolfram Language, and especially if you come from a procedural/object-oriented programming background (e.g. C/C++ or Java), I would encourage you to familiarize yourself with concepts such as vectorization (acting on many elements simultaneously), which is usually both more elegant and much faster. I would suggest getting a good understanding of the core language, and then aiming to get at least an overview of the many different packages available. The documentation pages are an excellent way to go about this. Mathematica Stack Exchange can also be a good source of support.
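As a small illustration of that vectorized mindset, compare a loop-style sum of squares with its one-line vectorized equivalent:

```wolfram
(* Procedural style: accumulate the sum of squares in a loop *)
total = 0;
Do[total += i^2, {i, 1, 10^6}];

(* Vectorized style: square the whole range at once, then sum;
   this is typically both shorter and much faster *)
Total[Range[10^6]^2]
```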
It is a very exciting time to be involved in computer vision, and converting models is a great way to understand how they work in detail. I am working on translating a model for an extremely fast object detector, and I have a number of projects that I’d like to do in the future, including face recognition and object detectors that can recognize a wide range of classes of objects. Watch this space!
Most undergraduate college students chase opportunities for internships in New York, Miami, Seattle and particularly San Francisco at very young but large high-tech companies like Uber, Pinterest, Quora, Expedia and similar internet companies. These companies offer the best salaries, perks, bosses, coworkers, catered lunches and other luxurious amenities available in such large cities. You would seldom hear about any of these people pursuing opportunities in small, lesser-known towns like Ames, Iowa, or Laramie, Wyoming—and Champaign, Illinois, where Wolfram Research is based, is one of those smaller towns.
Many students want to go into computer science, as it’s such a rapidly developing field. They especially want to work in those companies on the West Coast. If you’re in a different field, like natural science, you might think there’s nothing beyond on-campus research for work experience. At Wolfram Research, though, there is.
Wolfram Research is a tight-knit company with a relatively small office where everyone can easily get to know each other. Fortunately, I have been in good company for the nearly two years I have worked here. Most of my full-time colleagues are highly qualified, with master’s or doctoral degrees in science or engineering, and I have picked up a lot from their diverse knowledge of the subjects I intend to pursue. Like many of them, I lean toward theoretical physics, applied mathematics and computer science, and that is what has compelled me to keep interning here instead of at companies such as Intel or Boeing. Access to our company library, with its vast collection of books on modern computer science, applied science, mathematics, statistics, intelligent systems and various other subjects, gives everyone a window into work happening at much larger companies, and an incentive to keep picking up new skills and diversifying one’s industrial skill set. Thanks to the influence of the library and my coworkers, I have stayed at Wolfram much longer than most interns in order to develop that skill set.
My name is Parik Kapadia and I have been an intern in the Algorithms R&D department at Wolfram Research for nearly two years now. I am also a student at the University of Illinois at Urbana-Champaign, majoring in electrical and computer engineering with a minor in statistics. I’m also about to complete a Certificate in Data Science offered by the university.
Throughout the time I’ve been working at Wolfram, my projects have allowed me to return to subjects I hadn’t studied for two years, since they related to high-school or first-year-college courses. I’ve also worked on projects far beyond what an entry-level employee would take on. Working on such projects as an intern has given me more experience than I would receive at other companies.
My initial training consisted of solving around 1,200 examples for intermediate and advanced calculus from a well-known textbook, Calculus by James Stewart, used by millions of students all over the world. It had been nearly two years since I completed the three-course calculus sequence required for engineering majors like myself, so this project doubled as a thorough review. The chapters covering Calculus I and II (or AP Calculus AB and BC) had already been completed by previous interns, which meant I only had to complete the last six chapters, covering Calculus III. This final segment covers topics in multivariable calculus, such as vectors, vector differentiation and integration, partial derivatives, multiple integrals and vector calculus. In turn, these topics are used in physics and engineering, most notably electromagnetism, one of my favorite topics in theoretical electrical engineering.
The result of this benchmarking project was a huge collection of around 6,000 problems completely solved using the Wolfram Language. The success of this project eventually led to a course in calculus that was released in September.
Since then, I have been coding an endless string of problems in advanced and applied mathematics from a collection of mathematics textbooks. Although I am an engineering major, I had seldom had the opportunity to study these textbooks, even though graduate-level engineering research may require them as work becomes more interdisciplinary. Having coded more than 10,000 of these examples since completing the calculus book project, I feel like a born-again applied mathematician and theoretical physicist, eager to return to roots I first nourished as a high-school student. I am now hoping to work full time at Wolfram Research in the near future.
My first formal, full-fledged project after the end of the calculus project was to collect a large number of examples regarding the use of the RSolveValue function to determine the limiting behavior of recursive sequences. In order to do this, I looked at all the relevant examples on Mathematics Stack Exchange and compiled a notebook showing how well they worked with RSolveValue. This was a very satisfying project, and you can imagine my thrill when Stephen Wolfram used a similar example to one I collected in his blog post announcing Version 11.2.
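A sketch along the lines of those examples, using the limiting-value form of RSolveValue introduced in Version 11.2 (the recurrence is Newton’s iteration for the square root of 2):

```wolfram
(* Limiting behavior of a recursively defined sequence *)
RSolveValue[{a[n + 1] == (a[n] + 2/a[n])/2, a[1] == 1}, a[Infinity], n]
(* the sequence converges to Sqrt[2] *)
```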
Another interesting project assigned to me was to check more than 1,200 examples for multivariate limits, which was a new feature in Version 11.2 of Mathematica. This required carefully going through all of the examples manually and doing sanity checks to make sure that the results and plots agreed, and to tweak the plots so that they looked elegant. Here, my knowledge of multivariable calculus came to the rescue, and I helped to select 1,000 examples that were used for the blog post “Limits without Limits in Version 11.2.” As you can see, people like myself work in the background at Wolfram to make sure that all publications and products are of the highest quality, and we take pride in maintaining the highest standards.
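A classic example of the kind of multivariate limit checked in that project: along y = 0 the expression tends to 0, while along y = x^2 it tends to 1/2, so no limit exists at the origin.

```wolfram
(* Version 11.2 multivariate-limit syntax; no limit exists here *)
Limit[(x^2 y)/(x^4 + y^2), {x, y} -> {0, 0}]
```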
As a final project, I would like to mention my role in setting up large benchmarks for the asymptotics features in Version 11.3. I did this by collecting examples of differential equations and integrals from around 15 books, starting from undergraduate mathematics and engineering texts to advanced graduate-level discussions of asymptotic expansions. The challenge here was to make sure that the results from the new asymptotics functions agreed with the intuition and some numerical or symbolic comparisons with built-in functions such as DSolve, Integrate, NIntegrate and Series. The complete benchmark ran into around 4,000 examples and boosted the developers’ confidence in this exciting new set of functions, and some of the examples were used in a blog post after Version 11.3 was released.
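A small example of the cross-checking described, comparing an asymptotic series solution (a Version 11.3 function) against the exact answer Cos[x]:

```wolfram
(* Series solution of y'' + y == 0 with y[0] == 1, y'[0] == 0 *)
AsymptoticDSolveValue[{y''[x] + y[x] == 0, y[0] == 1, y'[0] == 0},
  y[x], {x, 0, 6}]

(* The exact solution is Cos[x]; its Maclaurin polynomial should match *)
Normal[Series[Cos[x], {x, 0, 6}]]
```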
A total of 21 months have passed since I began my internship at Wolfram Research, and I am now looking ahead to what comes next. This internship has helped shape me into a budding theoretical physicist, applied mathematician and computer scientist, and I will build my career on the skills and professional footing I have gained while interning at Wolfram Research.
There are endless possibilities for me to apply what I did here, of course, since I gained experience in research and development as well. Government, software and internet companies, data analytics, health care, bioinformatics, computer hardware and a diverse set of other industries all need the expertise I have gained. The projects I completed during my internship have also prepared me for quality assurance work, including debugging and testing Mathematica. Interning at Wolfram has given me the opportunity to make use of my skills and education, learn more about the directions I’d like my career to take and build mutually beneficial relationships. If you’re a college student looking for a similar experience, apply for an internship at Wolfram.
In this video, Nilsson describes how the built-in knowledge, broad subject coverage and intuitive coding workflow of the Wolfram Language were crucial to the success of his course:
Nilsson’s ultimate goal with the course is to encourage computational exploration in his students by showing them applications relevant to their lives and interests. He notes that professionals in the humanities have increasingly turned toward computational methods for their research, but that many students entering the field are lacking in the coding skills and the conceptual understanding to get started. With the Wolfram Language, he is able to expose students to both in a way they find intuitive and easy to follow.
To introduce fundamental concepts, he shows students a pre-built Wolfram Notebook exploration of John Milton’s Areopagitica featuring a range of text analysis functions from the Wolfram Language. First he retrieves the full text from Project Gutenberg using Import:
textRaw = Import["http://www.gutenberg.org/cache/epub/608/pg608.txt"];
He then demonstrates basic strategies for cleaning the text, using StringPosition and StringTake to find and eliminate anything that isn’t part of the actual work (i.e. supplementary content before and after the text):
StringPosition[textRaw, {"A SPEECH FOR", "End of the Project"}]
areo = StringTake[textRaw, {635, 102383}];
To quickly show the difference, he makes a WordCloud of the most common words before and after the cleanup process:
Row[WordCloud /@ {textRaw, areo}]
From here, Nilsson demonstrates some common text analyses and visualizations used in the digital humanities, such as making a Histogram of where the word “books” occurs throughout the piece:
Histogram[StringPosition[areo, "books"][[All, 1]], {5000}]
Or computing the average number of words per sentence with WordCount and TextSentences:
N[WordCount[areo]/Length[TextSentences[areo]]]
Or finding how many unique words are used in the piece with TextWords:
Length[DeleteDuplicates[TextWords[areo]]]
He also discusses additional exploration outside the text itself, such as using WordFrequencyData to find the historical frequency of words (or n-grams) in typical published English text:
DateListPlot[WordFrequencyData[{"war", "peace"}, "TimeSeries"]]
Building this example in a Wolfram Notebook allows Nilsson to mix live code, text, images and results in a highly structured document. And after presenting to the class, he can pass his notebook along to students to try themselves. Even students with no programming experience learn the Wolfram Language quickly, starting their own explorations after just a few days. Throughout the course, Nilsson encourages students to apply the concepts in different ways and try additional methods. “The challenge,” he says, “is getting them to think, ‘Oh, I can count this.’”
Once students are acquainted with the language and the methods, they start formulating research ideas. Nilsson says he is consistently impressed with the ingenuity of their projects, which span a broad range of humanities topics and datasets. For example, here is an analysis comparing phonetic distribution (phoneme counts) between two rap artists’ works:
Students take advantage of the range of visualization types in the Wolfram Language to discover patterns they wouldn’t otherwise have noticed, such as this comparison of social networks in the Bible (using Graph plots):
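A social network like this can be represented with Graph by listing undirected edges between characters who appear together. The names and connections below are purely illustrative stand-ins, not drawn from an actual student analysis:

```wolfram
(* hypothetical co-occurrence edges between biblical figures *)
edges = {UndirectedEdge["Moses", "Aaron"], UndirectedEdge["Moses", "Miriam"],
   UndirectedEdge["David", "Saul"], UndirectedEdge["David", "Jonathan"],
   UndirectedEdge["Saul", "Jonathan"]};
Graph[edges, VertexLabels -> "Name"]
```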
Nilsson points out how much easier it is for students to do these high-level analyses in the digital age. “What took monks and scholars months and years to accumulate, we can now do in five minutes,” he says. He cites a classic analysis that has been recreated in his class, tracking geographic references in War and Peace with a GeoSmoothHistogram:
loc = Interpreter["Country"] /@ TextCases[Rest@StringSplit[ResourceData["War and Peace"], "BOOK"], "Country"];
ListAnimate[GeoSmoothHistogram[#, GeoRange -> {{-40, 80}, {-20, 120}}] & /@ loc]
When sharing his activities with colleagues in higher education, he says many have been impressed with the depth he’s able to achieve. Some have compared his students’ projects to doctoral-level work—and that’s in a one-semester high-school course. But, he says, “You don’t have to be a doctoral student to do these really interesting analyses. You just have to know how to ask a good question.”
Nilsson also has his students analyze their own writing, measuring and charting key factors over time—from simple concepts like word length and vocabulary size to more advanced properties like sentence complexity. He sees it as an opportunity for them to examine the progression of their writing, empowering them to improve and adapt over time.
Many of these exercises go beyond the realm of simple text analysis, borrowing concepts from fields like network science and matrix algebra. Fortunately, the Wolfram Language makes it easy to represent textual data in different ways. For instance, TextStructure generates structural forms based on the grammar of a natural language text excerpt. Requesting the "ConstituentGraphs" structure gives a graph of the phrase structure in each sentence:
cg = Flatten[TextStructure[#, "ConstituentGraphs"] & /@ TextSentences[WikipediaData["computer", "SummaryPlaintext"]]];
RandomChoice[cg]
AdjacencyMatrix gives a matrix representation of connectivity within the graph for easier visual inspection and computation:
MatrixPlot@AdjacencyMatrix[%]
Closeness centrality is a measure of how closely connected a node is to all others in a network. Since each constituent graph represents a network of related words, sentences with a low average closeness centrality can be thought of as simpler. Applying ClosenessCentrality (and Mean) to each graph gives a base measure of how complex each sentence is:
ListPlot[Mean[ClosenessCentrality[#]] & /@ cg, Filling -> Axis, PlotRange -> {0, .3}]
Using these and other analytical techniques, students produce in-depth research reports based on their findings. Here is a snapshot of one paper from a student who used these strategies to examine sentence complexity in his own writing:
Besides giving students the opportunity to analyze their high-school writing, Nilsson says this exercise also gives upcoming graduates a solid foundation for research analytics that will be useful in their college careers.
Overall, the Wolfram Language has provided Nilsson with the perfect system for research and education in the digital humanities. Since adopting it into his curriculum, he has been able to make real improvements in student understanding and outcomes that he couldn’t have achieved otherwise. He notes that when he attempted similar explorations with Excel, MATLAB, R and other systems, none provided the unique combination of power, usability and built-in knowledge of the Wolfram Language. By wrapping everything into one coherent system, he says, the Wolfram Language gives him “a really potent tool for doing all kinds of analyses that are much more difficult in any other context.”