How to Win at Risk: Exact Probabilities
Jon McLoone | November 20, 2017 | http://blog.wolfram.com/2017/11/20/how-to-win-at-risk-exact-probabilities/

The classic board game Risk involves conquering the world by winning battles that are played out using dice. There are lots of places on the web where you can find the odds of winning a battle, given the number of armies each player has. However, all the ones that I have seen work this out by Monte Carlo simulation, and so are innately approximate. The Wolfram Language makes it so easy to work out the exact values that I couldn’t resist calculating them once and for all.

Risk battle odds flow chart

Here are the basic battle rules: the attacker can choose up to three dice (but must have at least one more army than dice), and the defender can choose up to two (but must have at least two armies to use two). To have the best chance of winning, you always use the most dice possible, so I will ignore the other cases. Both players throw simultaneously, and then the highest die from each side is paired, and (if both threw at least two dice) the next highest are paired. Within each pair, the higher die kills an army, and in the event of a tie, the attacker loses the army. This process is repeated until one side runs out of armies.
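
Before computing exact values, here is a minimal Monte Carlo sketch of a single three-versus-two round—my own illustration, not part of the original post—just to make the pairing rule concrete:

oneRound[] := Module[{att, def},
  att = Sort[RandomInteger[{1, 6}, 3], Greater]; (* attacker's three dice, highest first *)
  def = Sort[RandomInteger[{1, 6}, 2], Greater]; (* defender's two dice, highest first *)
  Count[MapThread[Greater, {Take[att, 2], def}], True]] (* defender losses; ties favor the defender *)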

So my goal is to create a function pBattle[a,d] that returns the probability that the battle ends ultimately as a win for the attacker, given that the attacker started with a armies and the defender started with d armies.

I start by coding the basic game rules. The main case is when both sides have enough armies to fight with at least two dice. There are three possible outcomes for a single round of the battle: the attacker wins twice, the attacker loses twice, or both sides lose one army. The probability of winning the battle is therefore the sum, over these three outcomes, of the probability of that outcome multiplied by the probability of winning the battle from the position left after the killed armies are removed.
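
Written out as a formula, with P(a, d) the attacker’s win probability, the recurrence coded below is P(a, d) = pWin2(a, d) P(a, d−2) + pWin1Lose1(a, d) P(a−1, d−1) + pLose2(a, d) P(a−2, d).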

pBattle[a_, d_] /; (a >= 3 && d >= 2) := Once[
   pWin2[a, d] pBattle[a, d - 2] +
    pWin1Lose1[a, d] pBattle[a - 1, d - 1] +
    pLose2[a, d] pBattle[a - 2, d]];

We also have to cover the case that either side has run low on armies and there is only one game piece at stake.

pBattle[a_, d_] /; (a > 1 && d >= 1) := Once[
   pWin1[a, d] pBattle[a, d - 1] + pLose1[a, d] pBattle[a - 1, d]];

This sets up a recursive definition that defines all our battle probabilities in terms of the probabilities of subsequent stages of the battle. Once prevents us from working those values out repeatedly. We just need to terminate this recursion with the end-of-battle rules. If the attacker has only one army, he has lost (since he must have more armies than dice), so our win probability is zero. If our opponent has run out of armies, then the attacker has won.

pBattle[1, _] = 0; pBattle[_, 0] = 1;
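
(A quick sanity check of the recursion, added here: with two attacking armies against one defender, the battle is a single one-die-each round, so pBattle[2, 1] should be the chance that one fair die strictly beats another, namely 15/36 = 5/12.)

pBattle[2, 1] (* 5/12 *)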

Now we have to work out the probabilities of our five individual attack outcomes: pWin2, pWin1Lose1, pLose2, pWin1 and pLose1.

When using two or three dice, we can describe the distribution as an OrderDistribution of a DiscreteUniformDistribution because we always want to pair the highest throws together.

diceDistribution[n : 3 | 2] :=
  OrderDistribution[{DiscreteUniformDistribution[{1, 6}], n}, {n - 1, n}];

For example, here is one outcome of that distribution; the second number will always be the largest, due to the OrderDistribution part.

RandomVariate[diceDistribution[3]]

The one-die case is just a uniform distribution; our player has to use the value whether it is good or not. However, for programming convenience, I am going to describe a distribution of two numbers, but we will never look at the first.

diceDistribution[1] := DiscreteUniformDistribution[{{1, 6}, {1, 6}}];

So now the probability of winning twice is the probability that each of the attacker’s two highest dice beats the corresponding defender die. The defender must be using two dice, but the attacker could be using two or three.

pWin2[a_, d_] /; a >= 3 && d >= 2 := Once[
   Probability[a1 > d1 && a2 > d2,
    {{a1, a2} \[Distributed] diceDistribution[Min[a - 1, 3]],
     {d1, d2} \[Distributed] diceDistribution[2]}]];

The lose-twice probability has a similar definition.

pLose2[a_, d_] := Once[
   Probability[a1 <= d1 && a2 <= d2,
    {{a1, a2} \[Distributed] diceDistribution[Min[a - 1, 3]],
     {d1, d2} \[Distributed] diceDistribution[2]}]];

And the probability of each side losing one army is what’s left.

pWin1Lose1[a_, d_] := Once[1 - pWin2[a, d] - pLose2[a, d]]

The one-army battle could be because the attacker is low on armies or because the defender is. Either way, we look only at the last value of our distributions.

pWin1[a_, d_] /; a === 2 || d === 1 := Once[
   Probability[a2 > d2,
    {{a1, a2} \[Distributed] diceDistribution[Min[a - 1, 3]],
     {d1, d2} \[Distributed] diceDistribution[Min[d, 2]]}]];

And pLose1 is just the remaining case.

pLose1[a_, d_] := 1 - pWin1[a, d];
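
(As a further check—my addition, not in the original post—the single-round probabilities for the full three-versus-two dice case should reproduce the commonly quoted Risk round odds of roughly 37%, 34% and 29%.)

{pWin2[4, 2], pWin1Lose1[4, 2], pLose2[4, 2]} (* numerically about {0.372, 0.336, 0.293} *)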

And we are done. All that is left is to use the function. Here is the exact (assuming fair dice, and no cheating!) probability of winning if the attacker starts with 18 armies and the defender has only six.

pBattle[18, 6]

We can approximate this to 100 decimal places.

N[%, 100]

We can quickly enumerate the probabilities for lots of different starting positions.

table = Text@Grid[
    Prepend[
     Table[Prepend[Table[pBattle[a, d], {d, 1, 4}],
       StringForm["Attack with " <> ToString[a]]], {a, 2, 16}],
     Prepend[Table[StringForm["Defend with " <> ToString[n]], {n, 1, 4}], ""]],
    Frame -> All, FrameStyle -> LightGray]

Risk odds table 1

Here are the corresponding numeric values to only 20 decimal places.

N[table, 20]

Risk odds table 2

You can download tables of more permutations here, with exact numbers, and here, approximated to 20 digits.

Of course, this level of accuracy is rather pointless. If you look at the 23 vs. 1 battle, the probability of losing is about half the probability that you will actually die during the first throw of the dice, and certainly far less than the chances of your opponent throwing the board in the air and refusing to play ever again.

Appendix: Code for Generating the Outcomes Graph


vf[{x_, y_}, name_, {w_, h_}] := {Black, Circle[{x, y}, w], Black,
   Text[If[StringQ[name], Style[name, 12],
     Style[Row[name, "\[ThinSpace]vs\[ThinSpace]"], 9]], {x, y}]};

edge[e_, th_] :=
  Property[e, EdgeStyle -> {Arrowheads[th/15], Thickness[th/40]}];

Graph[
 Flatten[Table[
    If[a >= 3 && d >= 2,
     {edge[{a, d} -> {a, d - 2}, pWin2[a, d]],
      edge[{a, d} -> {a - 1, d - 1}, pWin1Lose1[a, d]],
      edge[{a, d} -> {a - 2, d}, pLose2[a, d]]},
     {edge[{a, d} -> {a, d - 1}, pWin1[a, d]],
      edge[{a, d} -> {a - 1, d}, pLose1[a, d]]}],
    {a, 2, 6}, {d, 1, 4}]] /. {{a_, 0} -> "Win", {1, d_} -> "Lose"},
 ImageSize -> Full, VertexShapeFunction -> vf, VertexSize -> 1]



What Is a Computational Essay?
Stephen Wolfram | November 14, 2017 | http://blog.wolfram.com/2017/11/14/what-is-a-computational-essay/

A Powerful Way to Express Ideas

People are used to producing prose—and sometimes pictures—to express themselves. But in the modern age of computation, something new has become possible that I’d like to call the computational essay.

I’ve been working on building the technology to support computational essays for several decades, but it’s only very recently that I’ve realized just how central computational essays can be to both the way people learn, and the way they communicate facts and ideas. Professionals of the future will routinely deliver results and reports as computational essays. Educators will routinely explain concepts using computational essays. Students will routinely produce computational essays as homework for their classes.

Here’s a very simple example of a computational essay:

Simple computational essay example

There are basically three kinds of things here. First, ordinary text (here in English). Second, computer input. And third, computer output. And the crucial point is that these all work together to express what’s being communicated.

The ordinary text gives context and motivation. The computer input gives a precise specification of what’s being talked about. And then the computer output delivers facts and results, often in graphical form. It’s a powerful form of exposition that combines computational thinking on the part of the human author with computational knowledge and computational processing from the computer.

But what really makes this work is the Wolfram Language—and the succinct representation of high-level ideas that it provides, defining a unique bridge between human computational thinking and actual computation and knowledge delivered by a computer.

In a typical computational essay, each piece of Wolfram Language input will usually be quite short (often not more than a line or two). But the point is that such input can communicate a high-level computational thought, in a form that can readily be understood both by the computer and by a human reading the essay.

It’s essential to all this that the Wolfram Language has so much built-in knowledge—both about the world and about how to compute things in it. Because that’s what allows it to immediately talk not just about abstract computations, but also about real things that exist and happen in the world—and ultimately to provide a true computational communication language that bridges the capabilities of humans and computers.

An Example

Let’s use a computational essay to explain computational essays.

Let’s say we want to talk about the structure of a human language, like English. English is basically made up of words. Let’s get a list of the common ones.

Generate a list of common words in English:

WordList[]

How long is a typical word? Well, we can take the list of common words, and make a histogram that shows their distribution of lengths.

Make a histogram of word lengths:

Histogram[StringLength[WordList[]]]

Do the same for French:

Histogram[StringLength[WordList[Language -> "French"]]]

Notice that the word lengths tend to be longer in French. We could investigate whether this is why documents tend to be longer in French than in English, or how this relates to quantities like entropy for text. (Of course, because this is a computational essay, the reader can rerun the computations in it themselves, say by trying Russian instead of French.)
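
Following that suggestion, one could rerun the computation for Russian (my addition, assuming Russian is among WordList’s supported languages):

Histogram[StringLength[WordList[Language -> "Russian"]]]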

But as something different, let’s compare languages by comparing their translations for, say, the word “computer”.

Find the translations for “computer” in the 10 most common languages:

Take[WordTranslation["computer", All], 10]

Find the first translation in each case:

First /@ Take[WordTranslation["computer", All], 10]

Arrange common languages in “feature space” based on their translations for “computer”:

FeatureSpacePlot[First /@ Take[WordTranslation["computer", All], 40]]

From this plot, we can start to investigate all sorts of structural and historical relationships between languages. But from the point of view of a computational essay, what’s important here is that we’re sharing the exposition between ordinary text, computer input, and output.

The text is saying what the basic point is. Then the input is giving a precise definition of what we want. And the output is showing what’s true about it. But take a look at the input. Even just by looking at the names of the Wolfram Language functions in it, one can get a pretty good idea what it’s talking about. And while the function names are based on English, one can use “code captions” to understand it in another language, say Japanese:

FeatureSpacePlot[First /@ Take[WordTranslation["computer", All], 40]]

But let’s say one doesn’t know about FeatureSpacePlot. What is it? If it was just a word or phrase in English, we might be able to look in a dictionary, but there wouldn’t be a precise answer. But a function in the Wolfram Language is always precisely defined. And to know what it does we can start by just looking at its documentation. But much more than that, we can just run it ourselves to explicitly see what it does.

FeatureSpacePlot page

And that’s a crucial part of what’s great about computational essays. If you read an ordinary essay, and you don’t understand something, then in the end you really just have to ask the author to find out what they meant. In a computational essay, though, there’s Wolfram Language input that precisely and unambiguously specifies everything—and if you want to know what it means, you can just run it and explore any detail of it on your computer, automatically and without recourse to anything like a discussion with the author.

Practicalities

How does one actually create a computational essay? With the technology stack we have, it’s very easy—mainly thanks to the concept of notebooks that we introduced with the first version of Mathematica all the way back in 1988. A notebook is a structured document that mixes cells of text together with cells of Wolfram Language input and output, including graphics, images, sounds, and interactive content:

A typical notebook

In modern times one great (and very hard to achieve!) thing is that full Wolfram Notebooks run seamlessly across desktop, cloud and mobile. You can author a notebook in the native Wolfram Desktop application (Mac, Windows, Linux)—or on the web through any web browser, or on mobile through the Wolfram Cloud app. Then you can share or publish it through the Wolfram Cloud, and get access to it on the web or on mobile, or download it to desktop or, now, iOS devices.

Notebook environments

Sometimes you want the reader of a notebook just to look at it, perhaps opening and closing groups of cells. Sometimes you also want them to be able to operate the interactive elements. And sometimes you want them to be able to edit and run the code, or maybe modify the whole notebook. And the crucial point is that all these things are easy to do with the cloud-desktop-mobile system we’ve built.

A New Form of Student Work

Computational essays are great for students to read, but they’re also great for students to write. Most of the current modalities for student work are remarkably old. Write an essay. Give a math derivation. These have been around for millennia. Not that there’s anything wrong with them. But now there’s something new: write a computational essay. And it’s wonderfully educational.

A computational essay is in effect an intellectual story told through a collaboration between a human author and a computer. The computer acts like a kind of intellectual exoskeleton, letting you immediately marshal vast computational power and knowledge. But it’s also an enforcer of understanding. Because to guide the computer through the story you’re trying to tell, you have to understand it yourself.

When students write ordinary essays, they’re typically writing about content that in some sense “already exists” (“discuss this passage”; “explain this piece of history”; …). But in doing computation (at least with the Wolfram Language) it’s so easy to discover new things that computational essays will end up with an essentially inexhaustible supply of new content, that’s never been seen before. Students will be exploring and discovering as well as understanding and explaining.

When you write a computational essay, the code in your computational essay has to produce results that fit with the story you’re telling. It’s not like you’re doing a mathematical derivation, and then some teacher tells you you’ve got the wrong answer. You can immediately see what your code does, and whether it fits with the story you’re telling. If it doesn’t, well then maybe your code is wrong—or maybe your story is wrong.

What should the actual procedure be for students producing computational essays? At this year’s Wolfram Summer School we did the experiment of asking all our students to write a computational essay about anything they knew about. We ended up with 72 interesting essays—exploring a very wide range of topics.

In a more typical educational setting, the “prompt” for a computational essay could be something like “What is the typical length of a word in English?” or “Explore word lengths in English”.

There’s also another workflow I’ve tried. As the “classroom” component of a class, do livecoding (or a live experiment). Create or discover something, with each student following along by doing their own computations. At the end of the class, each student will have a notebook they made. Then have their “homework” be to turn that notebook into a computational essay that explains what was done.

And in my experience, this ends up being a very good exercise—that really tests and cements the understanding students have. But there’s also something else: when students have created a computational essay, they have something they can keep—and directly use—forever.

And this is one of the great general features of computational essays. When students write them, they’re in effect creating a custom library of computational tools for themselves—that they’ll be in a position to immediately use at any time in the future. It’s far too common for students to write notes in a class, then never refer to them again. Yes, they might run across some situation where the notes would be helpful. But it’s often hard to motivate going back and reading the notes—not least because that’s only the beginning; there’s still the matter of implementing whatever’s in the notes.

But the point is that with a computational essay, once you’ve found what you want, the code to implement it is right there—immediately ready to be applied to whatever has come up.

Any Subject You Want

What can computational essays be about? Almost anything! I’ve often said that for any field of study X (from archaeology to zoology), there either is now, or soon will be, a “computational X”. And any “computational X” can immediately be explored and explained using computational essays.

But even when there isn’t a clear “computational X” yet, computational essays can still be a powerful way to organize and present material. In some sense, the very fact that a sequence of computations is typically needed to “tell the story” in an essay helps define a clear backbone for the whole essay. In effect, the structured nature of the computational presentation helps suggest structure for the narrative—making it easier for students (and others) to write essays that are easy to read and understand.

But what about actual subject matter? Well, imagine you’re studying history—say the history of the English Civil War. Well, conveniently, the Wolfram Language has a lot of knowledge about history (as about so many other things) built in. So you can present the English Civil War through a kind of dialog with it. For example, you can ask it for the geography of battles:

GeoListPlot[Entity["MilitaryConflict", "EnglishCivilWar"]["Battles"]]

You could ask for a timeline of the beginning of the war (you don’t need to say “first 15 battles”, because if one cares, one can just read that from the Wolfram Language code):

TimelinePlot[Take[Entity["MilitaryConflict", "EnglishCivilWar"]["Battles"], 15]]

You could start looking at how armies moved, or who won and who lost at different points. At first, you can write a computational essay in which the computations are basically just generating custom infographics to illustrate your narrative. But then you can go further—and start really doing “computational history”. You can start to compute various statistical measures of the progress of the war. You can find ways to quantitatively compare it to other wars, and so on.

Can you make a “computational essay” about art? Absolutely. Maybe about art history. Pick 10 random paintings by van Gogh:

van Gogh paintings output

EntityValue[RandomSample[Entity["Person", "VincentVanGogh::9vq62"]["NotableArtworks"], 10], "Image"]

Then look at what colors they use (a surprisingly narrow selection):

ChromaticityPlot[%]

Or maybe one could write a computational essay about actually creating art, or music.

What about science? You could rediscover Kepler’s laws by looking at properties of planets:

EntityClass["Planet", All][{"DistanceFromSun", "OrbitPeriod"}]

ListLogLogPlot[%]

Maybe you could go on and check it for exoplanets. Or you could start solving the equations of motion for planets.

You could look at biology. Here’s the beginning of the reference sequence for the human mitochondrion:

GenomeData[{"Mitochondrion", {1, 150}}]

You can start off breaking it into possible codons:

StringPartition[%, 3]

There’s an immense amount of data about all kinds of things built into the Wolfram Language. But there’s also the Wolfram Data Repository, which contains all sorts of specific datasets. Like here’s a map of state fairgrounds in the US:

GeoListPlot[ResourceData["U.S. State Fairgrounds"][All, "GeoPosition"]]

And here’s a word cloud of the constitutions of countries that have been enacted since 2010:

WordCloud[
 StringJoin[
  Normal[ResourceData["World Constitutions"][
    Select[#YearEnacted > DateObject[{2010}] &], "Text"]]]]

Quite often one’s interested in dealing not with public data, but with some kind of local data. One convenient source of this is the Wolfram Data Drop. In an educational setting, particular databins (or cloud objects in general) can be set so that they can be read (and/or added to) by some particular group. Here’s a databin that I accumulate for myself, showing my heart rate through the day. Here it is for today:

DateListPlot[TimeSeries[YourDatabinHere]]

Of course, it’s easy to make a histogram too:

Histogram[TimeSeries[YourDatabinHere]]

What about math? A key issue in math is to understand why things are true. The traditional approach to this is to give proofs. But computational essays provide an alternative. The nature of the steps in them is different—but the objective is the same: to show what’s true and why.

As a very simple example, let’s look at primes. Here are the first 50:

Table[Prime[n], {n, 50}]

Let’s find the remainder mod 6 for all these primes:

Mod[Table[Prime[n], {n, 50}], 6]

But why do only 1 and 5 occur (well, after the trivial cases of the primes 2 and 3)? We can see this by computation. Any number can be written as 6n+k for some n and k:

Table[6 n + k, {k, 0, 5}]

But if we factor numbers written in this form, we’ll see that 6n+1 and 6n+5 are the only forms that don’t come with a built-in factor:

Factor[%]

What about computer science? One could for example write a computational essay about implementing Euclid’s algorithm, studying its running time, and so on.

Define a function to give all steps in Euclid’s algorithm:

gcdlist[a_, b_] :=
 NestWhileList[{Last[#], Apply[Mod, #]} &, {a, b}, Last[#] != 0 &, 1]
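
For example (an illustration added here), applied to 12 and 8 the algorithm takes three steps, and the GCD appears as the first element of the final pair:

gcdlist[12, 8] (* {{12, 8}, {8, 4}, {4, 0}}, so the GCD is 4 *)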

Find the distribution of running lengths for the algorithm for numbers up to 200:

Histogram[Flatten[Table[Length[gcdlist[i, j]], {i, 200}, {j, 200}]]]

Or in modern times, one could explore machine learning, starting, say, by making a feature space plot of part of the MNIST handwritten digits dataset:

FeatureSpacePlot[RandomSample[Keys[ResourceData["MNIST"]], 50]]

If you wanted to get deeper into software engineering, you could write a computational essay about the HTTP protocol. This gets an HTTP response from a site:

URLRead["https://www.wolframalpha.com"]

And this shows the tree structure of the elements on the webpage at that URL:

TreeForm[Import["http://www.wolframalpha.com", {"HTML", "XMLObject"}],
  VertexLabeling -> False, AspectRatio -> 1/2]

Or—in a completely different direction—you could talk about anatomy:

AnatomyPlot3D[Entity["AnatomicalStructure", "LeftFoot"]]

What Makes a Good Computational Essay?

As far as I’m concerned, for a computational essay to be good, it has to be as easy to understand as possible. The format helps quite a lot, of course. Because a computational essay is full of outputs (often graphical) that are easy to skim, and that immediately give some impression of what the essay is trying to say. It also helps that computational essays are structured documents, that deliver information in well-encapsulated pieces.

But ultimately it’s up to the author of a computational essay to make it clear. Another thing that helps is that the nature of a computational essay is that it must have a “computational narrative”—a sequence of pieces of code that the computer can execute to do what’s being discussed in the essay. And while one might be able to write an ordinary essay that doesn’t make much sense but still sounds good, one can’t ultimately do something like that in a computational essay. Because in the end the code is the code, and actually has to run and do things.

So what can go wrong? Well, like English prose, Wolfram Language code can be unnecessarily complicated, and hard to understand. In a good computational essay, both the ordinary text, and the code, should be as simple and clean as possible. I try to enforce this for myself by saying that each piece of input should be at most one or perhaps two lines long—and that the caption for the input should always be just one line long. If I’m trying to do something where the core of it (perhaps excluding things like display options) takes more than a line of code, then I break it up, explaining each line separately.

Another important principle as far as I’m concerned is: be explicit. Don’t have some variable that, say, implicitly stores a list of words. Actually show at least part of the list, so people can explicitly see what it’s like. And when the output is complicated, find some tabulation or visualization that makes the features you’re interested in obvious. Don’t let the “key result” be hidden in something that’s tucked away in the corner; make sure the way you set things up makes it front and center.

Use the structured nature of notebooks. Break up computational essays with section headings, again helping to make them easy to skim. I follow the style of having a “caption line” before each input. Don’t worry if this somewhat repeats what a paragraph of text has said; consider the caption something that someone who’s just “looking at the pictures” might read to understand what a picture is of, before they actually dive into the full textual narrative.

The technology of Wolfram Notebooks makes it straightforward to put interactive elements, like Manipulate, into computational essays. And sometimes this is very helpful, and perhaps even essential. But interactive elements shouldn’t be overused. Because whenever there’s an element that requires interaction, this reduces the ability to skim the essay.
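
For concreteness, here is a minimal interactive element of the kind being described—my own sketch, not from the original essay:

Manipulate[Plot[Sin[k x], {x, 0, 2 Pi}], {k, 1, 5}]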

Sometimes there’s a fair amount of data—or code—that’s needed to set up a particular computational essay. The cloud is very useful for handling this. Just deploy the data (or code) to the Wolfram Cloud, and set appropriate permissions so it can automatically be read whenever the code in your essay is executed.
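
A minimal sketch of that workflow might look like this (the object name "essay-support-data" and the variable supportData are hypothetical):

CloudPut[supportData, "essay-support-data", Permissions -> "Public"] (* deploy once; hypothetical name *)
supportData = CloudGet["essay-support-data"] (* read it back from inside the essay *)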

Notebooks also allow “reverse closing” of cells—allowing an output cell to be immediately visible, even though the input cell that generated it is initially closed. This kind of hiding of code should generally be avoided in the body of a computational essay, but it’s sometimes useful at the beginning or end of an essay, either to give an indication of what’s coming, or to include something more advanced where you don’t want to go through in detail how it’s made.

OK, so if a computational essay is done, say, as homework, how can it be assessed? A first, straightforward question is: does the code run? And this can be determined pretty much automatically. Then after that, the assessment process is very much like it would be for an ordinary essay. Of course, it’s nice and easy to add cells into a notebook to give comments on what’s there. And those cells can contain runnable code—that for example can take results in the essay and process or check them.

Are there principles of good computational essays? Here are a few candidates:

0. Understand what you’re talking about (!)

1. Find the most straightforward and direct way to represent your subject matter

2. Keep the core of each piece of Wolfram Language input to a line or two

3. Use explicit visualization or other information presentation as much as possible

4. Try to make each input+caption independently understandable

5. Break different topics or directions into different subsections

Learning the Language

At the core of computational essays is the idea of expressing computational thoughts using the Wolfram Language. But to do that, one has to know the language. Now, unlike human languages, the Wolfram Language is explicitly designed (and, yes, that’s what I’ve been doing for the past 30+ years) to follow definite principles and to be as easy to learn as possible. But there’s still learning to be done.

One feature of the Wolfram Language is that—like with human languages—it’s typically easier to read than to write. And that means that a good way for people to learn what they need to be able to write computational essays is for them first to read a bunch of essays. Perhaps then they can start to modify those essays. Or they can start creating “notes essays”, based on code generated in livecoding or other classroom sessions.

As people get more fluent in writing the Wolfram Language, something interesting happens: they start actually expressing themselves in the language, and using Wolfram Language input to carry significant parts of the narrative in a computational essay.

When I was writing An Elementary Introduction to the Wolfram Language (which itself is written in large part as a sequence of computational essays) I had an interesting experience. Early in the book, it was decently easy to explain computational exercises in English (“Make a table of the first 10 squares”). But a little later in the book, it became a frustrating process.

It was easy to express what I wanted in the Wolfram Language. But to express it in English was long and awkward (and had a tendency of sounding like legalese). And that’s the whole point of using the Wolfram Language, and the reason I’ve spent 30+ years building it: because it provides a better, crisper way to express computational thoughts.

It’s sometimes said of human languages that the language you use determines how you think. It’s not clear how true this is of human languages. But it’s absolutely true of computer languages. And one of the most powerful things about the Wolfram Language is that it helps one formulate clear computational thinking.

Traditional computer languages are about writing code that describes the details of what a computer should do. The point of the Wolfram Language is to provide something much higher level—that can immediately talk about things in the world, and that can allow people as directly as possible to use it as a medium of computational thinking. And in a sense that’s what makes a good computational essay possible.

The Long Path to Computational Essays

Now that we have full-fledged computational essays, I realize I’ve been on a path towards them for nearly 40 years. At first I was taking interactive computer output and Scotch-taping descriptions into it:

Interactive computer output sketch

By 1981, when I built SMP, I was routinely writing documents that interspersed code and explanations:

Code interspersed with explanations

But it was only in 1986, when I started documenting what became Mathematica and the Wolfram Language, that I started seriously developing a style close to what I now favor for computational essays:

Wolfram Language Version 1 documentation

And with the release of Mathematica 1.0 in 1988 came another critical element: the invention of Wolfram Notebooks. Notebooks arrived in a form at least superficially very similar to the way they are today (and already in many ways more sophisticated than the imitations that started appearing 25+ years later!): collections of cells arranged into groups, and capable of containing text, executable code, graphics, etc.

Early Mac notebooks

At first notebooks were only possible on Mac and NeXT computers. A few years later they were extended to Microsoft Windows and X Windows (and later, Linux). But immediately people started using notebooks both to provide reports about what they’d done, and to create rich expository and educational material. Within a couple of years, there started to be courses based on notebooks, and books printed from notebooks, with interactive versions available on CD-ROM at the back:

Notebook publication example

So in a sense the raw material for computational essays already existed by the beginning of the 1990s. But to really make computational essays come into their own required the development of the cloud—as well as the whole broad range of computational knowledge that’s now part of the Wolfram Language.

By 1990 it was perfectly possible to create a notebook with a narrative, and people did it, particularly about topics like mathematics. But if there was real-world data involved, things got messy. One had to make sure that whatever was needed was appropriately available from a distribution CD-ROM or whatever. We created a Player for notebooks very early on, which was sometimes distributed with notebooks.

But in the last few years, particularly with the development of the Wolfram Cloud, things have gotten much more streamlined. Because now you can seamlessly store things in the cloud and use them anywhere. And you can work directly with notebooks in the cloud, just using a web browser. In addition, thanks to lots of user-assistance innovations (including natural language input), it’s become even easier to write in the Wolfram Language—and there’s ever more that can be achieved by doing so.

And the important thing that I think has now definitively happened is that producing a good computational essay has become lightweight enough that it makes sense to do as something routine—either professionally in writing reports, or as a student doing homework.

Ancient Educational History

The idea of students producing computational essays is something new for modern times, made possible by a whole stack of current technology. But there’s a curious resonance with something from the distant past. You see, if you’d learned a subject like math in the US a couple of hundred years ago, a big thing you’d have done is to create a so-called ciphering book—in which over the course of several years you carefully wrote out the solutions to a range of problems, mixing explanations with calculations. And the idea then was that you kept your ciphering book for the rest of your life, referring to it whenever you needed to solve problems like the ones it included.

Well, now, with computational essays you can do very much the same thing. The problems you can address are vastly more sophisticated and wide-ranging than you could reach with hand calculation. But like with ciphering books, you can write computational essays so they’ll be useful to you in the future—though now you won’t have to redo the calculations by hand; instead you’ll just edit your computational essay notebook and immediately rerun the Wolfram Language inputs in it.

I actually only learned about ciphering books quite recently. For about 20 years I’d had essentially as an artwork a curious handwritten notebook (created in 1818, it says, by a certain George Lehman, apparently of Orwigsburg, Pennsylvania), with pages like this:

Ciphering book

I now know this is a ciphering book—that on this page describes how to find the “height of a perpendicular object… by having the length of the shadow given”. And of course I can’t resist a modern computational essay analog, which, needless to say, can be a bit more elaborate.

Find the current position of the Sun as azimuth, altitude:

SunPosition[]

Find the length of a shadow for an object of unit height:

1/Tan[SunPosition[][[2]]]

Given a 10-ft shadow, find the height of the object that made it:

Tan[SunPosition[][[2]]] Quantity[10, "Feet"]
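
In the ciphering-book spirit of keeping reusable tools, one could package this up—a hypothetical helper of my own, not from the original essay:

heightFromShadow[shadow_] := Tan[SunPosition[][[2]]] shadow (* hypothetical helper: object height from its current shadow length *)
heightFromShadow[Quantity[10, "Feet"]]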

The Path Ahead

I like writing textual essays (such as blog posts!). But I like writing computational essays more. Because at least for many of the things I want to communicate, I find them a purer and more efficient way to do it. I could spend lots of words trying to express an idea—or I can just give a little piece of Wolfram Language input that expresses the idea very directly and shows how it works by generating (often very visual) output with it.

When I wrote my big book A New Kind of Science (from 1991 to 2002), neither our technology nor the world was quite ready for computational essays in the form in which they’re now possible. My research for the book filled thousands of Wolfram Notebooks. But when it actually came to putting together the book, I just showed the results from those notebooks—including a little of the code from them in notes at the back of the book.

But now the story of the book can be told in computational essays—that I’ve been starting to produce. (Just for fun, I’ve been livestreaming some of the work I’m doing to create these.)  And what’s very satisfying is just how clearly and crisply the ideas in the book can be communicated in computational essays.

There is so much potential in computational essays. And indeed we’re now starting the project of collecting “topic explorations” that use computational essays to explore a vast range of topics in unprecedentedly clear and direct ways. It’ll be something like our Wolfram Demonstrations Project (that now has 11,000+ Wolfram Language–powered Demonstrations). Here’s a typical example I wrote:

The Central Limit Theorem

Computational essays open up all sorts of new types of communication. Research papers that directly present computational experiments and explorations. Reports that describe things that have been found, but allow other cases to be immediately explored. And, of course, computational essays define a way for students (and others) to very directly and usefully showcase what they’ve learned.

There’s something satisfying about both writing—and reading—computational essays. It’s as if in communicating ideas we’re finally able to go beyond pure human effort—and actually leverage the power of computation. And for me, having built the Wolfram Language to be a computational communication language, it’s wonderful to see how it can be used to communicate so effectively in computational essays.

It’s so nice when I get something sent to me as a well-formed computational essay. Because I immediately know that I’m going to get a straight story that I can actually understand. There aren’t going to be all sorts of missing sources and hidden assumptions; there’s just going to be Wolfram Language input that stands alone, and that I can take out and study or run for myself.

The modern world of the web has brought us a few new formats for communication—like blogs, and social media, and things like Wikipedia. But all of these still follow the basic concept of text + pictures that’s existed since the beginning of the age of literacy. With computational essays we finally have something new—and it’s going to be exciting to see all the things it makes possible.

Limits without Limits in Version 11.2
Devendra Kapadia | November 9, 2017 | http://blog.wolfram.com/2017/11/09/limits-without-limits-in-version-11-2/

Limits lead image

Here are 10 terms in a sequence:

Table[(2/(2 n + 1)) ((2 n)!!/(2 n - 1)!!)^2, {n, 10}]

And here’s what their numerical values are:

N[%]

But what is the limit of the sequence? What would one get if one continued the sequence forever?

In Mathematica and the Wolfram Language, there’s a function to compute that:

DiscreteLimit[(2/(2 n + 1)) ((2 n)!!/(2 n - 1)!!)^2, n -> \[Infinity]]
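
(The limit is π: the terms above are twice the partial products of the Wallis product, which converges to π/2.)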

Limits are a central concept in many areas, including number theory, geometry and computational complexity. They’re also at the heart of calculus, not least since they’re used to define the very notions of derivatives and integrals.

Mathematica and the Wolfram Language have always had capabilities for computing limits; in Version 11.2, they’ve been dramatically expanded. We’ve leveraged many areas of the Wolfram Language to achieve this, and we’ve invented some completely new algorithms too. And to make sure we’ve covered what people want, we’ve sampled over a million limits from Wolfram|Alpha.

Let’s talk about a limit that Hardy and Ramanujan worked out in 1918. But let’s build up to that. First, consider the sequence a(n) that is defined as follows:

a[n_] := (-1)^n/n

Here is a table of the first ten values for the sequence.

Table[a[n], {n, 1, 10}]

The following plot indicates that the sequence converges to 0 as n approaches Infinity.

DiscretePlot[a[n], {n, 1, 40}]

The DiscreteLimit function, which was introduced in Version 11.2, confirms that the limit of this sequence is indeed 0.

DiscreteLimit[a[n], n -> \[Infinity]]

Many sequences that arise in practice (for example, in signal communication) are periodic in the sense that their values repeat themselves at regular intervals. The length of any such interval is called the period of the sequence. As an example, consider the following sequence that is defined using Mod.

a[n_] := Mod[n, 6]

A plot of the sequence shows that the sequence is periodic with period 6.

DiscretePlot[a[n], {n, 0, 20}]

In contrast to our first example, this sequence does not converge, since it oscillates between 0 and 5. Hence, DiscreteLimit returns Indeterminate in this case.

DiscreteLimit[a[n], n -> \[Infinity]]

The new Version 11.2 functions DiscreteMinLimit and DiscreteMaxLimit can be used to compute the lower and upper limits of oscillation, respectively, in such cases. Thus, we have:

DiscreteMinLimit[a[n], n -> \[Infinity]]

DiscreteMaxLimit[a[n], n -> \[Infinity]]

DiscreteMinLimit and DiscreteMaxLimit are often referred to as “lim inf” and “lim sup,” respectively, in the mathematical literature. The traditional underbar and overbar notations for these limits are available, as shown here.

DiscreteMinLimit[a[n], n -> \[Infinity]] (* the typeset underbar \[MinLimit] form of this input *)

DiscreteMaxLimit[a[n], n -> \[Infinity]] (* the typeset overbar \[MaxLimit] form of this input *)

Our next example is an oscillatory sequence that is built from the trigonometric functions Sin and Cos, and is defined as follows.

a[n_] := Sin[2 n]^2/(2 + Cos[n])

Although Sin and Cos are periodic when viewed as functions over the real numbers, this integer sequence behaves in a bizarre manner and is very far from being a periodic sequence, as confirmed by the following plot.

DiscretePlot[a[n], {n, 1, 100}]

Hence, the limit of this sequence does not exist.

DiscreteLimit[a[n], n -> \[Infinity]]

However, it turns out that for such “densely aperiodic sequences,” the extreme values can be computed by regarding them as real functions. DiscreteMinLimit uses this method to return the answer 0 for the example, as expected.

DiscreteMinLimit[a[n], n -> \[Infinity]]

Using the same method, DiscreteMaxLimit returns a rather messy-looking result in terms of Root objects for this example.

DiscreteMaxLimit[a[n], n -> \[Infinity]]

The numerical value of this result is close to 0.8, as one might have guessed from the graph.

N[%]

Discrete limits also occur in a natural way when we try to compute the value of infinitely nested radicals. For example, consider the problem of evaluating the following nested radical.

√(2 + √(2 + √(2 + ⋯)))

The successive terms in the expansion of the radical can be generated by using RSolveValue, since the sequence satisfies a nonlinear recurrence. For example, the third term in the expansion is obtained as follows.

RSolveValue[{r[n + 1] == Sqrt[2 + r[n]], r[1] == Sqrt[2]}, r[3], n]

The value of the infinitely nested radical appears to be 2, as seen from the following plot that is generated using RecurrenceTable.

ListPlot[RecurrenceTable[{r[n + 1] == Sqrt[2 + r[n]], r[1] == Sqrt[2]}, r[n], {n, 2, 35}]]

Using Version 11.2, we can confirm that the limiting value is indeed 2 by requesting the value r(∞) in RSolveValue, as shown here.

RSolveValue[{r[n + 1] == Sqrt[2 + r[n]], r[1] == Sqrt[2]}, r[\[Infinity]], n]
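
Incidentally, this recurrence has a well-known closed form, r(n) = 2 cos(π/2^(n+1)), which makes the limit transparent: the cosine argument shrinks to 0, so the value tends to 2 cos(0) = 2. The half-angle identity behind the closed form, and the limit itself, can be checked directly.

FullSimplify[Sqrt[2 + 2 Cos[\[Theta]]] == 2 Cos[\[Theta]/2], 0 <= \[Theta] <= Pi/2]

DiscreteLimit[2 Cos[Pi/2^(n + 1)], n -> \[Infinity]]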

The study of limits belongs to the branch of mathematics called asymptotic analysis. Asymptotic analysis provides methods for obtaining approximate solutions of problems near a specific value such as 0 or Infinity. It turns out that, in practice, the efficiency of asymptotic approximations often increases precisely in the regime where the corresponding exact computation becomes difficult! A striking example of this phenomenon is seen in the study of integer partitions, which are known to grow extremely fast as the size of the number increases. For example, the number 6 can be partitioned in 11 distinct ways using IntegerPartitions, as shown here.

IntegerPartitions[6] // TableForm

Length[%]

The number of distinct partitions can be found directly using PartitionsP as follows.

PartitionsP[6]

As noted earlier, the number of partitions grows rapidly with the size of the integer. For example, there are nearly 4 trillion partitions of the number 200.

PartitionsP[200]

N[%]

In 1918, Hardy and Ramanujan provided an asymptotic approximation for this number, which is given by the following formula.

asymp[n_] := E^(\[Pi] Sqrt[(2 n)/3])/(4 n Sqrt[3])

The answer given by this estimate for the number 200 is remarkably close to 4 trillion.

asymp[200] // N
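
Dividing the estimate by the exact count shows that the relative error at n = 200 is only about 3%.

asymp[200]/PartitionsP[200] // N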

With a much larger integer, we get an even better approximation for the number of partitions almost instantaneously, as seen in the following example.

PartitionsP[2000000000] // N // Timing

N[asymp[2000000000], 20] // Timing

Finally, we can confirm that the asymptotic estimate approaches the number of partitions as n tends to Infinity using DiscreteLimit, which is aware of the Hardy–Ramanujan formula discussed above.

DiscreteLimit[PartitionsP[n]/asymp[n], n -> \[Infinity]]

Formally, we say that exact and approximate formulas for the number of partitions are asymptotically equivalent as n approaches Infinity.

Asymptotic notions also play an important role in the study of function limits. For instance, the small-angle approximation in trigonometry asserts that “sin(x) is nearly equal to x for small values of x.” This may be rephrased as “sin(x) is asymptotically equivalent to x as x approaches 0.” A formal statement of this result can be given using Limit, which computes function limits, as follows.

Limit[Sin[x]/x, x -> 0]

This plot provides visual confirmation that the limit is indeed 1.

Plot[Sin[x]/x, {x, -20, 20}, PlotRange -> All]
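
The small-angle approximation is also visible in the Maclaurin series of Sin[x]: the leading term is x itself, and the corrections vanish much faster as x approaches 0.

Series[Sin[x], {x, 0, 5}]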

The above limit can also be calculated using L’Hôpital’s rule by computing the derivatives, cos(x) and 1, of the numerator and denominator respectively, as shown here.

Limit[Cos[x]/1, x -> 0]

L’Hôpital’s rule gives a powerful method for evaluating many limits that occur in practice. However, it may require a large number of steps before arriving at the answer. For example, consider the following limit.

Limit[x^6/E^x, x -> \[Infinity]]

That limit requires six repeated applications of L’Hôpital’s rule to arrive at the answer 0, since all the intermediate computations give indeterminate results.

Table[Limit[D[x^6, {x, n}], x -> \[Infinity]]/Limit[D[E^x, {x, n}], x -> \[Infinity]], {n, 0, 10}]

Thus, we see that L’Hôpital’s rule has limited utility as a practical algorithm for finding function limits, since it is impossible to decide when the algorithm should stop! Hence, the built-in Limit function uses a combination of series expansions and modern algorithms that works well on inputs involving exponentials and logarithms, the so-called “exp-log” class. In fact, Limit has received a substantial update in Version 11.2 and now handles a wide variety of difficult examples, such as the following, in a rather comprehensive manner (the last two examples work only in the latest release).

Limit[Gamma[x + 1/2]/(Gamma[x] Sqrt[x]), x -> \[Infinity]]

Limit[(Log[x] (-Log[Log[x]] + Log[Log[x] + Log[Log[x]]]))/Log[Log[x] + Log[Log[Log[x]]]], x -> \[Infinity]]

Limit[E^E^E^PolyGamma[PolyGamma[PolyGamma[x]]]/x, x -> \[Infinity]]

Limit[E^(E^x + x^2) (-Erf[E^-E^x - x] - Erf[x]), x -> \[Infinity]]

As in the case of sequences, the limits of periodic and oscillatory functions often do not exist. One can then use MaxLimit and MinLimit, which, like their discrete counterparts, give tight bounds for the oscillation of the function near a given value, as in this classic example.

f[x_] := Sin[1/x]

Plot[f[x], {x, -1, 1}]

The graph indicates that the function oscillates rapidly between –1 and 1 near 0. These bounds are confirmed by MaxLimit and MinLimit, while Limit itself returns Indeterminate.

{Limit[f[x], x -> 0], MinLimit[f[x], x -> 0], MaxLimit[f[x], x -> 0]}

In the previous example, the limit fails to exist because the function oscillates wildly around the origin. Discontinuous functions provide other types of examples where the limit at a point may fail to exist. We will now consider an example of such a function with a jump discontinuity at the origin and at other values. The function is defined in terms of SquareWave and FresnelS, as follows.

g[x_] := (SquareWave[x] FresnelS[x])/x^3

This plot shows the jump discontinuities, which are caused by the presence of SquareWave in the definition of the function.

Plot[{g[x], -Pi/6, Pi/6}, {x, -2, 2},   ExclusionsStyle -> Directive[Red, Dashed]]

We see that the limiting values of the function at 0, for instance, depend on the direction from which we approach the origin. The limiting value from the right (“from above”) can be calculated using the Direction option.

Limit[g[x], x -> 0, Direction -> "FromAbove"]

Similarly, the limit from the left can be calculated as follows.

Limit[g[x], x -> 0, Direction -> "FromBelow"]
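
The values ±π/6 can be traced to the small-argument behavior of FresnelS, whose leading series term is π x^3/6; dividing by x^3 leaves π/6, and the factor SquareWave[x], which is +1 just to the right of 0 and -1 just to the left, supplies the sign.

Series[FresnelS[x], {x, 0, 3}]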

The limit, if it exists, is the “two-sided” limit of the function, which in this case does not exist.

Limit[g[x], x -> 0, Direction -> "TwoSided"]

By default, Limit computes two-sided limits in Version 11.2. This is a change from earlier versions, where it computed the limit from above by default. Hence, we get an Indeterminate result from Limit, with no setting for the Direction option.

Limit[g[x], x -> 0]

Directional limits acquire even more significance in the multivariate case, since there are many possible directions for approaching a given point in higher dimensions. For example, consider the bivariate function f(x,y) that is defined as follows.

f[x_, y_] := (x y)/(x^2 + y^2)

The limiting value of this function at the origin is 0 if we approach it along the x axis, which is given by y=0, since the function has the constant value 0 along this line.

f[x, 0]

Similarly, the limiting value of the function at the origin is 0 if we approach it along the y axis, which is given by x=0.

f[0, y]

However, the limit is 1/2 if we approach the origin along the line y=x, as seen here.

f[x, y] /. {y -> x}

More generally, the limiting value changes as we approach the origin along different lines y=m x.

f[x, y] /. {y -> m x} // Simplify
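
Maximizing and minimizing over the slope m shows that these directional limits sweep out the entire interval from -1/2 to 1/2.

MaxValue[m/(1 + m^2), m]

MinValue[m/(1 + m^2), m]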

The directional dependence of the limiting value implies that the true multivariate limit does not exist. In Version 11.2, Limit handles multivariate examples with ease, and quickly returns the expected answer Indeterminate for the limit of this function at the origin.

Limit[f[x, y], {x, y} -> {0, 0}]

A plot of the surface z=f(x,y) confirms the behavior of the function near the origin.

Plot3D[f[x, y], {x, -4, 4}, {y, -4, 4}]

This example indicates that, in general, a multivariate limit need not exist. In other cases, such as the following, the limit exists but the computation is subtle.

f[x_, y_] := (x^2 + y^2)/(3^(Abs[x] + Abs[y]) - 1)

This plot indicates that the limit of this function at {0,0} exists and is 0, since the function values appear to approach 0 from all directions.

Plot3D[f[x, y], {x, -1, 1}, {y, -1, 1}, PlotRange -> All]

The answer can be confirmed by applying Limit to the function directly.

Limit[f[x, y], {x, y} -> {0, 0}]
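
A short series expansion explains why the limit exists in this case: writing t = Abs[x] + Abs[y], the denominator 3^t - 1 vanishes only linearly in t (with slope Log[3]), while the numerator x^2 + y^2 vanishes quadratically in the distance from the origin, so the ratio is forced to 0.

Series[3^t - 1, {t, 0, 2}]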

A rich source of multivariate limit examples is provided by the steady stream of inputs received by Wolfram|Alpha each day. We acquired around 100,000 anonymized queries to Wolfram|Alpha from earlier years, which were then evaluated using Version 11.2. Here is a fairly complicated example from this vast collection that Limit handles with ease in the latest version.

f[x_, y_] := Cos[Abs[x] Abs[y]] - 1

Plot3D[f[x, y], {x, -3, 3}, {y, -3, 3}]

Limit[f[x, y], {x, y} -> {0, 0}]

It is a sheer joy to browse through the examples from Wolfram|Alpha, so we decided to share 1,000 nontrivial examples from the collection with you. Sample images of the examples are shown below. The five notebooks with the examples can be downloaded here.

Downloadable notebooks

Version 11.2 evaluates 90% of the entire collection in the benchmark, which is remarkable since the functionality for multivariate limits is new in this release.

Limits pie chart

Version 11.2 also evaluates a higher fraction (96%) of an even larger collection of 1,000,000 univariate limits from Wolfram|Alpha when compared with Version 11.1 (94%). The small percentage difference between the two versions can be explained by noting that most Wolfram|Alpha queries for univariate limits relate to a first or second course in college calculus and are easily computed by Limit in either version.

Limit has been one of the most dependable functions in the Wolfram Language ever since it was first introduced in Version 1 (1988). The improvements for this function, along with DiscreteLimit and other new functions in Version 11.2, have facilitated our journey through the world of limits. I hope that you have enjoyed this brief tour, and welcome any comments or suggestions about the new features.


Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.

What Can You Say in One Line of the Wolfram Language? The 2017 One-Liner Competition http://blog.wolfram.com/2017/11/08/what-can-you-say-in-one-line-of-the-wolfram-language-the-2017-one-liner-competition/ http://blog.wolfram.com/2017/11/08/what-can-you-say-in-one-line-of-the-wolfram-language-the-2017-one-liner-competition/#comments Wed, 08 Nov 2017 15:14:34 +0000 Christopher Carlson http://blog.internal.wolfram.com/?p=39028 The One-Liner Competition is a tradition at our annual Wolfram Technology Conference, which took place at our headquarters in Champaign, Illinois, two weeks ago. We challenge attendees to show us the most impressive effects they can achieve with 128 characters or fewer of Wolfram Language code. We are never disappointed, and often surprised by what they show us can be done with the language we work so hard to develop—the language we think is the world’s most powerful and fun.

Melting flags

This year’s winning submissions included melting flags, computer vision and poetry. Read on to see how far you can go with just a few characters of Wolfram Language code…

Honorable Mention
Pedro Fonseca: Dynamically Restyled Wolf (128 characters)

Pedro’s One-Liner submission riffed on another conference competition: use the new ImageRestyle function to make an appealing restyling of the wolf icon.

e = WebImageSearch["wolf", "Thumbnails"]; b = a = Rasterize[Style[\[Wolf], 99]]; i = ImageRestyle; Dynamic[b = i[a, i[b, .8 -> RandomChoice[e]]]]

Output 1: Wolfie restyle

To stay within the 128-character limit, Pedro used the \[Wolf] character instead of the icon. Embedding the restyling in a Dynamic expression so that it displays endless variations and using random wolf images to restyle the wolf icon were nice touches that favorably impressed the judges.

Honorable Mention
Edmund Robinson: Deep File Explorer (120 characters)

Edmund’s submission actually does something useful! (But as one snarky One-Liner judge pointed out, no submission that does something useful has ever won the competition.) His file explorer uses Dataset to make a nicely formatted browser of file properties, a way to get a quick overview of what’s in a file. A lot of beautiful, useful functionality in 120 characters of code!

Robinson one-liner graphic, parts 1 and 2

Honorable Mention
Daniel Reynolds: Super Name (132 characters)

The judges had fun with Daniel’s name generator. Unfortunately, as submitted, it was four characters over the 128-character limit. Surely an oversight, as it could easily be shortened, but nevertheless the judges were obliged to disqualify the submission. We hope you’ll participate again next year, Daniel.

"CONGRATS! Your new name is " <>   ToString[Capitalize[RandomWord[]]] <> " the ruler of " <>   ToString[Capitalize[RandomWord[]]] <> "sylvania!"

Third Place
Amy Friedman: Autumn Wolframku (83 characters)

Amy’s “Wolframku” generator is itself, at a mere 83 characters, programming poetry. Using WikipediaData to gather autumn-themed words and the brand-new TakeList function to form them poetically, it generates haiku-like verses—often indecipherable, sometimes surprisingly profound.

StringRiffle[  TakeList[RandomChoice[TextWords[WikipediaData["Autumn"]], 11], {3, 5,     3}]]

Amy, a professor of English, has learned the Wolfram Language in part with the encouragement and help of her son Jesse, the youngest-ever prizewinner in our One-Liner Competition, who took second place in 2014 at the age of 13.

Second Place
Peter Roberge: Toy Self-Driving Car (119 characters)

Peter’s submission does an amazing amount of image processing in a few characters of code, adjusting, recognizing and highlighting frames to identify and track vehicles in a video. The smarts are contained in the new ImageContents function, which Peter must have done some sleuthing to discover, since it is included in 11.2 but not yet documented.

e = ExampleData; e@e[e[][[12]]][[4]] /.   x_Graphics :>    HighlightImage[x,     Normal@ImageContents[ImageAdjust[x, {0, 0, 4.7}]][All, 1]]

Output 5: Car tracker

First Place
George Varnavides: Animating the Melting Pot (128 characters)

George’s mastery of the Wolfram Language enabled him to squeak in right at the 128-character limit with his submission that expresses the “melting pot” metaphor both graphically—as animations of melting flags—and conceptually, since the melting effect is achieved by melding multiple flag images. His use of Echo, Infix operators, memoization and FoldList shows a deep understanding of the Wolfram Language. Nicely done, George!

f := a = DeleteMissing[#@"Flag" & /@ RandomEntity["Country", 9]]
Echo@ListAnimate@FoldList[ImageRestyle, f[[1]], Thread[.01 -> Rest@a]]~Do~9

Output 6: Melting flags

There’s more! We can’t mention every submission individually, but all merit a look. You might even pick up some interesting coding ideas from them. You can see all of the submissions in this downloadable notebook. Thanks to all who participated and impressed us with their ideas and coding prowess! See you again next year.

From Aircraft to Optics: Wolfram Innovator Awards 2017 http://blog.wolfram.com/2017/11/02/from-aircraft-to-optics-wolfram-innovator-awards-2017/ http://blog.wolfram.com/2017/11/02/from-aircraft-to-optics-wolfram-innovator-awards-2017/#comments Thu, 02 Nov 2017 17:16:34 +0000 Jesse Dohmann http://blog.internal.wolfram.com/?p=38969

As is tradition at the annual Wolfram Technology Conference, we recognize exceptional users and organizations for their innovative usage of our technologies across a variety of disciplines and fields.

Award winners with Stephen Wolfram

Nominated candidates undergo a vetting process, and are then evaluated by a panel of experts to determine winners. This year we’re excited to announce the recipients of the 2017 Wolfram Innovator Awards.

Youngjoo Chung

Youngjoo Chung

Dr. Chung is the creator of a very extensive symbolic computing and vector analysis package for the Wolfram Language. This package enhances our UI by allowing the user to enter and symbolically manipulate expressions in traditional inline notation—the same kind of notation you see when you open up a textbook.

For Wolfram as a company, one of his particular accomplishments has been his presence in the international community of Wolfram Language users. He is the president of the Korean Mathematica Users group, and is very active in arranging Mathematica user conferences across South Korea.

Massimo Fazio

Massimo Fazio

Dr. Fazio is an ophthalmology professor who analyzes data from optical coherence tomography (OCT), a powerful imaging technique that builds a 3D image from a sequence of layered 2D image slices of the eye. Usually this analysis is tedious, which makes what Dr. Fazio was able to do that much more remarkable: he has been using the 3D image processing capabilities of the Wolfram Language to automate the analysis of generated OCT images.

In the future, this automation will be extended to cover the entire diagnostic process of glaucoma just by looking at OCT images—with all computations being done using the Wolfram Cloud architecture, or even embedded within the actual devices that are making these measurements.

David Leigh-Lancaster

Mathematical Methods Computer-Based Exam Team

The Victorian Curriculum and Assessment Authority built a massive system using the Wolfram Language and Mathematica to allow students to go through their entire math education in a computer-based fashion, with actual assessments of student performance taking the form of small computational essays. This is now being done in a dozen or so schools in the state of Victoria.

Accepting the award for the team is Dr. David Leigh-Lancaster, who himself started off studying intuitionistic logic—like classical logic, but without the principle of the excluded middle or double negation rules. Eventually he moved into mathematics teaching and education policy, where he was introduced to Mathematica by secondary school students in the early 1990s; learning from his students, as good teachers often do, he quickly started using it in a very serious way.

His work resulted in the widespread use of Wolfram technologies—nearly 700,000 students in the state of Victoria now have access to our entire educational technology suite, making David instrumental in bringing a very broad license for Wolfram technology to a chunk of the country.

The efforts of David and the team are a neat example of the continuing modernization of mathematics education that’s made possible by the technology that’s been developed here for the last three decades.

David Milner

David Milner

David was introduced to Wolfram technologies last year through Wolfram SystemModeler, using it to fully render and model military vehicles. While past projects have primarily been wheeled vehicles, David recently completed a project conceptualizing a successor to the Sikorsky UH-60 Black Hawk helicopter.

His octocopter simulation is exciting to see, mostly because it’s a good example of how an extremely complex system can be modeled with our technology—all electrical and mechanical components and subsystems were built completely with SystemModeler.

Peter Nilsson

Peter Nilsson

Peter is one of the more unusual recipients of this year’s award, which in turn makes him incredibly interesting. Far from the typical English teacher, Peter organized the very first high-school digital humanities course using the Wolfram Language.

The course starts off having students analyze Hamlet using our text analysis features. This same analysis is then applied to the students’ own writing, allowing them to see their progression through the course and compare and contrast their own writing style with that of Shakespeare.

Peter is also the director of research, innovation and outreach; he has been involved in many efforts to try and capture the knowledge contained in the practice of teaching, as well as the pure content of teaching.

His background is in English and music, but looking at his code, you wouldn’t expect that. It just goes to show that starting off in traditionally “nontechnical” subjects doesn’t mean you can’t be as sophisticated a computational thinker as “officially educated” techies—computational thinking spans all disciplines, and Peter has effectively communicated this to his students through his teaching.

Chris Reed

Chris Reed

Dr. Reed is an applied mathematician who has worked on a variety of interesting projects in numerous disciplines at the Aerospace Corporation. Using our technology since 1988, Chris has introduced countless colleagues over the years to Mathematica at Aerospace, where it is now a staple piece of software for the company.

Interestingly, many of Chris’s projects have involved algebraic computations where traditionally a numerical approach would be chosen for the job—we think this is a testament to the unique approach that a symbolic language like the Wolfram Language offers when problem solving.

Chris has attended many Wolfram Tech Conferences over the years, using the Wolfram Language to analyze a variety of interesting problems—including problems with satellite motion: he found a way to change their orbit using much less fuel than traditionally required, translating to potentially larger payloads. Additionally, he has used the Wolfram Language to create queuing simulations and management systems for other companies.

Tarkeshwar Singh

Tarkeshwar Singh

Dr. Singh works for Quiet Light Securities, a proprietary trading company out of Chicago that trades in index options and other markets. When the company first got off the ground, trading was still a very physical activity, with people jumping into trading pits and gesticulating wildly to indicate whether they were buying or selling, so it’s interesting to see the timeline of Quiet Light as it has evolved into the now-computational world of trading.

This evolution, spearheaded by Dr. Singh, includes the automation of operations at Quiet Light using the Wolfram Language and the Computable Document Format (CDF). Going forward, Dr. Singh is working to build an automated trading system—completely from scratch.

Dr. Singh himself has a wonderfully eclectic background: with a PhD in quantum electronics, an MS in financial mathematics and an MBA, it’s not surprising that he’s been able to do such extraordinary things with our technology. In addition to all of these accolades, he’s also a major in the Illinois Air National Guard who deployed to Qatar late last year. Dr. Singh is the perfect example of a Wolfram power user—having used our technology for more than 20 years.

Marco Thiel

Marco Thiel

Well known to people who frequent Wolfram Community, Dr. Thiel is the author of a huge number of fascinating contributions to the forum—his posts cover some really cool stuff on many different topics. One of his posts that quickly rose to popularity on the site analyzed the 2015 Ebola outbreak. Another popular post of his detailed—using 20 years’ worth of oceanographic data—how the flow of water could transport radioactive particles from the Fukushima nuclear plant further out into the ocean.

It’s also interesting to note that Dr. Thiel has been using the Wolfram Language in very far-reaching ways—ranging from legal analyses to medical-oriented applications—and he is keen on teaching his students how to do the same: his modeling course aims to teach students how to use real-world data and the Wolfram Language to connect what they know from other courses, effectively producing miniature versions of projects that Dr. Thiel would do himself.

Andrew Yule

Andrew Yule

Assured Flow Solutions (AFS) is a Dallas-based specialty engineering firm within the oil and gas industry. Specifically, they investigate the problem of flow assurance: once you’ve drilled an oil well, how can you ensure oil is actually flowing out of that well?

This might seem like a niche question, but as anyone from Texas will tell you, it’s a very critical question that lies at the heart of the state’s infrastructure and can have a lasting and direct economic impact on consumers. If you drive a car, then you’ve undoubtedly been affected by the work Andrew does: AFS is responsible for keeping oil flowing in many of the world’s major oil fields and wells, especially offshore sites deep in the ocean.

Andrew is the technology manager at AFS, and has been centralizing the computations done in-house—computations that used to be done with a hodgepodge of various tools and methods. Through EnterpriseCDF, Andrew was able to create about 40 unique dashboards for different calculations and workflows that analyze aspects of flow assurance used to keep the oil flowing in different parts of the world.

And That’s a Wrap!

The Innovator Awards are a way to pay homage to people who do great things using Wolfram technology: each recipient has leveraged the language to do exciting things in their community, and it’s through their work that we really see how powerful a tool the Wolfram Language is.

A Note

While we are no doubt happy to announce the recipients of the 2017 Wolfram Innovator Awards, I want to take a moment to highlight the fact that it’s not as diverse a list as it could be—not a single woman was presented with an Innovator Award this year, and none of the winners from any year were under the age of 35. This is due to a variety of factors, but it would be remiss of me to ignore this reality—especially when you can so easily see it.

In past years, we’ve had diverse lists of innovators that better represented the broad spectrum of users that accomplish great things with our technology. Which is why, going forward, we hope to highlight the efforts of innovators from varying backgrounds and give them a platform to showcase their achievements. There are so many people out there using the Wolfram Language for interesting things, and we want the rest of the world to see that!

To read a more in-depth account of the scope of each individual project, visit the Wolfram Innovator Awards website.

Inside Scoops from the 2017 Wolfram Technology Conference http://blog.wolfram.com/2017/11/01/inside-scoops-from-the-2017-wolfram-technology-conference/ http://blog.wolfram.com/2017/11/01/inside-scoops-from-the-2017-wolfram-technology-conference/#comments Wed, 01 Nov 2017 17:08:53 +0000 Swede White http://blog.internal.wolfram.com/?p=38914 Wolfram Technology Conference

Two weeks ago at the Wolfram Technology Conference, a diverse lineup of hands-on training, workshops, talks and networking events was impressively orchestrated over the course of four days, culminating in a one-of-a-kind annual experience for users and enthusiasts of Wolfram technologies. It was a unique experience where researchers and professionals interacted directly with those who build each component of the Wolfram technology stack—Mathematica, Wolfram|Alpha, the Wolfram Language, Wolfram SystemModeler, Wolfram Enterprise Private Cloud and everything in between.

Users from across disciplines, industries and fields also interacted with one another to share how they use Wolfram technologies to successfully innovate at their institutions and organizations. It was not uncommon for software engineers or physicists to glean new tricks and tools from a social scientist or English teacher—or vice versa—a testament to the diversity and wide range of cutting-edge uses Wolfram technologies provide.

A Brief Data Analysis of the Conference

Attendees traveled from 18 countries for the experience, representing fields from mathematical physics to education and curriculum development.

Conference attendee map

One hundred thirty-nine talks were divided into five broad tracks: Information and Data Science; Education; Cloud and Software Development; Visualization and Image Processing; and Science, Math and Engineering.
Presentation topic chart

We can take a look at talk abstracts by track using WordCloud. See if you can guess which ones they correspond to.

Word clouds 1

If you guessed Data Science, Science/Math/Engineering, Cloud/Software Development, Education, and Visualization/Image Processing from left to right by row, you have a keen eye.

We can also look at all talk abstracts divided into nouns, verbs, adjectives and adverbs. Perhaps a Wolfram Technology Conference abstract generator could be built upon this.

Word clouds 2
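
As a playful sketch of that generator idea (allAbstracts here is a hypothetical string holding the pooled abstract text), one could simply sample words from the abstracts at random:

StringRiffle[RandomChoice[TextWords[allAbstracts], 25]]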

It was most impressive to see those at the conference who use the Wolfram Language for data science, education or medical image processing be able to ask questions directly to the R&D experts and software developers who make those tools possible for them. As Stephen Wolfram said, “It’s fun to build these things, but it’s perhaps even more fun to see the cool ways people use it.”

A highlight of the conference was the keynote dinner in honor of the 2017 Wolfram Innovator Award winners. Nine individuals and organizations from finance, education, oil and gas, applied research, academia and engineering were represented. We’ll have more on these outstanding individuals in a forthcoming blog post. For now, a tease of where they came from.

Award winner map

Hands-on Workshops

The conference kicked off with hands-on training, where attendees received individualized instruction on how to use the Wolfram Language for their research and professional projects, led by experts in Wolfram technologies for data science and statistical analysis.

Hands-on training session

Abrita Chakravarty, team lead of the Wolfram Technology Group, guided participants through a deep dive into data science with a morning session focused on project workflows, followed by an afternoon workshop devoted to data wrangling and analytical methods. Among the Wolfram Language functions highlighted were FeatureExtraction, DimensionReduce, Classify, Predict and more of Wolfram’s sophisticated machine learning algorithms for highly automated data science applications.

Take a look at Etienne Bernard’s (lead architect in Wolfram’s Advanced Research Group) recent blog post “Building the Automated Data Scientist: The New Classify and Predict” for further explanations of new machine learning features released in Wolfram Language Version 11.2. You can also view a livestream of Etienne demonstrating these features on Twitch.

In addition to hands-on training in data science, Tuseeta Banerjee, a Wolfram certified instructor, led a morning workshop on applied statistical analysis with the Wolfram Language. From hypothesis testing using DistributionFitTest to automated modeling using GeneralizedLinearModelFit, among many other functions, attendees were given the tools necessary to tell a complete analytical story from exploratory analysis and descriptive statistics to predictive analytics and visualization.

Wolfram U has on-demand courses available in data science, statistics, programming and other domains if you’re interested in learning how to use cutting-edge methods and the largest collection of built-in algorithms available in the Wolfram tech stack.

Stephen Wolfram’s Keynote Address

A highlight of the conference was Stephen Wolfram’s annual keynote talk, which covered an incredible amount of information over two and a half hours of live demonstrations in the Wolfram Language.

Celebrating 30 years of R&D at Wolfram Research since the company was founded in 1987, Stephen noted that’s about half the time since modern computer languages were invented. Next year, Wolfram celebrates 30 years of Mathematica—it’s fairly rare for software to remain so widely used, but the sheer amount of innovation that has gone into the product ensures its longevity.

Stephen highlighted some of the many new features in Wolfram Language Version 11.2 and noted that ImageIdentify, which was announced in 2015, is at once a pioneer in general-purpose image identification neural networks and still paving the way as a building block in new Wolfram technologies. The neural net has been trained so well that it can identify a jack o’ lantern carved in the fashion of Stephen Wolfram’s Wolfram|Alpha person curve.

Jack o' lantern

From there, Stephen touched on everything from cloud and mobile apps to blockchain, with some examples of their uses for individuals, organizations and enterprise. It was a fast-paced, quickfire presentation that covered two Wolfram Language version releases (11.1 and 11.2), the Wolfram Data Repository, SystemModeler 5, Wolfram Player for iOS, Wolfram|Alpha Enterprise and a slew of upcoming functionalities on the horizon. He also hinted at how some large and well-known companies are using the Wolfram tech stack in ways that allow millions of people to interact with them on a daily basis.

Word cloud 3

Stephen told the audience about ongoing efforts in K–12 education and Wolfram Programming Lab, along with resources available to individuals of all ages and organizations of all sizes to learn more about how to use the Wolfram tech stack in their work and projects.

He also pointed to his recent livestreams, open and accessible to anyone, of Wolfram Language design review meetings—a rare glimpse into how software is actually made. You can view the collection of on-demand videos here.

Wolfram R&D Expert Panel

Each year at the Tech Conference, a panel of Wolfram R&D experts preview what’s new in Wolfram technologies and what’s to come. This gives attendees the unique opportunity to learn how Wolfram technologies are made, along with the ability to ask the people who pave the way of innovation at Wolfram questions about future functionalities. This year’s panel included:

  • Arnoud Buzing, Director of Quality and Release Management
  • John Fultz, Director of User Interface Technology
  • Roger Germundsson, Director of Research and Development
  • Tom Wickham-Jones, Director of Kernel Technology

Roger gave an overview of the hundreds of new functions introduced in Wolfram Language 11.2, along with hundreds more improved functions that are continually in development.

Expert panel

Perhaps one of the biggest highlights was a preview of the Wolfram Neural Net Repository, which provides a uniform system for storing neural net models in an immediately computable form.

Neural Net Repository

Including models from the latest research papers, as well as ones trained or created at Wolfram Research, the Wolfram Neural Net Repository is built to be a global resource for neural net models. Classification, image processing, feature extraction and regression are just a few clicks away using the Wolfram tech stack.

Let’s look at some of the more creative uses of the Wolfram Language presented at talks during the conference.

Creative Highlight 1: Marathon Viewer

Jeff Bryant and Eila Stiegler demonstrated how the Wolfram Language can be used to analyze races and marathons, using the Illinois Marathon as an example.
Illinois marathon animation

Using data from the race, they showed how using functions like Interpreter can make the pain of wrangling and cleaning data easier. Jeff and Eila were able to take the data and create an animation that shows each runner’s progress through the marathon, with dynamics that indicate volumes of runners at any given time and location. Not only is this incredibly useful for people virtually tracking the progress of runners, but it also has applications for city and urban planning.

Creative Highlight 2: Building an Interactive Game Modeled on Jeopardy!

Robert Nachbar, project director with Wolfram Solutions, demonstrated an interactive game of Jeopardy! built in the Wolfram Language that he modestly said took him about a weekend to build.

Jeopardy! game

Attendees were impressed with its functionality and excited about its application in education. Using built-in Wolfram Language functions like Dynamic and interactive buttons, Robert showed how an API call can be used to create a game of Jeopardy! with existing clues or how a custom game can be built. To demonstrate, he used clues and questions specific to Wolfram Language documentation. Toward the end of the presentation, a brief game was played providing clues to Wolfram Language functions that the audience could then respond to, providing a nice model for learning any topic one might think to program into the game.

Creative Highlight 3: Food Data in Wolfram|Alpha

Andrew Steinacher, developer in Wolfram|Alpha scientific content, gave his third talk in as many years on food and nutrition data in Wolfram|Alpha. The Wolfram Language now has nearly 150,000 searchable (and computable) foods. New computable features include PLU codes, used by grocery stores worldwide, and acidity levels, full nutritional information, ingredients and substitutions, along with barcode recognition for better alignment with international foods.

Wolfram|Alpha food

Future goals for food data in the Wolfram Language include better food and nutrition coverage for the rest of the world, specifically Asia; adding more packaged foods and more available data, such as storage temperatures, packaging dimensions and materials; aligning ingredient entities to chemical entities; new FDA nutrition labels with support for multiple sizes/styles; and computational recipes, including food quantities and nutrition, actions, equipment and substitutions. One can easily imagine how these tools will certainly innovate the food production and food service industries.

Creative Highlight 4: Presenting Presenter Tools

In something of a meta-talk, Andre Kuzniarek, director of Document and Media Systems, gave a presentation of Presenter Tools, an upcoming feature of Wolfram desktop products. In his talk, he showed how talks created in Wolfram Notebooks can be prepared and presented with convenient formatting tools and dynamic content scaling to match any screen resolution. While some of this functionality already exists in the Wolfram Language, this improved framework elevates presentations to a new level of aesthetics and interactivity.

Wolfram Livecoding Championship

A fun evening highlight of the conference was the Wolfram Livecoding Championship led by Stephen Wolfram. For the contest, Stephen gave challenges to the participants, and they were then tasked with finding a solution to the problem using an elegant piece of Wolfram Language code.

Approximately 20 participants took part in the contest and responded to challenges ranging from finding digit sequences in π to string manipulation to finding the earliest 2016 sunrise in Champaign, Illinois.

Jon McLoone, director of Technical Communication and Strategy at Wolfram Research Europe, took home the prize for the most solved challenges.

Livecoding Championship

The event was streamed live from Wolfram Research and Stephen Wolfram’s Twitch channels, and you can watch the video-on-demand here.

Wolfram Language Logo ImageRestyle Competition


Wolfie restyle samples

This year, a new contest was introduced to see who could use ImageRestyle, new in Wolfram Language Version 11.2, to generate the most creative and interesting version of the Wolfram Language logo. Over 70 entries were received, and contestants were required to use the logo and another image or images to come up with a new machine-generated version of “Wolfie.”

This year’s winner was Emmanuel Garces Madina for the following submission.

Wolfie restyle winner

More Wolfram Technology Conference Posts to Come

Chris Carlson, senior user interface developer, will present a recap of this year’s One-Liner Competition. Also, technical writer Jesse Dohmann will introduce this year’s winners of the Wolfram Technology Innovator Awards.

Building the Automated Data Scientist: The New Classify and Predict http://blog.wolfram.com/2017/10/10/building-the-automated-data-scientist-the-new-classify-and-predict/ http://blog.wolfram.com/2017/10/10/building-the-automated-data-scientist-the-new-classify-and-predict/#comments Tue, 10 Oct 2017 16:43:19 +0000 Etienne Bernard http://blog.internal.wolfram.com/?p=38653 Automated Data Science

Imagine a baker connecting a data science application to his database and asking it, “How many croissants are we going to sell next Sunday?” The application would simply answer, “According to your recorded data and other factors such as the predicted weather, there is a 90% chance that between 62 and 67 croissants will be sold.” The baker could then plan accordingly. This is an example of an automated data scientist, a system to which you could throw arbitrary data and get insights or predictions in return.

One key component in making this a reality is the ability to learn a predictive model without specifications from humans besides the data. In the Wolfram Language, this is the role of the functions Classify and Predict. For example, let’s train a classifier to recognize morels from hedgehog mushrooms:

c = Classify[{

We can now use the resulting ClassifierFunction on new examples:

c[

c[

And we can obtain a probability for each possibility:
c[

As another example, let’s train a PredictorFunction to predict the average monthly temperature for some US cities:

data = RandomSample[ResourceData["Sample Data: US City Temperature"]]

p = Predict[data ->

Again, we can use the resulting function to make a prediction:

p[<|

And we can obtain a distribution of predictions:

dist = p[<|

As you can see, Classify and Predict do not need to be told what the variables are, what preprocessing to perform or which algorithm to use: they are automated functions.

New Classify and Predict

We introduced Classify and Predict in Version 10 of the Wolfram Language (about three years ago), and have been happy to see them used in various contexts (my favorite involves an astronaut, a plane and a Raspberry Pi). In Version 11.2, we decided to give these functions a complete makeover. The most visible update is the introduction of an information panel in order to get feedback during the training:

Classify progress animation

With it, one can monitor things such as the current best method and the current accuracy, and one can get an idea of how long the training will be—very useful in deciding if it is worth continuing or not! If one wants to stop the training, there are two ways to do it: either with the Stop button or by directly aborting the evaluation. In both cases, the best model that Classify and Predict have come up with so far is returned (but the Stop interruption is softer: it waits until the current training is over).

A similar panel is now displayed when using ClassifierInformation and PredictorInformation on a classifier or a predictor:

Classify set 1

We tried to show some useful information about the model, such as its accuracy (on a test set), the time it takes to evaluate new examples and its memory size. More importantly, you can see a “learning curve” on the bottom that shows the value of the loss (the measure that one is trying to minimize) as a function of the number of examples that have been used for training. By pressing the left/right arrows, one can also look at other curves, such as the accuracy as a function of the number of training examples:

Classify set 2

Such curves are useful in figuring out if one needs more data to train on or not (e.g. when the curves are plateauing). We hope that giving easy access to them will ease the modeling workflow (for example, it might reduce the need to use ClassifierMeasurements and PredictorMeasurements).

An important update is the addition of the TimeGoal option, which allows one to specify how long one wishes the training to take, e.g.:

c = Classify[{1, 2, 3, 4} -> {

ClassifierInformation[c,

TimeGoal has a different meaning than TimeConstraint: it is not about specifying a maximum amount of time, but really a goal that should be reached. Setting a higher time goal allows the automation system to try additional things in order to find a better model. In my opinion, this makes TimeGoal the most important option of both Classify and Predict (followed by Method and PerformanceGoal).
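
For instance, a minimal sketch of its use, with trainingData standing in as a placeholder for your own dataset, would give the automation a one-minute budget like this:

c = Classify[trainingData, TimeGoal -> 60] (* goal of 60 seconds; trainingData is a placeholder *)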

On the method side, things have changed as well. Each method now has its own documentation page ("LogisticRegression", "NearestNeighbors", etc.) that gives generic information and allows experts to play with the options that are described. We also added two new methods: "DecisionTree" and, more noticeably, "GradientBoostedTrees", which is a favorite of data scientists. Here is a simple prediction example:

data = # -> Sin[2 #] + Cos[#] + RandomReal[] & /@ RandomReal[10, 200];

p = Predict[data, Method -> "GradientBoostedTrees"]
Show[ListPlot[List @@@ data, PlotStyle -> Gray, PlotLegends -> {

Under the Hood…

OK, let’s now get to the main change in Version 11.2, which is not directly visible: we reimplemented the way Classify and Predict determine the optimal method and hyperparameters for a given dataset (in a sense, the core of the automation). For those who are interested, let me try to give a simple explanation of how this procedure works for Classify.

A classifier needs to be trained using a method (e.g. "LogisticRegression", "RandomForest", etc.) and each method needs to be given some hyperparameters (such as "L2Regularization" or "NeighborsNumber"). The automation procedure is there to figure out the best configuration (i.e. the best method + hyperparameters) to use according to how well the classifier (trained with this configuration) performs on a test set, but also how fast or how small in memory the classifier is. It is hard to know if a given configuration would perform well without actually training and testing it. The idea of our procedure is to start with many configurations that we believe could perform well (let’s say 100), then train these configurations on small datasets and use the information gathered during these “experiments” to predict how well the configurations would perform on the full dataset. The predictions are not perfect, but they are useful in selecting a set of promising configurations that will be trained on larger datasets in order to gather more information (you might notice some similarities with the Hyperband procedure). This operation is repeated until only a few configurations (sometimes even just one) are trained on the full dataset. Here is a visualization of the loss function for some configurations (each curve represents a different one) that underwent this operation:
Training graph

As you can see, many configurations have been trained on 10 and 40 examples, but just a few of them on 200 examples, and only one of them on 800 examples. We found in our benchmarks that the final configuration obtained is often the optimal one (among the ones present in the initial configuration set). Also, since training on smaller datasets is faster, the time needed for the entire procedure is not much greater than the time needed to train one configuration on the full dataset, which, as you can imagine, is much faster than training all configurations on the full dataset!
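
To make the idea concrete, here is a toy sketch of this successive-halving-style selection loop (not the actual implementation inside Classify and Predict). It assumes configs is a list of candidate method/hyperparameter settings and validationLoss is a hypothetical helper that trains a configuration on a sample and returns its validation loss:

selectConfiguration[configs_List, data_List] :=
 Module[{remaining = configs, n = 16},
  While[Length[remaining] > 1 && n < Length[data],
   (* keep the better half, judged on a small random sample *)
   remaining = TakeSmallestBy[remaining,
     validationLoss[#, RandomSample[data, n]] &,
     Ceiling[Length[remaining]/2]];
   (* then give the survivors four times as much data *)
   n *= 4];
  First[remaining]]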

Besides being faster than the previous version, this automation strategy was necessary to bring some of the capabilities that I presented above. For example, the procedure directly produces an estimation of model performances and learning curves. Also, it enables the display of a progress bar and quickly produces valid models that can be returned if the Stop button is pressed. Finally, it enables the introduction of the TimeGoal option by adapting the number of intermediate trainings depending on the amount of time available.

We hope that you will find ways to use this new version of Classify and Predict. Don’t hesitate to give us feedback. The road to a fully automated data scientist is still long, but we’re getting closer!


Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.

Notebooks in Your Pocket—Wolfram Player for iOS Is Now Shipping http://blog.wolfram.com/2017/10/04/notebooks-in-your-pocket-wolfram-player-for-ios-is-now-shipping/ http://blog.wolfram.com/2017/10/04/notebooks-in-your-pocket-wolfram-player-for-ios-is-now-shipping/#comments Wed, 04 Oct 2017 14:16:19 +0000 John Fultz http://blog.internal.wolfram.com/?p=38437 Ten months ago, I announced the beginning of our open beta program for Wolfram Player for iOS. The beta is over, and we are now shipping Wolfram Player in the App Store. Wolfram Player for iOS joins Wolfram CDF Player on Windows, Mac and Linux as a free platform for sharing your notebook content with the world.

Wolfram Player

Wolfram Player is the first native computational notebook experience ever on iOS. You can now take your notebooks with you and play them offline. Wolfram Player supports notebooks running interfaces backed by Version 11.1 of the Wolfram Language—an 11.2 release will come shortly. Wolfram Player includes the same kernel that you would find in any desktop or cloud release of the Wolfram Language.

Installing and running Wolfram Player on your iPhone or iPad is free. Once installed, you’ll be able to view any notebook or Computable Document Format (CDF) file, including ones with dynamic content. If you have notebooks in Dropbox, Files or any other file-sharing service on iOS, it’s very easy to open them via whatever means the sharing app uses to export files to other apps. Opening a notebook from an email attachment or a webpage is as simple as tapping the file link and choosing to open it in Player. Wolfram Player also has full support of sideloading and AirDrop.

I’m particularly keen on the interface for supporting our cloud products, including the Wolfram Cloud and Wolfram Enterprise Private Cloud. Once you log into a cloud product from Wolfram Player, your account shows up as a server, which can be browsed just like your local file system. We used this feature a lot as we were developing Wolfram Player, and the cloud integration with the mobile and desktop platforms makes it super easy to create, access and view files in a centralized way.

Wolfram Cloud Player login

If you have a Wolfram Cloud subscription, make sure you log into it from the app. This enables functionality in the app, including the ability to interact with Manipulate results and other interfaces. Otherwise, you can enable interactivity through an in-app purchase.

Almost 30 years ago, we introduced the notebook paradigm to the world. We’ve seen the notebook shift in form over time with the inclusion of modern typesetting and user interfaces. Notebooks came to the cloud, and now they can live in your pocket. One might have thought that 30 years would exhaust the possibilities, but in many ways, I feel like we’re just getting started.

Wolfram Notebooks timeline

Computational Microscopy with the Wolfram Language http://blog.wolfram.com/2017/09/29/computational-microscopy-with-the-wolfram-language/ http://blog.wolfram.com/2017/09/29/computational-microscopy-with-the-wolfram-language/#comments Fri, 29 Sep 2017 14:42:13 +0000 Shadi Ashnai http://blog.internal.wolfram.com/?p=38517 Microscopes were invented almost four hundred years ago. But today, there’s a revolution in microscopy (as in so many other fields) associated with computation. We’ve been working hard to make the Wolfram Language a definitive platform for the emerging field of computational microscopy.

It all starts with getting an image of some kind—whether from a light or x-ray microscope, a transmission electron microscope (TEM), a confocal laser scanning microscope (CLSM), two-photon excitation or a scanning electron microscope (SEM), among many others. You can then proceed to enhance images, reconstruct objects and perform measurements, detection, recognition and classification. At last month’s Microscopy & Microanalysis conference, we showed various examples of this pipeline, starting with a Zeiss microscope and a ToupTek digital camera.

Microanalysis tools

Image Acquisition

Use Import to bring standard image file formats into the Wolfram Language. (More exotic file formats generated by microscopes are accessible via BioFormatsLink.) What’s even cooler is that you can also connect to a microscope to stream images directly into CurrentImage.
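
In its simplest form, the acquisition step is just a line or two (the file name here is hypothetical):

img = Import["specimen.tif"];
frame = CurrentImage[] (* captures a frame from a connected camera *)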

Once an image is imported, you’re off to the races with all the power of the Wolfram Language.

Brightness Equalization

Often, images acquired by microscopes exhibit uneven illumination. The uneven illumination can be fixed either by adjusting the image background according to a given flat field or by modeling the illumination of the visible background. BrightnessEqualize achieves exactly this.

Here is a raw image of a sugar crystal under the microscope:

img=[image];

Here is a pure image adjustment:

ImageAdjust[img]

And here is the result of brightness equalization using an empirical flat field:

BrightnessEqualize[img, ]

If a flat-field image is not available, construct one. You can segment the background and model its illumination with a second-order polynomial:

mask = ColorNegate[AlphaChannel[RemoveBackground[img]]]

BrightnessEqualize[img, {"Global", 2}, Masking -> mask]

Color Deconvolution

Color deconvolution is a technique to convert images of stained samples into distributions of dye uptake.

Here is a stained sample using hematoxylin C19 and DAB (3,3′-Diaminobenzidine):

img=

The corresponding RGB color for each dye is:

Hematoxylin = ; Diaminobenzidine =

Obtain the transformation matrix from dye concentration to RGB colors:

HDABtoRGB = Transpose[List @@@ {Hematoxylin, Diaminobenzidine}]; MatrixForm[HDABtoRGB]

Compute the inverse transformation from color to dye concentration:

RGBtoHDAB = PseudoInverse[HDABtoRGB]; MatrixForm[RGBtoHDAB]

Perform the actual de-mixing in the log-scale of color intensities, since the color absorption is exponentially proportional to the dye concentration:

logRGB = Log[ImageClip[ColorConvert[img, "RGB"]]] + 2.0;

The color deconvolution into hematoxylin and DAB dye concentration:

HDAB = Map[ImageAdjust, -RGBtoHDAB.ColorSeparate[logRGB]]

False coloring of the dye concentration:

ColorCombine[{{1, 0}, {0, 1}, {0, 1}}.HDAB]

Image Viewing and Manual Measurements

To view large images, use DynamicImage, which is an efficient image pane for zooming, panning, dragging and scrolling in-core or out-of-core images:

largeImg = URLDownload[   "http://cache.boston.com/universal/site_graphics/blogs/bigpicture/micro_11_14/m08_pollenmix.jpg"];

DynamicImage[largeImg, ImageSize -> 400]

The following code is all it takes to implement a customized interactive interface for radius measurements of circular objects. You can move the position and radius of the superimposed circle via Alt+drag or Command+drag. The radius of the circle is displayed in the top-left corner:

DynamicModule[  {r = 80, center = Import[largeImg, "ImageSize"]/2,    centerMovQ = False},  Panel@EventHandler[    DynamicImage[     largeImg,     Epilog -> {Yellow, Circle[Dynamic[center], Dynamic[r]],        Dynamic@Style[         Text["r=" <> ToString[Round@r] <> "px", Scaled[{0.1, 0.9}]],          FontSize -> 14]},     ImageSize -> 400     ],    {"MouseDown" :> If[       CurrentValue["AltKey"],       If[        EuclideanDistance[center, MousePosition["Graphics"]] < 2 r/3,         center = MousePosition["Graphics"]; centerMovQ = True,         r = EuclideanDistance[center, MousePosition["Graphics"]];         centerMovQ = False        ]       ],     "MouseDragged" :> If[       CurrentValue["AltKey"],       If[        centerMovQ,        center = MousePosition["Graphics"],         r = EuclideanDistance[center, MousePosition["Graphics"]]        ]       ]},    PassEventsDown -> Dynamic[Not[CurrentValue["AltKey"]]]    ]  ]

Focus Stacking

To overcome the shallow depth of field of microscopes, you can collect a focal stack, which is a stack of images, each with a different focal length. You can compress the focal stack into a single image by selectively taking in-focus regions of each image in the stack. The function ImageFocusCombine does exactly that.

stack = {

ImageFocusCombine[stack]

Here is a reimplementation of ImageFocusCombine that also extracts the depth information, going one step further to reconstruct a 3D model from the focal stack.

Take the norm of the Laplacian filter as an indicator for a pixel being in or out of focus. The Laplacian filter picks up the high Fourier coefficients, which are subdued first if an image is out of focus:

focusResponses =
  Norm[ColorSeparate[LaplacianGaussianFilter[#, {3, 1}]]] & /@ stack;
ImageAdjust /@ focusResponses

Then for each pixel, pick the layer that exhibits the largest Laplacian filter norm:

max = Image3DProjection[Image3D[focusResponses], Top, "Max"];
depthVol = nonMaxChannelSuppression[focusResponses];
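The helper nonMaxChannelSuppression used above is not a built-in function, and its definition is missing from this excerpt. Here is a minimal sketch of one possible definition, assuming the behavior described: it returns a binary volume (one mask per layer) marking, for each pixel, the layer with the largest focus response:

(* hypothetical helper: for each pixel, keep only the layer with the
   maximum focus response; returns one binary mask image per layer *)
nonMaxChannelSuppression[responses_List] :=
 Module[{data, maxLayer},
  data = ImageData /@ responses;
  (* index of the strongest response at each pixel position *)
  maxLayer =
   Map[First[Ordering[#, -1]] &, Transpose[data, {3, 1, 2}], {2}];
  (* one binary mask per layer *)
  Table[Image[1 - Unitize[maxLayer - n]], {n, Length[data]}]]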

Multiply the resulting binary volume with the focal stack and add up all layers. Thus, you collect only those pixel values that are in focus:

inFocus = stack.depthVol

The binary volume depthVol contains the depth information of each pixel. Convert it into a two-dimensional depth map:

depthMap = ImageClip[depthVol.N[Rescale[Range[Length[depthVol]]]]]

The depth information is quite noisy and not equally reliable at all pixel locations. Only edges provide a clear indication of whether an image region is in focus. Thus, use the total of the focusResponses as a confidence measure for the depth map:

confidenceMap = Total@focusResponses

Take into account only those depth measures with a confidence larger than 0.05:

depthMap *= Binarize[confidenceMap, 0.05]

You can regularize the depth values with MedianFilter and close gaps via FillingTransform:

depthMap = FillingTransform[MedianFilter[depthMap, 3]]

Display the depth map in 3D using the in-focus image as its texture:

ListPlot3D[
 Reverse[ImageData[Thumbnail[depthMap]]],
 PlotStyle -> Texture[inFocus],
 BoxRatios -> {1, ImageAspectRatio[depthMap], 1/3},
 InterpolationOrder -> 1, Lighting -> {{"Ambient", White}},
 Mesh -> False, Axes -> None]

An Example with Machine Learning: Pollen Classification

The Wolfram Language has powerful machine learning capabilities that allow implementation of various detection, recognition or classification applications in microscopy.

Here is a small dataset of six flower pollen types that we would like to classify:

pollenData =

Typically, training a neural network from scratch requires a huge dataset. However, by using a pretrained model as a feature extractor, we can classify even such a small dataset.

Take the VGG-16 network trained on ImageNet available through NetModel:

net = NetModel["VGG-16 Trained on ImageNet Competition Data"]

Remove a few layers at the end that perform the specific classification in this network. This leaves you with a network that generates a feature vector:

featureFunction = Take[net, {1, "fc7"}]

Next, compute the feature vector for all images in the pollen dataset:

training = Flatten[Map[#["Pollen"] &, pollenData]];

features = Map[featureFunction, training];

features // First

The feature vectors live in a 4096-dimensional space. To quickly verify that the feature vectors are suitable for classifying the data, reduce the feature space to three dimensions and see that the pollen images appear to group nicely by type:

xyz = DimensionReduce[features, 3, Method -> "TSNE"];
Graphics3D[
 MapThread[
  Inset[Thumbnail[RemoveBackground@#2, 32], #1] &, {xyz, training}],
 BoxRatios -> {1, 1, 1}]

To increase the size of the training set and to make it rotation- and reflection-invariant, generate additional data:

MirrorRotate[img_Image] :=   With[{imgs = NestList[ImageRotate, img, 3]},    Join[imgs, Map[ImageReflect, imgs]]]

training =    Flatten@Map[     Thread[Flatten[MirrorRotate /@ #["Pollen"]] -> #["Name"]] &,      pollenData];

With that training data, create a classifier:

pollenClassifier =   Classify[RandomSample@training, FeatureExtractor -> featureFunction]

Test the classifier on some new data samples:

Map[(# -> pollenClassifier[#]) &, {

An Example with Deep Neural Nets: Detecting Mitosis

The previous classifier relied on a pretrained neural network. If you have enough data, you can train a neural network from scratch, a network that automatically learns the relevant features and simultaneously acts as a subsequent classifier.

As an example, let’s talk about detecting cells that undergo mitosis. Here is a simple convolutional neural network that can do the job:

mitosisNet = NetChain[
  <|
   "layer1" -> {ConvolutionLayer[24, 5, "Stride" -> 2],
     BatchNormalizationLayer[], ElementwiseLayer[Ramp]},
   "layer2" -> {ConvolutionLayer[48, 3, "Stride" -> 2],
     BatchNormalizationLayer[], ElementwiseLayer[Ramp]},
   "layer3" -> {ConvolutionLayer[64, 3, "Stride" -> 2],
     BatchNormalizationLayer[], ElementwiseLayer[Ramp]},
   "layer4" -> {ConvolutionLayer[128, 3, "Stride" -> 2],
     BatchNormalizationLayer[], ElementwiseLayer[Ramp]},
   "layer5" -> {ConvolutionLayer[128, 3], BatchNormalizationLayer[],
     ElementwiseLayer[Ramp]},
   "layer6" -> {FlattenLayer[]},
   "layer7" -> {LinearLayer[64], ElementwiseLayer[Ramp]},
   "layer8" -> {LinearLayer[2], SoftmaxLayer[]}
   |>,
  "Input" -> NetEncoder[{"Image", 97, "ColorChannels" -> 3}],
  "Output" -> NetDecoder[{"Class", {True, False}}]]

The data for training and testing has been extracted from the Tumor Proliferation Assessment Challenge 2016. We preprocessed the data into 97×97 images, centered around the actual cells in question.

data =

Use roughly three-quarters of the data for training and the rest for testing:

trainingData = Take[data, 1800];

testData = Drop[data, 1800];

Again, to increase the training set, perform image mirroring and rotation:

trainedMitosisNet = NetTrain[
  mitosisNet,
  Normal@RandomSample@
    AssociationMap[Thread,
     KeyMap[Map[MirrorRotate, #] &, trainingData]],
  MaxTrainingRounds -> 8,
  ValidationSet ->
   Normal@RandomSample@
     AssociationMap[Thread,
      KeyMap[Map[MirrorRotate, #] &, testData]]]

Training progress readout

Calculate the classifier metrics and verify the effectiveness of the neural network:

cm = ClassifierMeasurements[trainedMitosisNet, Normal@testData]

cm["ConfusionMatrixPlot"]

cm["Accuracy"]

cm["Specificity"]

cm["Error"]

Considering the challenging task, an error rate of less than 10% is comparable to what a pathologist would achieve.

Conclusion

Computational microscopy is an emerging field and an example of how all the different capabilities of the Wolfram Language come to bear. We intend to expand the scope of our functions further to provide the definitive platform for microscope image analysis.


It’s Another Impressive Release! Launching Version 11.2 Today http://blog.wolfram.com/2017/09/14/its-another-impressive-release-launching-version-11-2-today/ http://blog.wolfram.com/2017/09/14/its-another-impressive-release-launching-version-11-2-today/#comments Thu, 14 Sep 2017 15:46:40 +0000 Stephen Wolfram http://blog.internal.wolfram.com/?p=38391 Our Latest R&D Output

I’m excited today to announce the latest output from our R&D pipeline: Version 11.2 of the Wolfram Language and Mathematica—available immediately on desktop (Mac, Windows, Linux) and cloud.

It was only this spring that we released Version 11.1. But after the summer we’re now ready for another impressive release—with all kinds of additions and enhancements, including 100+ entirely new functions:

New functions word cloud

We have a very deliberate strategy for our releases. Integer releases (like 11) concentrate on major complete new frameworks that we’ll be building on far into the future. “.1” releases (like 11.2) are intended as snapshots of the latest output from our R&D pipeline, delivering new capabilities large and small as soon as they’re ready.

Version 11.2 has a mixture of things in it—ranging from ones that provide finishing touches to existing major frameworks, to ones that are first hints of major frameworks under construction. One of my personal responsibilities is to make sure that everything we add is coherently designed, and fits into the long-term vision of the system in a unified way.

And by the time we’re getting ready for a release, I’ve been involved enough with most of the new functions we’re adding that they begin to feel like personal friends. So when we’re doing a .1 release and seeing what new functions are going to be ready for it, it’s a bit like making a party invitation list: who’s going to be able to come to the big celebration?

Years back there’d be a nice list, but it would be of modest length. Today, however, I’m just amazed at how fast our R&D pipeline is running, and how much comes out of it every month. Yes, we’ve been consistently building our Wolfram Language technology stack for more than 30 years—and we’ve got a great team. But it’s still a thrill for me to see just how much we’re actually able to deliver to all our users in a .1 release like 11.2.

Advances in Machine Learning

It’s hard to know where to begin. But let’s pick a current hot area: machine learning.

We’ve had functionality that would now be considered machine learning in the Wolfram Language for decades, and back in 2014 we introduced the “machine-learning superfunctions” Classify and Predict—to give broad access to modern machine learning. By early 2015, we had state-of-the-art deep-learning image identification in ImageIdentify, and then, last year, in Version 11, we began rolling out our full symbolic neural net computation system.

Our goal is to push the envelope of what’s possible in machine learning, but also to deliver everything in a nice, integrated way that makes it easy for a wide range of people to use, even if they’re not machine-learning experts. And in Version 11.2 we’ve actually used machine learning to add automation to our machine-learning capabilities.

So, in particular, Classify and Predict are significantly more powerful in Version 11.2. Their basic scheme is that you give them training data, and they’ll learn from it to automatically produce a machine-learning classifier or predictor. But a critical thing in doing this well is to know what features to extract from the data—whether it’s images, sounds, text, or whatever. And in Version 11.2 Classify and Predict have a variety of new kinds of built-in feature extractors that have been pre-trained on a wide range of kinds of data.

But the most obviously new aspect of Classify and Predict is how they select the core machine-learning method to use (as well as hyperparameters for it). (By the way, 11.2 also introduces things like optimized gradient-boosted trees.) And if you run Classify and Predict now in a notebook you’ll actually see them dynamically figuring out and optimizing what they’re doing (needless to say, using machine learning):

Classify and Predict animation

By the way, you can always press Stop to stop the training process. And with the new option TimeGoal you can explicitly say how long the training should be planned to be—from seconds to years.
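For illustration, here is a toy run (the tiny dataset below is invented) that caps the automatic method search and training at 30 seconds:

(* train a classifier on a tiny labeled dataset, with a 30-second time goal *)
c = Classify[{1.1 -> "A", 1.9 -> "B", 1.2 -> "A", 2.1 -> "B"},
   TimeGoal -> 30];
c[1.95]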

As a field, machine learning is advancing very rapidly right now (in the course of my career, I’ve seen perhaps a dozen fields in this kind of hypergrowth—and it’s always exciting). And one of the things about our general symbolic neural net framework is that we’re able to take new advances and immediately integrate them into our long-term system—and build on them in all sorts of ways.

At the front lines of this is the function NetModel—to which new trained and untrained models are being added all the time. (The models are hosted in the cloud—but downloaded and cached for desktop or embedded use.) And so, for example, a few weeks ago NetModel got a new model for inferring geolocations of photographs—that’s based on basic research from just a few months ago:

NetModel["ResNet-101 Trained on YFCC100M Geotagged Data"]

NetModel["ResNet-101 Trained on YFCC100M Geotagged Data"]

Now if we give it a picture with sand dunes in it, its top inferences for possible locations seem to center around certain deserts:

GeoBubbleChart[NetModel["ResNet-101 Trained on YFCC100M Geotagged Data"]["<image suppressed>", {"TopProbabilities", 50}]]

GeoBubbleChart[
 NetModel["ResNet-101 Trained on YFCC100M Geotagged Data"][
  CloudGet["https://wolfr.am/dunes"], {"TopProbabilities", 50}]]

NetModel handles networks that can be used for all sorts of purposes—not only as classifiers, but also, for example, as feature extractors.

Building on NetModel and our symbolic neural network framework, we’ve also been able to add new built-in classifiers to use directly from Classify. So now, in addition to things like sentiment, we have NSFW, face age and facial expression (yes, an actual tiger isn’t safe, but in a different sense):

Classify["NSFWImage", "<image suppressed>"]

Classify["NSFWImage",CloudGet["https://wolfr.am/tiger"]]

Our built-in ImageIdentify function (whose underlying network you can access with NetModel) has been tuned and retrained for Version 11.2—but fundamentally it’s still a classifier. One of the important things that’s happening with machine learning is the development of new types of functions, supporting new kinds of workflows. We’ve got a lot of development going on in this direction, but for 11.2 one new (and fun) example is ImageRestyle—that takes a picture and applies the style of another picture to it:

ImageRestyle["<image suppressed>", "<image suppressed>"]

ImageRestyle[\[Placeholder],\[Placeholder]]

And in honor of this new functionality, maybe it’s time to get the image on my personal home page replaced with something more “styled”—though it’s a bit hard to know what to choose:

ImageRestyle[#, \[Placeholder], PerformanceGoal -> "Quality", TargetDevice -> "GPU"] & /@ {\[Placeholder], \[Placeholder], \[Placeholder], \[Placeholder], \[Placeholder], \[Placeholder]}

ImageRestyle gallery

ImageRestyle[#, , PerformanceGoal -> "Quality",
   TargetDevice ->
    "GPU"] & /@ {\[Placeholder], \[Placeholder], \[Placeholder], \
\[Placeholder], \[Placeholder], \[Placeholder]}

By the way, another new feature of 11.2 is the ability to directly export trained networks and other machine-learning functionality. If you’re only interested in an actual network, you can get it in MXNet format—suitable for immediate execution wherever MXNet is supported. In typical real situations, there’s some pre- and post-processing that’s needed as well—and the complete functionality can be exported in WMLF (Wolfram Machine Learning Format).
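As a sketch of how that looks (the file names are placeholders, and net and classifier stand for any trained network and classifier):

(* export just the network in MXNet format; the weights are written
   to a .params file alongside the .json architecture file *)
Export["net.json", net, "MXNet"]

(* export complete functionality, including pre- and post-processing *)
Export["classifier.wmlf", classifier]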

Cloud (and iOS) Notebooks

We invented the idea of notebooks back in 1988, for Mathematica 1.0—and over the past 29 years we’ve been steadily refining and extending how they work on desktop systems. About nine years ago we also began the very complex process of bringing our notebook interface to web browsers—to be able to run notebooks directly in the cloud, without any need for local installation.

It’s been a long, hard journey. But between new features of the Wolfram Language and new web technologies (like isomorphic React, Flow, MobX)—and heroic efforts of software engineering—we’re finally reaching the point where our cloud notebooks are ready for robust prime-time use. Like, try this one:

Notebook

We actually do continuous releases of the Wolfram Cloud—but with Version 11.2 of the Wolfram Language we’re able to add a final layer of polish and tuning to cloud notebooks.

You can create and compute directly on the web, and you can immediately “peel off” a notebook to run on the desktop. Or you can start on the desktop, and immediately push your notebook to the cloud, so it can be shared, embedded—and further edited or computed with—in the cloud.

By the way, when you’re using the Wolfram Cloud, you’re not limited to desktop systems. With the Wolfram Cloud App, you can work with notebooks on mobile devices too. And now that Version 11.2 is released, we’re able to roll out a new version of the Wolfram Cloud App, that makes it surprisingly realistic (thanks to some neat UX ideas) to write Wolfram Language code even on your phone.

Talking of mobile devices, there’s another big thing that’s coming: interactive Wolfram Notebooks running completely locally and natively on iOS devices—both tablets and phones. This has been another heroic software engineering project—which actually started nearly as long ago as the cloud notebook project.

The goal here is to be able to read and interact with—but not author—notebooks directly on an iOS device. And so now with the Wolfram Player App that will be released next week, you can have a notebook on your iOS device, and use Manipulate and other dynamic content, as well as read and navigate notebooks—with the whole interface natively adapted to the touch environment.

For years it’s been frustrating when people send me notebook attachments in email, and I’ve had to do things like upload them to the cloud to be able to read them on my phone. But now with native notebooks on iOS, I can immediately just read notebook attachments directly from email.

Mathematical Limits

Math was the first big application of the Wolfram Language (that’s why it was called Mathematica!)… and for more than 30 years we’ve been committed to aggressively pursuing R&D to expand the domain of math that can be made computational. And in Version 11.2 the biggest math advance we’ve made is in the area of limits.

Mathematica 1.0 back in 1988 already had a basic Limit function. And over the years Limit has gradually been enhanced. But in 11.2—as a result of algorithms we’ve developed over the past several years—it’s reached a completely new level.

The simple-minded way to compute a limit is to work out the first terms in a power series. But that doesn’t work when functions increase too rapidly, or have wild and woolly singularities. But in 11.2 the new algorithms we’ve developed have no problem handling things like this:

Limit[E^(E^x + x^2) (-Erf[E^-E^x - x] - Erf[x]), x -> \[Infinity]]

Limit[E^(E^x + x^2) (-Erf[E^-E^x - x] - Erf[x]), x -> \[Infinity]]
"Limit[3*x + Sqrt[9*x^2 + 4*x - Sin[x]], x -> -Infinity]

Limit[(3 x + Sqrt[9 x^2 + 4 x - Sin[x]]), x -> -\[Infinity]]

It’s very convenient that we have a test set of millions of complicated limit problems that people have asked Wolfram|Alpha about over the past few years—and I’m pleased to say that with our new algorithms we can now immediately handle more than 96% of them.

Limits are in a sense at the very core of calculus and continuous mathematics—and to do them correctly requires a huge tower of knowledge about a whole variety of areas of mathematics. Multivariate limits are particularly tricky—with the main takeaway from many textbooks basically being “it’s hard to get them right”. Well, in 11.2, thanks to our new algorithms (and with a lot of support from our algebra, functional analysis and geometry capabilities), we’re finally able to correctly do a very wide range of multivariate limits—saying whether there’s a definite answer, or whether the limit is provably indeterminate.
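For example, here is a classic textbook case: along the path y = k x^2 the function below approaches k/(1 + k^2), which depends on k, so no limit exists, and 11.2 should report it as indeterminate:

(* approaches k/(1 + k^2) along y = k x^2, so the limit is path-dependent *)
Limit[(x^2 y)/(x^4 + y^2), {x, y} -> {0, 0}]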

Version 11.2 also introduces two other convenient mathematical constructs: MaxLimit and MinLimit (sometimes known as lim sup and lim inf). Ordinary limits have a habit of being indeterminate whenever things get funky, but MaxLimit and MinLimit have definite values, and are what come up most often in applications.

So, for example, there isn’t a definite ordinary limit here:

Limit[Sin[x] + Cos[x/4], x -> Infinity]

Limit[Sin[x] + Cos[x/4], x -> \[Infinity]]

But there’s a MaxLimit, that turns out to be a complicated algebraic number:

MaxLimit[Sin[x] + Cos[x/4], x -> \[Infinity]] // FullSimplify

MaxLimit[Sin[x] + Cos[x/4], x -> \[Infinity]] // FullSimplify

N[%]

N[%]

Another new construct in 11.2 is DiscreteLimit, that gives limits of sequences. Here it’s illustrating the Prime Number Theorem:

DiscreteLimit[Prime[n]/(n*Log[n]), n -> Infinity]

DiscreteLimit[Prime[n]/(n Log[n]), n -> \[Infinity]]

And here it’s giving the limiting value of the solution to a recurrence relation:

DiscreteLimit[RSolveValue[{x[n+1] == Sqrt[1 + x[n]+1/x[n]], x[1] == 3}, x[n], n],n->\[Infinity]]

DiscreteLimit[
 RSolveValue[{x[n + 1] == Sqrt[1 + x[n] + 1/x[n]], x[1] == 3}, x[n],
  n], n -> \[Infinity]]

All Sorts of New Data

There’s always new data in the Wolfram Knowledgebase—flowing every second from all sorts of data feeds, and systematically being added by our curators and curation systems. The architecture of our cloud and desktop system allows both new data and new types of data (as well as natural language input for it) to be immediately available in the Wolfram Language as soon as it’s in the Wolfram Knowledgebase.

And between Version 11.1 and Version 11.2, there’ve been millions of updates to the Knowledgebase. There’ve also been some new types of data added. For example—after several years of development—we’ve now got well-curated data on all notable military conflicts, battles, etc. in history:

Entity["MilitaryConflict", "SecondPunicWar"][EntityProperty["MilitaryConflict", "Battles"]]

Entity["MilitaryConflict", "SecondPunicWar"][
 EntityProperty["MilitaryConflict", "Battles"]]

GeoListPlot[%]

GeoListPlot[%]

Another thing that’s new in 11.2 is greatly enhanced predictive caching of data in the Wolfram Language—making it much more efficient to compute with large volumes of curated data from the Wolfram Knowledgebase.

By the way, Version 11.2 is the first new version to be released since the Wolfram Data Repository was launched. And through the Data Repository, 11.2 has access to nearly 600 curated datasets across a very wide range of areas. 11.2 also now supports functions like ResourceSubmit, for programmatically submitting data for publication in the Wolfram Data Repository. (You can also publish data yourself just using CloudDeploy.)
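For example, fetching one of the published entries (this particular dataset name is taken from the repository) is a one-liner:

(* pull a curated dataset from the Wolfram Data Repository *)
fireballs = ResourceData["Fireballs and Bolides"];
Take[fireballs, 3]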

There’s a huge amount of data and types of computations available in Wolfram|Alpha—that with great effort have been brought to the level where they can be relied on, at least for the kind of one-shot usage that’s typical in Wolfram|Alpha. But one of our long-term goals is to take as many areas as possible and raise the level even higher—to the point where they can be built into the core Wolfram Language, and relied on for systematic programmatic usage.

In Version 11.2 an area where this has happened is ocean tides. So now there’s a function TideData that can give tide predictions for any of the tide stations around the world. I actually found myself using this function in a recent livecoding session I did—where it so happened that I needed to know daily water levels in Aberdeen Harbor in 1913. (Watch the Twitch recording to find out why!)

TideData[Entity["City", {"Aberdeen", "Maryland", "UnitedStates"}], "WaterLevel", DateRange[DateObject[{1913, 1, 1}], DateObject[{1913, 12, 31}], "Day"]]

TideData[Entity[
  "City", {"Aberdeen", "Maryland", "UnitedStates"}], "WaterLevel",
 DateRange[DateObject[{1913, 1, 1}], DateObject[{1913, 12, 31}],
  "Day"]]

DateListPlot[%]

DateListPlot[%]

GeoImage

GeoGraphics and related functions have built-in access to detailed maps of the world. They’ve also had access to low-resolution satellite imagery. But in Version 11.2 there’s a new function GeoImage that uses an integrated external service to provide full-resolution satellite imagery:

GeoImage[GeoDisk[Entity["Building", "ThePentagon::qzh8d"], Quantity[0.4, "Miles"]]]

GeoImage[GeoDisk[Entity["Building", "ThePentagon::qzh8d"],
  Quantity[0.4, "Miles"]]]
GeoImage[GeoDisk[Entity["Building", "Stonehenge::46k59"],    Quantity[250, "Feet"]]]

GeoImage[GeoDisk[Entity["Building", "Stonehenge::46k59"],
  Quantity[250, "Feet"]]]

I’ve ended up using GeoImage in each of the two livecoding sessions I did just recently. Yes, in principle one could go to the web and find a satellite image of someplace, but it’s amazing what a different level of utility one reaches when one can programmatically get the satellite image right inside the Wolfram Language—and then maybe feed it to image processing, or visualization, or machine-learning functions. Like here’s a feature space plot of satellite images of volcanos in California:

FeatureSpacePlot[GeoImage /@ GeoEntities[Entity["AdministrativeDivision", {"California", "UnitedStates"}], "Volcano"]]

FeatureSpacePlot[
 GeoImage /@
  GeoEntities[
   Entity["AdministrativeDivision", {"California", "UnitedStates"}],
   "Volcano"]]

We’re always updating and adding all sorts of geo data in the Wolfram Knowledgebase. And for example, as of Version 11.2, we’ve now got high-resolution geo elevation data for the Moon—which came in very handy for our recent precision eclipse computation project.

ListPlot3D[GeoElevationData[GeoDisk[Entity["MannedSpaceMission", "Apollo15"][EntityProperty["MannedSpaceMission", "LandingPosition"]], Quantity[10, "Miles"]]], Mesh -> None]

ListPlot3D[
 GeoElevationData[
  GeoDisk[Entity["MannedSpaceMission", "Apollo15"][
    EntityProperty["MannedSpaceMission", "LandingPosition"]],
   Quantity[10, "Miles"]]], Mesh -> None]

Visualization

One of the obvious strengths of the Wolfram Language is its wide range of integrated and highly automated visualization capabilities. Version 11.2 adds some convenient new functions and options. An example is StackedListPlot, which, as its name suggests, makes stacked (cumulative) list plots:

StackedListPlot[RandomInteger[10, {3, 30}]]

StackedListPlot[RandomInteger[10, {3, 30}]]

There’s also StackedDateListPlot, here working with historical time series from the Wolfram Knowledgebase:

StackedDateListPlot[  EntityClass[   "Country", {    EntityProperty["Country", "Population"] -> TakeLargest[10]}][   Dated["Population", All],     "Association"], PlotLabels -> Automatic]

StackedDateListPlot[
 EntityClass[
  "Country", {
   EntityProperty["Country", "Population"] -> TakeLargest[10]}][
  Dated["Population", All],
    "Association"], PlotLabels -> Automatic]

StackedDateListPlot[EntityClass["Country", {EntityProperty["Country", "Population"] -> TakeLargest[10]}][Dated["Population", All], "Association"], PlotLabels -> Automatic, PlotLayout -> "Percentile"]

StackedDateListPlot[
 EntityClass[
  "Country", {
   EntityProperty["Country", "Population"] -> TakeLargest[10]}][
  Dated["Population", All],
  "Association"], PlotLabels -> Automatic, PlotLayout -> "Percentile"]

One of our goals in the Wolfram Language is to make good stylistic choices as automatic as possible. And in Version 11.2 we’ve, for example, added a whole collection of plot themes for AnatomyPlot3D. You can always explicitly give whatever styling you want. But we provide many default themes. You can pick a classic anatomy book look (by the way, all these 3D objects are fully manipulable and computable):

AnatomyPlot3D[Entity["AnatomicalStructure", "LeftHand"], PlotTheme -> "Classic"]

AnatomyPlot3D[Entity["AnatomicalStructure", "LeftHand"],PlotTheme -> "Classic"]

Or you can go for more of a Gray’s Anatomy look:

AnatomyPlot3D[Entity["AnatomicalStructure", "LeftHand"], PlotTheme -> "Vintage"]

AnatomyPlot3D[Entity["AnatomicalStructure", "LeftHand"],
 PlotTheme -> "Vintage"]

Or you can have a “scientific” theme that tries to make different structures as distinct as possible:

AnatomyPlot3D[Entity["AnatomicalStructure", "LeftHand"], PlotTheme -> "Scientific"]

AnatomyPlot3D[Entity["AnatomicalStructure", "LeftHand"],
 PlotTheme -> "Scientific"]

3D Computational Geometry

The Wolfram Language has very strong computational geometry capabilities—that work on both exact surfaces and approximate meshes. It’s a tremendous algorithmic challenge to smoothly handle constructive geometry in 3D—but after many years of work, Version 11.2 can do it:

RegionIntersection[MengerMesh[2, 3],
 BoundaryDiscretizeRegion[Ball[{1, 1, 1}]]]

RegionIntersection[MengerMesh[2, 3],
 BoundaryDiscretizeRegion[Ball[{1, 1, 1}]]]

And of course, everything fits immediately into the rest of the system:

Volume[%]

Volume[%]

More Audio

Version 11 introduced a major new framework for large-scale audio processing in the Wolfram Language. We’re still developing all sorts of capabilities based on this framework, especially using machine learning. And in Version 11.2 there are a number of immediate enhancements. There are very practical things, like built-in support for AudioCapture under Linux. There’s also now the notion of a dynamic AudioStream, whose playback can be programmatically controlled.
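Here is a minimal sketch of that programmatic control, assuming the playback functions accept the stream object:

(* make a stream from three seconds of generated audio,
   start playback, then pause it from code *)
stream = AudioStream[AudioGenerator["Sine", 3]];
AudioPlay[stream];
Pause[1];
AudioPause[stream]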

Another new function is SpeechSynthesize, which creates audio from text:

SpeechSynthesize["hello"]

SpeechSynthesize["hello"]

Spectrogram[%]

Spectrogram[%]

Capture the Screen

The Wolfram Language tries to let you get data wherever you can. One capability added for Version 11.2 is being able to capture images of your computer screen. (Rasterize has been able to rasterize complete notebooks for a long time; CurrentNotebookImage now captures an image of what’s visible from a notebook on your screen.)  Here’s an image of my main (first) screen, captured as I’m writing this post:

CurrentScreenImage[1]

CurrentScreen output

CurrentScreenImage[1]

Of course, I can now do computation on this image, just like I would on any other image. Here’s a map of the inferred “saliency” of different parts of my screen:

ImageSaliencyFilter["<image suppressed>"]//Colorize

ImageSaliencyFilter[CurrentScreenImage[1]]//Colorize

Language Features

Part of developing the Wolfram Language is adding major new frameworks. But another part is polishing the system, and implementing new functions that make doing things in the system ever easier, smoother and clearer.

Here are a few functions we’ve added in 11.2. The first is simple, but useful: TakeList—a function that successively takes blocks of elements from a list:

TakeList[Alphabet[], {2, 5, 3, 4}]

TakeList[Alphabet[], {2, 5, 3, 4}]

Then there’s FindRepeat (a “colleague” of FindTransientRepeat), that finds exact repeats in sequences—here for a Fibonacci sequence mod 10:

FindRepeat[Mod[Array[Fibonacci, 500], 10]]

FindRepeat[Mod[Array[Fibonacci, 500], 10]]

Here’s a very different kind of new feature: an addition to Capitalize that applies the heuristics for capitalizing “important words” to make something “title case”. (Yes, for an individual string this doesn’t look so useful; but it’s really useful when you’ve got 100 strings from different sources to make consistent.)

Capitalize["a new kind of science", "TitleCase"]

Capitalize["a new kind of science", "TitleCase"]

Talking of presentation, here’s a simple but very useful new output format: DecimalForm. Numbers are normally displayed in scientific notation when they get big, but DecimalForm forces “grade school” number format, without scientific notation:

Table[16.5^n, {n, 10}]

Table[16.5^n, {n, 10}]

DecimalForm[Table[16.5^n, {n, 10}]]

DecimalForm[Table[16.5^n, {n, 10}]]

Another language enhancement added in 11.2—though it’s really more of a seed for the future—is TwoWayRule, input as <->. Ever since Version 1.0 we’ve had Rule (->), and over the years we’ve found Rule increasingly useful as an inert structure that can symbolically represent diverse kinds of transformations and connections. Rule is fundamentally one-way: “left-hand side goes to right-hand side”. But one also sometimes needs a two-way version—and that’s what TwoWayRule provides.

Right now TwoWayRule can be used, for example, to enter undirected edges in a graph, or pairs of levels to exchange in Transpose. But in the future, it’ll be used more and more widely.

Graph[{1 <-> 2, 2 <-> 3, 3 <-> 1}]

Graph[{1 <-> 2, 2 <-> 3, 3 <-> 1}]

11.2 has all sorts of other language enhancements. Here’s an example of a somewhat different kind: the functions StringToByteArray and ByteArrayToString, which handle the somewhat tricky issue of converting between raw byte arrays and strings with various encodings (like UTF-8).
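For example, a string with non-ASCII characters round-trips through its UTF-8 bytes:

(* non-ASCII characters become multiple UTF-8 bytes *)
bytes = StringToByteArray["naïve", "UTF-8"]

(* decode back to the original string *)
ByteArrayToString[bytes, "UTF-8"]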

Initialization & System Operations

How do you get the Wolfram Language to automatically initialize itself in some particular way? All the way from Version 1.0, you’ve been able to set up an init.m file to run at initialization time. But finally now in Version 11.2 there’s a much more general and programmatic way of doing this—using InitializationValue and related constructs.

It’s made possible by the PersistentValue framework introduced in 11.1. And what’s particularly nice about it is that it allows a whole range of “persistence locations”—so you can store your initialization information on a per-session, per-computer, per-user, or also (new in 11.2) per-notebook way.

Talking about things that go all the way back to Version 1.0, here’s a little story. Back in Version 1.0, Mathematica (as it then was) pretty much always used to display how much memory was still available on your computer (and, yes, you had to be very careful back then because there usually wasn’t much). Well, somewhere along the way, as virtual memory became widespread, people started thinking that “available memory” didn’t mean much, and we stopped displaying it. But now, after being gone for 25+ years, modern operating systems have let us bring it back—and there’s a new function MemoryAvailable in Version 11.2. And, yes, for my computer the result has gained about 5 digits relative to what it had in 1988:

MemoryAvailable[]

MemoryAvailable[ ]

Unified Asynchronous Tasks

There’ve been ways to do some kinds of asynchronous or “background” tasks in the Wolfram Language for a while, but in 11.2 there’s a complete systematic framework for it. There’s a thing called TaskObject that symbolically represents an asynchronous task. And there are basically now three ways such a task can be executed. First, there’s CloudSubmit, which submits the task for execution in the cloud. Then there’s LocalSubmit, which submits the task to be executed on your local computer, but in a separate subkernel. And finally, there’s SessionSubmit, which executes the task in idle time in your current Wolfram Language session.

When you submit a task, it’s off getting executed (you can schedule it to happen at particular times using ScheduledTask). The way you “hear back” from the task is through “handler functions”: functions that are set up when you submit the task to “handle” certain events that can occur during the execution of the task (completion, errors, etc.).

There are also functions like TaskSuspend, TaskAbort, TaskWait and so on, that let you interact with tasks “from the outside”. And, yes, when you’re doing big machine-learning trainings, for example, this comes in pretty handy.
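Here’s a minimal sketch of the pattern: submit a computation to run in idle time of the current session, with a handler that fires on completion:

(* run in the background of this session; print when the task finishes *)
task = SessionSubmit[
   Pause[2]; Prime[10^6],
   HandlerFunctions -> <|
     "TaskFinished" -> (Print["result: ", #EvaluationResult] &)|>]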

Connectivity

We’re always keen to make the Wolfram Language as connected as it can be. And in Version 11.2 we’ve added a variety of features to achieve that. In Version 11 we introduced the Authentication option, which lets you give credentials in functions like URLExecute. Version 11 already allowed for PermissionsKey (a.k.a. an “app id”). In 11.2 you can now give an explicit username and password, and you can also use SecuredAuthenticationKey to provide OAuth credentials. It’s tricky stuff, but I’m pleased with how cleanly we’re able to represent it using the symbolic character of the Wolfram Language—and it’s really useful when you’re, for example, actually working with a bunch internal websites or APIs.

Back in Version 10 (2014) we introduced the very powerful idea of using APIFunction to provide a symbolic specification for a web API—that could be deployed to the cloud using CloudDeploy. Then in Version 10.2 we introduced MailReceiverFunction, which responds not to web requests, but instead to receiving mail messages. (By the way, in 11.2 we’ve considerably strengthened SendMail, notably adding various authentication and address validation capabilities.)

In Version 11, we introduced the channel framework, which allows for publish-subscribe interactions between Wolfram Language instances (and external programs)—enabling things like chat, as well as a host of useful internal services. Well, in our continual path of automating and unifying, we’re introducing in 11.2 ChannelReceiverFunction—which can be deployed to the cloud to respond to whatever messages are sent on a particular channel.

In the low-level software engineering of the Wolfram Language we’ve used sockets for a long time. A few years ago we started exposing some socket functionality within the language. And now in 11.2 we have a full socket framework. The socket framework supports both traditional TCP sockets, as well as modern ZeroMQ sockets.
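As a sketch, here’s a raw TCP conversation, speaking minimal HTTP by hand (example.com is just a reachable placeholder):

(* open a TCP socket, send a request, read the reply, and clean up *)
socket = SocketConnect["example.com:80", "TCP"];
WriteString[socket, "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n"];
response = ByteArrayToString[SocketReadMessage[socket]];
Close[socket];
StringTake[response, 60]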

External Programs

Ever since the beginning, the Wolfram Language has been able to communicate with external C programs—actually using its native WSTP (Wolfram Symbolic Transfer Protocol) symbolic expression transfer protocol. Years ago J/Link and .NetLink enabled seamless connection to Java and .Net programs. RLink did the same for R. Then there are things like LibraryLink, that allow direct connection to DLLs—or RunProcess for running programs from the shell.

But 11.2 introduces a new form of external program communication: ExternalEvaluate. ExternalEvaluate is for doing computation in languages which—like the Wolfram Language—support REPL-style input/output. The first two examples available in 11.2 are Python and NodeJS.

Here’s a computation done with NodeJS—though this would definitely be better done directly in the Wolfram Language:

ExternalEvaluate["NodeJS", "Math.sqrt(50)"]

ExternalEvaluate["NodeJS", "Math.sqrt(50)"]

Here’s a Python computation (yes, it’s pretty funky to use & for BitAnd):

ExternalEvaluate["Python", "[ i & 10 for i in range(10)]"]

ExternalEvaluate["Python", "[ i & 10 for i in range(10)]"]

Of course, the place where things start to get useful is when one’s accessing large external code bases or libraries. And what’s nice is that one can use the Wolfram Language to control everything, and to analyze the results. ExternalEvaluate is in a sense a very lightweight construct—and one can routinely use it even deep inside some piece of Wolfram Language code.

There’s an infrastructure around ExternalEvaluate, aimed at connecting to the correct executable, appropriately converting types, and so on. There’s also StartExternalSession, which allows you to start a single external session, and then perform multiple evaluations in it.

The Whole List

So is there still more to say about 11.2? Yes! There are lots of new functions and features that I haven’t mentioned at all. Here’s a more extensive list:

New features

But if you want to find out about 11.2, the best thing to do is to actually run it. I’ve actually been running pre-release versions of 11.2 on my personal machines for a couple of months. So by now I’m taking the new features and functions quite for granted—even though, earlier on, I kept on saying “this is really useful; how could we have not had this for 30 years?”. Well, realistically, it’s taken building everything we have so far—not only to provide the technical foundations, but also to seed the ideas, for 11.2. But now our work on 11.2 is done, and 11.2 is ready to go out into the world—and deliver the latest results from our decades of research and development.
