Imagine a baker connecting a data science application to his database and asking it, “How many croissants are we going to sell next Sunday?” The application would simply answer, “According to your recorded data and other factors such as the predicted weather, there is a 90% chance that between 62 and 67 croissants will be sold.” The baker could then plan accordingly. This is an example of an automated data scientist, a system to which you could throw arbitrary data and get insights or predictions in return.

One key component in making this a reality is the ability to learn a predictive model from the data alone, without further specifications from humans. In the Wolfram Language, this is the role of the functions `Classify` and `Predict`. For example, let’s train a classifier to distinguish morels from hedgehog mushrooms:
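The training call might look like the following sketch (the image variables here are stand-ins for actual photographs, not real training data):

```
(* hypothetical image variables standing in for actual photographs *)
mushrooms = <|
   "Morel" -> {morel1, morel2, morel3},
   "Hedgehog" -> {hedgehog1, hedgehog2, hedgehog3}|>;
c = Classify[mushrooms]
```

`Classify` accepts labeled examples either as an association of class -> examples, as above, or as a list of rules of the form example -> class.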

We can now use the resulting `ClassifierFunction` on new examples:

And we can obtain a probability for each possibility:

As another example, let’s train a `PredictorFunction` to predict the average monthly temperature for some US cities:
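A minimal sketch of such a training call, with made-up temperatures in degrees Celsius (a real dataset would be much larger):

```
(* hypothetical {city, month} -> mean monthly temperature pairs *)
data = {{"Chicago", 1} -> -4.6, {"Chicago", 7} -> 23.3,
   {"Miami", 1} -> 20.1, {"Miami", 7} -> 28.9,
   {"Seattle", 1} -> 5.6, {"Seattle", 7} -> 19.2};
p = Predict[data]
```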

Again, we can use the resulting function to make a prediction:

And we can obtain a distribution of predictions:

As you can see, `Classify` and `Predict` do not need to be told what the variables are, what preprocessing to perform or which algorithm to use: they are automated functions.

We introduced `Classify` and `Predict` in Version 10 of the Wolfram Language (about three years ago), and have been happy to see them used in various contexts (my favorite involves an astronaut, a plane and a Raspberry Pi). In Version 11.2, we decided to give these functions a complete makeover. The most visible update is the introduction of an information panel that provides feedback during the training:

With it, one can monitor things such as the current best method and the current accuracy, and one can get an idea of how long the training will take—very useful in deciding whether it is worth continuing! If one wants to stop the training, there are two ways to do it: either with the Stop button or by directly aborting the evaluation. In both cases, the best model that `Classify` or `Predict` has come up with so far is returned (but the Stop interruption is softer: it waits until the current training is over).

A similar panel is now displayed when using `ClassifierInformation` and `PredictorInformation` on a classifier or a predictor:

We tried to show some useful information about the model, such as its accuracy (on a test set), the time it takes to evaluate new examples and its memory size. More importantly, you can see a “learning curve” on the bottom that shows the value of the loss (the measure that one is trying to minimize) as a function of the number of examples that have been used for training. By pressing the left/right arrows, one can also look at other curves, such as the accuracy as a function of the number of training examples:

Such curves are useful in figuring out if one needs more data to train on or not (e.g. when the curves are plateauing). We hope that giving easy access to them will ease the modeling workflow (for example, it might reduce the need to use `ClassifierMeasurements` and `PredictorMeasurements`).

An important update is the addition of the `TimeGoal` option, which allows one to specify how long one wishes the training to take, e.g.:
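For instance, one can ask for roughly two minutes of training (here `trainingData` stands for any dataset of labeled examples):

```
c = Classify[trainingData, TimeGoal -> 120]

(* equivalently, with explicit units *)
c = Classify[trainingData, TimeGoal -> Quantity[2, "Minutes"]]
```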

`TimeGoal` has a different meaning than `TimeConstraint`: it is not about specifying a maximum amount of time, but really a goal that should be reached. Setting a higher time goal allows the automation system to try additional things in order to find a better model. In my opinion, this makes `TimeGoal` the most important option of both `Classify` and `Predict` (followed by `Method` and `PerformanceGoal`).

On the method side, things have changed as well. Each method now has its own documentation page (`"LogisticRegression"`, `"NearestNeighbors"`, etc.) that gives generic information and allows experts to play with the options that are described. We also added two new methods: `"DecisionTree"` and, more notably, `"GradientBoostedTrees"`, which is a favorite of data scientists. Here is a simple prediction example:
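For instance, on a tiny made-up dataset:

```
p = Predict[{1.0 -> 1.3, 2.0 -> 2.4, 3.0 -> 3.1, 4.0 -> 4.3, 5.0 -> 5.2},
   Method -> "GradientBoostedTrees"];
p[3.5]
```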

OK, let’s now get to the main change in Version 11.2, which is not directly visible: we reimplemented the way `Classify` and `Predict` determine the optimal method and hyperparameters for a given dataset (in a sense, the core of the automation). For those who are interested, let me try to give a simple explanation of how this procedure works for `Classify`.

A classifier needs to be trained using a method (e.g. `"LogisticRegression"`, `"RandomForest"`, etc.) and each method needs to be given some hyperparameters (such as `"L2Regularization"` or `"NeighborsNumber"`). The automation procedure is there to figure out the best configuration (i.e. the best method + hyperparameters) to use according to how well the classifier (trained with this configuration) performs on a test set, but also how fast or how small in memory the classifier is. It is hard to know if a given configuration would perform well without actually training and testing it. The idea of our procedure is to start with many configurations that we believe could perform well (let’s say 100), then train these configurations on small datasets and use the information gathered during these “experiments” to predict how well the configurations would perform on the full dataset. The predictions are not perfect, but they are useful in selecting a set of promising configurations that will be trained on larger datasets in order to gather more information (you might notice some similarities with the Hyperband procedure). This operation is repeated until only a few configurations (sometimes even just one) are trained on the full dataset. Here is a visualization of the loss function for some configurations (each curve represents a different one) that underwent this operation:

As you can see, many configurations have been trained on 10 and 40 examples, but just a few of them on 200 examples, and only one of them on 800 examples. We found in our benchmarks that the final configuration obtained is often the optimal one (among the ones present in the initial configuration set). Also, since training on smaller datasets is faster, the time needed for the entire procedure is not much greater than the time needed to train one configuration on the full dataset, which, as you can imagine, is much faster than training all configurations on the full dataset!

Besides being faster than the previous version, this automation strategy was necessary to bring some of the capabilities that I presented above. For example, the procedure directly produces an estimation of model performances and learning curves. Also, it enables the display of a progress bar and quickly produces valid models that can be returned if the Stop button is pressed. Finally, it enables the introduction of the `TimeGoal` option by adapting the number of intermediate trainings depending on the amount of time available.

We hope that you will find ways to use this new version of Classify and Predict. Don’t hesitate to give us feedback. The road to a fully automated data scientist is still long, but we’re getting closer!

Wolfram Player is the first native computational notebook experience ever on iOS. You can now take your notebooks with you and play them offline. Wolfram Player supports notebooks running interfaces backed by Version 11.1 of the Wolfram Language—an 11.2 release will come shortly. Wolfram Player includes the same kernel that you would find in any desktop or cloud release of the Wolfram Language.

Installing and running Wolfram Player on your iPhone or iPad is free. Once installed, you’ll be able to view any notebook or Computable Document Format (CDF) file, including ones with dynamic content. If you have notebooks in Dropbox, Files or any other file-sharing service on iOS, it’s very easy to open them via whatever means the sharing app uses to export files to other apps. Opening a notebook from an email attachment or a webpage is as simple as tapping the file link and choosing to open it in Player. Wolfram Player also has full support for sideloading and AirDrop.

I’m particularly keen on the interface for supporting our cloud products, including the Wolfram Cloud and Wolfram Enterprise Private Cloud. Once you log into a cloud product from Wolfram Player, your account shows up as a server, which can be browsed just like your local file system. We used this feature a lot as we were developing Wolfram Player, and the cloud integration with the mobile and desktop platforms makes it super easy to create, access and view files in a centralized way.

If you have a Wolfram Cloud subscription, make sure you log into it from the app. This enables additional functionality in the app, including the ability to interact with `Manipulate` results and other interfaces. Otherwise, you can enable interactivity through an in-app purchase.

Almost 30 years ago, we introduced the notebook paradigm to the world. We’ve seen the notebook shift in form over time with the inclusion of modern typesetting and user interfaces. Notebooks came to the cloud, and now they can live in your pocket. One might have thought that 30 years would exhaust the possibilities, but in many ways, I feel like we’re just getting started.

It all starts with getting an image of some kind—whether from a light or x-ray microscope, a transmission electron microscope (TEM), a confocal laser scanning microscope (CLSM), two-photon excitation or a scanning electron microscope (SEM), among many others. You can then proceed to enhance images, reconstruct objects and perform measurements, detection, recognition and classification. At last month’s Microscopy & Microanalysis conference, we showed various examples of this pipeline, starting with a Zeiss microscope and a ToupTek digital camera.

Use `Import` to bring standard image file formats into the Wolfram Language. (More exotic file formats generated by microscopes are accessible via `BioFormatsLink`.) What’s even cooler is that you can also connect to a microscope and stream images directly into `CurrentImage`.

Once an image is imported, you’re off to the races with all the power of the Wolfram Language.

Often, images acquired by microscopes exhibit uneven illumination. This can be fixed either by adjusting the image background according to a given flat field or by modeling the illumination of the visible background. `BrightnessEqualize` achieves exactly this.

Here is a raw image of a sugar crystal under the microscope:

Here is a pure image adjustment:

And here is the result of brightness equalization using an empirical flat field:

If a flat-field image is not available, construct one. You can segment the background and model its illumination with a second-order polynomial:
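As a sketch of this approach (assuming `img` is the raw grayscale image and `bgMask` is a binary mask marking background pixels—both hypothetical names), one can fit a second-order polynomial to the background intensities and use it as a synthetic flat field:

```
(* collect {x, y, intensity} triples for the background pixels *)
{w, h} = ImageDimensions[img];
triples = Flatten[MapIndexed[{#2[[2]], #2[[1]], #1} &, ImageData[img], {2}], 1];
bgTriples = Pick[triples, Flatten[ImageData[bgMask]], 1.];

(* second-order polynomial model of the illumination *)
poly = Fit[bgTriples, {1, x, y, x^2, x y, y^2}, {x, y}];
flatField = Image[Table[poly, {y, h}, {x, w}]];

BrightnessEqualize[img, flatField]
```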

Color deconvolution is a technique to convert images of stained samples into distributions of dye uptake.

Here is a stained sample using hematoxylin C19 and DAB (3,3′-Diaminobenzidine):

The corresponding RGB color for each dye is:

Obtain the transformation matrix from dye concentration to RGB colors:

Compute the inverse transformation from color to dye concentration:

Perform the actual de-mixing in the log-scale of color intensities, since the color absorption is exponentially proportional to the dye concentration:

The color deconvolution into hematoxylin and DAB dye concentration:
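A sketch of the de-mixing computation (assuming `img` is the stained sample and `m` is the dye-to-RGB transformation matrix from above; the exact orientation of `m` depends on how it was constructed):

```
(* optical density: color absorption is logarithmic in intensity *)
od = ImageApply[-Log[Clip[#, {0.01, 1}]] &, img];

(* map RGB optical densities back to the two dye concentrations *)
invM = PseudoInverse[m];
{hematoxylin, dab} = ColorSeparate[ImageApply[invM . # &, od]]
```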

False coloring of the dye concentration:

To view large images, use `DynamicImage`, which is an efficient image pane for zooming, panning, dragging and scrolling in-core or out-of-core images:

The following code is all it takes to implement a customized interactive interface for radius measurements of circular objects. You can move the position and radius of the superimposed circle via **Alt**+drag or **Command**+drag. The radius of the circle is displayed in the top-left corner:

To overcome the shallow depth of field of microscopes, you can collect a focal stack, which is a stack of images, each with a different focal length. You can compress the focal stack into a single image by selectively taking in-focus regions of each image in the stack. The function `ImageFocusCombine` does exactly that.

Here is a reimplementation of `ImageFocusCombine` that also extracts the depth information, going one step further to reconstruct a 3D model from the focal stack.

Take the norm of the Laplacian filter as an indicator for a pixel being in or out of focus. The Laplacian filter picks up the high Fourier coefficients, which are subdued first if an image is out of focus:

Then for each pixel, pick the layer that exhibits the largest Laplacian filter norm:

Multiply the resulting binary volume with the focal stack and add up all layers. Thus, you collect only those pixel values that are in focus:
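Put together, a grayscale sketch of these steps might look as follows (assuming `stack` is a list of aligned focal-stack images):

```
(* focus indicator: smoothed norm of the Laplacian filter response *)
focusResponse = GaussianFilter[Abs[LaplacianFilter[#, 2]], 5] & /@ stack;

(* per pixel, the index of the layer with the strongest response *)
depthMap = Map[Ordering[#, -1][[1]] &,
   Transpose[ImageData /@ focusResponse, {3, 1, 2}], {2}];

(* collect only the in-focus pixel values into a single image *)
combined = Image[MapThread[Part,
   {Transpose[ImageData /@ stack, {3, 1, 2}], depthMap}, 2]]
```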

The binary volume `depthVol` contains the depth information of each pixel. Convert it into a two-dimensional depth map:

The depth information is quite noisy and not equally reliable for all pixel locations. Only edges provide a clear indication if an image region is in focus or not. Thus, use the total `focusResponse` as a confidence measure for the depth map:

Take those depth measures with a confidence larger than 0.05 into account:

You can regularize the depth values with `MedianFilter` and close gaps via `FillingTransform`:

Display the depth map in 3D using the in-focus image as its texture:

The Wolfram Language has powerful machine learning capabilities that allow implementation of various detection, recognition or classification applications in microscopy.

Here is a small dataset of six flower pollen types that we would like to classify:

Typically, one requires a huge dataset to train a neural network from scratch. However, by building on a pretrained model, we can classify even such a small dataset.

Take the VGG-16 network trained on ImageNet available through `NetModel`:

Remove a few layers at the end that perform the specific classification in this network. This leaves you with a network that generates a feature vector:
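In code this might look like the following sketch (the layer specification `"fc7"` is an assumption about the model’s layer naming; inspect the actual network to find the right cut point):

```
vgg16 = NetModel["VGG-16 Trained on ImageNet Competition Data"];

(* drop the final classification layers, keeping the feature extractor *)
featureNet = NetTake[vgg16, "fc7"]
```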

Next, compute the feature vector for all images in the pollen dataset:

The feature vectors live in a 4k-dimensional space. To quickly verify if the feature vectors are suitable to classify the data, reduce the feature space to three dimensions and see that the pollen images appear to group nicely by type:

To increase the size of the training set and to make it rotation- and reflection-invariant, generate additional data:
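One way to sketch this augmentation is to generate, for each labeled image, its four 90° rotations and their mirror images (`trainingPairs` is a hypothetical list of image -> label rules):

```
augment[img_Image] :=
 With[{rotations = NestList[ImageRotate[#, Pi/2] &, img, 3]},
  Join[rotations, ImageReflect /@ rotations]]

(* expand the image -> label training pairs eightfold *)
augmented = Flatten[Thread[augment[#[[1]]] -> #[[2]]] & /@ trainingPairs]
```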

With that training data, create a classifier:

Test the classifier on some new data samples:

The previous classifier relied on a pretrained neural network. If you have enough data, you can train a neural network from scratch, a network that automatically learns the relevant features and simultaneously acts as a subsequent classifier.

As an example, let’s talk about detecting cells that undergo mitosis. Here is a simple convolutional neural network that can do the job:

The data for training and testing has been extracted from the Tumor Proliferation Assessment Challenge 2016. We preprocessed the data into 97×97 images, centered around the actual cells in question.

Use roughly three-quarters of the data for training and the rest for testing:
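A minimal sketch of such a split (with `examples` standing for the full labeled dataset):

```
{training, testing} =
 TakeDrop[RandomSample[examples], Floor[0.75 Length[examples]]]
```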

Again, to increase the training set, perform image mirroring and rotation:

Calculate the classifier metrics and verify the effectiveness of the neural network:

Considering the challenging task, an error rate of less than 10% is comparable to what a pathologist would achieve.

Computational microscopy is an emerging field and an example of how all the different capabilities of the Wolfram Language come to bear. We intend to expand the scope of our functions further to provide the definitive platform for microscope image analysis.

I’m excited today to announce the latest output from our R&D pipeline: Version 11.2 of the Wolfram Language and Mathematica—available immediately on desktop (Mac, Windows, Linux) and cloud.

It was only this spring that we released Version 11.1. But after the summer we’re now ready for another impressive release—with all kinds of additions and enhancements, including 100+ entirely new functions:

We have a very deliberate strategy for our releases. Integer releases (like 11) concentrate on major complete new frameworks that we’ll be building on far into the future. “.1” releases (like 11.2) are intended as snapshots of the latest output from our R&D pipeline—delivering new capabilities large and small as soon as they’re ready.

Version 11.2 has a mixture of things in it—ranging from ones that provide finishing touches to existing major frameworks, to ones that are first hints of major frameworks under construction. One of my personal responsibilities is to make sure that everything we add is coherently designed, and fits into the long-term vision of the system in a unified way.

And by the time we’re getting ready for a release, I’ve been involved enough with most of the new functions we’re adding that they begin to feel like personal friends. So when we’re doing a .1 release and seeing what new functions are going to be ready for it, it’s a bit like making a party invitation list: who’s going to be able to come to the big celebration?

Years back there’d be a nice list, but it would be of modest length. Today, however, I’m just amazed at how fast our R&D pipeline is running, and how much comes out of it every month. Yes, we’ve been consistently building our Wolfram Language technology stack for more than 30 years—and we’ve got a great team. But it’s still a thrill for me to see just how much we’re actually able to deliver to all our users in a .1 release like 11.2.

It’s hard to know where to begin. But let’s pick a current hot area: machine learning.

We’ve had functionality that would now be considered machine learning in the Wolfram Language for decades, and back in 2014 we introduced the “machine-learning superfunctions” `Classify` and `Predict`—to give broad access to modern machine learning. By early 2015, we had state-of-the-art deep-learning image identification in `ImageIdentify`, and then, last year, in Version 11, we began rolling out our full symbolic neural net computation system.

Our goal is to push the envelope of what’s possible in machine learning, but also to deliver everything in a nice, integrated way that makes it easy for a wide range of people to use, even if they’re not machine-learning experts. And in Version 11.2 we’ve actually used machine learning to add automation to our machine-learning capabilities.

So, in particular, `Classify` and `Predict` are significantly more powerful in Version 11.2. Their basic scheme is that you give them training data, and they’ll learn from it to automatically produce a machine-learning classifier or predictor. But a critical thing in doing this well is to know what features to extract from the data—whether it’s images, sounds, text, or whatever. And in Version 11.2 `Classify` and `Predict` have a variety of new kinds of built-in feature extractors that have been pre-trained on a wide range of kinds of data.

But the most obviously new aspect of `Classify` and `Predict` is how they select the core machine-learning method to use (as well as hyperparameters for it). (By the way, 11.2 also introduces things like optimized gradient-boosted trees.) And if you run `Classify` and `Predict` now in a notebook you’ll actually see them dynamically figuring out and optimizing what they’re doing (needless to say, using machine learning):

By the way, you can always press Stop to stop the training process. And with the new option `TimeGoal` you can explicitly say how long the training should be planned to be—from seconds to years.

As a field, machine learning is advancing very rapidly right now (in the course of my career, I’ve seen perhaps a dozen fields in this kind of hypergrowth—and it’s always exciting). And one of the things about our general symbolic neural net framework is that we’re able to take new advances and immediately integrate them into our long-term system—and build on them in all sorts of ways.

At the front lines of this is the function `NetModel`—to which new trained and untrained models are being added all the time. (The models are hosted in the cloud—but downloaded and cached for desktop or embedded use.) And so, for example, a few weeks ago `NetModel` got a new model for inferring geolocations of photographs—that’s based on basic research from just a few months ago:

```
NetModel["ResNet-101 Trained on YFCC100M Geotagged Data"]
```

Now if we give it a picture with sand dunes in it, its top inferences for possible locations seem to center around certain deserts:

```
GeoBubbleChart[
 NetModel["ResNet-101 Trained on YFCC100M Geotagged Data"][
  CloudGet["https://wolfr.am/dunes"], {"TopProbabilities", 50}]]
```

`NetModel` handles networks that can be used for all sorts of purposes—not only as classifiers, but also, for example, as feature extractors.

Building on `NetModel` and our symbolic neural network framework, we’ve also been able to add new built-in classifiers to use directly from `Classify`. So now, in addition to things like sentiment, we have NSFW, face age and facial expression (yes, an actual tiger isn’t safe, but in a different sense):

```
Classify["NSFWImage", CloudGet["https://wolfr.am/tiger"]]
```

Our built-in `ImageIdentify` function (whose underlying network you can access with `NetModel`) has been tuned and retrained for Version 11.2—but fundamentally it’s still a classifier. One of the important things that’s happening with machine learning is the development of new types of functions, supporting new kinds of workflows. We’ve got a lot of development going on in this direction, but for 11.2 one new (and fun) example is `ImageRestyle`—that takes a picture and applies the style of another picture to it:

```
ImageRestyle[\[Placeholder], \[Placeholder]]
```

And in honor of this new functionality, maybe it’s time to get the image on my personal home page replaced with something more “styled”—though it’s a bit hard to know what to choose:

```
ImageRestyle[#, , PerformanceGoal -> "Quality", TargetDevice -> "GPU"] & /@
 {\[Placeholder], \[Placeholder], \[Placeholder],
  \[Placeholder], \[Placeholder], \[Placeholder]}
```

By the way, another new feature of 11.2 is the ability to directly export trained networks and other machine-learning functionality. If you’re only interested in the actual network, you can get it in MXNet format—suitable for immediate execution wherever MXNet is supported. In typical real situations, some pre- and post-processing is needed as well—and the complete functionality can be exported in WMLF (Wolfram Machine Learning Format).

We invented the idea of notebooks back in 1988, for Mathematica 1.0—and over the past 29 years we’ve been steadily refining and extending how they work on desktop systems. About nine years ago we also began the very complex process of bringing our notebook interface to web browsers—to be able to run notebooks directly in the cloud, without any need for local installation.

It’s been a long, hard journey. But between new features of the Wolfram Language and new web technologies (like isomorphic React, Flow, MobX)—and heroic efforts of software engineering—we’re finally reaching the point where our cloud notebooks are ready for robust prime-time use. Like, try this one:

We actually do continuous releases of the Wolfram Cloud—but with Version 11.2 of the Wolfram Language we’re able to add a final layer of polish and tuning to cloud notebooks.

You can create and compute directly on the web, and you can immediately “peel off” a notebook to run on the desktop. Or you can start on the desktop, and immediately push your notebook to the cloud, so it can be shared, embedded—and further edited or computed with—in the cloud.

By the way, when you’re using the Wolfram Cloud, you’re not limited to desktop systems. With the Wolfram Cloud App, you can work with notebooks on mobile devices too. And now that Version 11.2 is released, we’re able to roll out a new version of the Wolfram Cloud App, that makes it surprisingly realistic (thanks to some neat UX ideas) to write Wolfram Language code even on your phone.

Talking of mobile devices, there’s another big thing that’s coming: interactive Wolfram Notebooks running completely locally and natively on iOS devices—both tablets and phones. This has been another heroic software engineering project—which actually started nearly as long ago as the cloud notebook project.

The goal here is to be able to read and interact with—but not author—notebooks directly on an iOS device. And so now with the Wolfram Player App that will be released next week, you can have a notebook on your iOS device, and use `Manipulate` and other dynamic content, as well as read and navigate notebooks—with the whole interface natively adapted to the touch environment.

For years it’s been frustrating when people send me notebook attachments in email, and I’ve had to do things like upload them to the cloud to be able to read them on my phone. But now with native notebooks on iOS, I can immediately just read notebook attachments directly from email.

Math was the first big application of the Wolfram Language (that’s why it was called Mathematica!)… and for more than 30 years we’ve been committed to aggressively pursuing R&D to expand the domain of math that can be made computational. And in Version 11.2 the biggest math advance we’ve made is in the area of limits.

Mathematica 1.0 back in 1988 already had a basic `Limit` function. And over the years `Limit` has gradually been enhanced. But in 11.2—as a result of algorithms we’ve developed over the past several years—it’s reached a completely new level.

The simple-minded way to compute a limit is to work out the first terms in a power series. But that doesn’t work when functions increase too rapidly, or have wild and woolly singularities. But in 11.2 the new algorithms we’ve developed have no problem handling things like this:

```
Limit[E^(E^x + x^2) (-Erf[E^-E^x - x] - Erf[x]), x -> \[Infinity]]
```

```
Limit[(3 x + Sqrt[9 x^2 + 4 x - Sin[x]]), x -> -\[Infinity]]
```

It’s very convenient that we have a test set of millions of complicated limit problems that people have asked Wolfram|Alpha about over the past few years—and I’m pleased to say that with our new algorithms we can now immediately handle more than 96% of them.

Limits are in a sense at the very core of calculus and continuous mathematics—and to do them correctly requires a huge tower of knowledge about a whole variety of areas of mathematics. Multivariate limits are particularly tricky—with the main takeaway from many textbooks basically being “it’s hard to get them right”. Well, in 11.2, thanks to our new algorithms (and with a lot of support from our algebra, functional analysis and geometry capabilities), we’re finally able to correctly do a very wide range of multivariate limits—saying whether there’s a definite answer, or whether the limit is provably indeterminate.

Version 11.2 also introduces two other convenient mathematical constructs: `MaxLimit` and `MinLimit` (sometimes known as lim sup and lim inf). Ordinary limits have a habit of being indeterminate whenever things get funky, but `MaxLimit` and `MinLimit` have definite values, and are what come up most often in applications.

So, for example, there isn’t a definite ordinary limit here:

```
Limit[Sin[x] + Cos[x/4], x -> \[Infinity]]
```

But there’s a `MaxLimit`, which turns out to be a complicated algebraic number:

```
MaxLimit[Sin[x] + Cos[x/4], x -> \[Infinity]] // FullSimplify
```

```
N[%]
```

Another new construct in 11.2 is `DiscreteLimit`, which gives limits of sequences. Here, for example, it’s illustrating the Prime Number Theorem:

```
DiscreteLimit[Prime[n]/(n Log[n]), n -> \[Infinity]]
```

And here it’s giving the limiting value of the solution to a recurrence relation:

```
DiscreteLimit[
 RSolveValue[{x[n + 1] == Sqrt[1 + x[n] + 1/x[n]], x[1] == 3}, x[n], n],
 n -> \[Infinity]]
```

There’s always new data in the Wolfram Knowledgebase—flowing every second from all sorts of data feeds, and systematically being added by our curators and curation systems. The architecture of our cloud and desktop system allows both new data and new types of data (as well as natural language input for it) to be immediately available in the Wolfram Language as soon as it’s in the Wolfram Knowledgebase.

And between Version 11.1 and Version 11.2, there’ve been millions of updates to the Knowledgebase. There’ve also been some new types of data added. For example—after several years of development—we’ve now got well-curated data on all notable military conflicts, battles, etc. in history:

```
Entity["MilitaryConflict", "SecondPunicWar"][
 EntityProperty["MilitaryConflict", "Battles"]]
```

```
GeoListPlot[%]
```

Another thing that’s new in 11.2 is greatly enhanced predictive caching of data in the Wolfram Language—making it much more efficient to compute with large volumes of curated data from the Wolfram Knowledgebase.

By the way, Version 11.2 is the first new version to be released since the Wolfram Data Repository was launched. And through the Data Repository, 11.2 has access to nearly 600 curated datasets across a very wide range of areas. 11.2 also now supports functions like `ResourceSubmit`, for programmatically submitting data for publication in the Wolfram Data Repository. (You can also publish data yourself just using `CloudDeploy`.)

There’s a huge amount of data and types of computations available in Wolfram|Alpha—that with great effort have been brought to the level where they can be relied on, at least for the kind of one-shot usage that’s typical in Wolfram|Alpha. But one of our long-term goals is to take as many areas as possible and raise the level even higher—to the point where they can be built into the core Wolfram Language, and relied on for systematic programmatic usage.

In Version 11.2 an area where this has happened is ocean tides. So now there’s a function `TideData` that can give tide predictions for any of the tide stations around the world. I actually found myself using this function in a recent livecoding session I did—where it so happened that I needed to know daily water levels in Aberdeen Harbor in 1913. (Watch the Twitch recording to find out why!)

```
TideData[Entity["City", {"Aberdeen", "Maryland", "UnitedStates"}],
 "WaterLevel",
 DateRange[DateObject[{1913, 1, 1}], DateObject[{1913, 12, 31}], "Day"]]
```

```
DateListPlot[%]
```

`GeoGraphics` and related functions have built-in access to detailed maps of the world. They’ve also had access to low-resolution satellite imagery. But in Version 11.2 there’s a new function `GeoImage` that uses an integrated external service to provide full-resolution satellite imagery:

```
GeoImage[GeoDisk[Entity["Building", "ThePentagon::qzh8d"],
  Quantity[0.4, "Miles"]]]
```

```
GeoImage[GeoDisk[Entity["Building", "Stonehenge::46k59"],
  Quantity[250, "Feet"]]]
```

I’ve ended up using `GeoImage` in each of the two livecoding sessions I did just recently. Yes, in principle one could go to the web and find a satellite image of someplace, but it’s amazing what a different level of utility one reaches when one can programmatically get the satellite image right inside the Wolfram Language—and then maybe feed it to image processing, or visualization, or machine-learning functions. Like here’s a feature space plot of satellite images of volcanoes in California:

```
FeatureSpacePlot[
 GeoImage /@ GeoEntities[
   Entity["AdministrativeDivision", {"California", "UnitedStates"}],
   "Volcano"]]
```

We’re always updating and adding all sorts of geo data in the Wolfram Knowledgebase. And for example, as of Version 11.2, we’ve now got high-resolution geo elevation data for the Moon—which came in very handy for our recent precision eclipse computation project.

ListPlot3D[GeoElevationData[GeoDisk[Entity["MannedSpaceMission", "Apollo15"][EntityProperty["MannedSpaceMission", "LandingPosition"]], Quantity[10, "Miles"]]], Mesh -> None]

One of the obvious strengths of the Wolfram Language is its wide range of integrated and highly automated visualization capabilities. Version 11.2 adds some convenient new functions and options. An example is `StackedListPlot`, which, as its name suggests, makes stacked (cumulative) list plots:

StackedListPlot[RandomInteger[10, {3, 30}]]

There’s also `StackedDateListPlot`, here working with historical time series from the Wolfram Knowledgebase:

StackedDateListPlot[EntityClass["Country", {EntityProperty["Country", "Population"] -> TakeLargest[10]}][Dated["Population", All], "Association"], PlotLabels -> Automatic]

StackedDateListPlot[EntityClass["Country", {EntityProperty["Country", "Population"] -> TakeLargest[10]}][Dated["Population", All], "Association"], PlotLabels -> Automatic, PlotLayout -> "Percentile"]

One of our goals in the Wolfram Language is to make good stylistic choices as automatic as possible. And in Version 11.2 we’ve, for example, added a whole collection of plot themes for `AnatomyPlot3D`. You can always explicitly give whatever styling you want. But we provide many default themes. You can pick a classic anatomy book look (by the way, all these 3D objects are fully manipulable and computable):

AnatomyPlot3D[Entity["AnatomicalStructure", "LeftHand"], PlotTheme -> "Classic"]

Or you can go for more of a Gray’s Anatomy look:

AnatomyPlot3D[Entity["AnatomicalStructure", "LeftHand"], PlotTheme -> "Vintage"]

Or you can have a “scientific” theme that tries to make different structures as distinct as possible:

AnatomyPlot3D[Entity["AnatomicalStructure", "LeftHand"], PlotTheme -> "Scientific"]

The Wolfram Language has very strong computational geometry capabilities—that work on both exact surfaces and approximate meshes. It’s a tremendous algorithmic challenge to smoothly handle constructive geometry in 3D—but after many years of work, Version 11.2 can do it:

RegionIntersection[MengerMesh[2, 3], BoundaryDiscretizeRegion[Ball[{1, 1, 1}]]]

And of course, everything fits immediately into the rest of the system:

Volume[%]

Version 11 introduced a major new framework for large-scale audio processing in the Wolfram Language. We’re still developing all sorts of capabilities based on this framework, especially using machine learning. And in Version 11.2 there are a number of immediate enhancements. There are very practical things, like built-in support for `AudioCapture` under Linux. There’s also now the notion of a dynamic `AudioStream`, whose playback can be programmatically controlled.

Another new function is `SpeechSynthesize`, which creates audio from text:

SpeechSynthesize["hello"]

Spectrogram[%]

The Wolfram Language tries to let you get data wherever you can. One capability added for Version 11.2 is being able to capture images of your computer screen. (`Rasterize` has been able to rasterize complete notebooks for a long time; `CurrentNotebookImage` now captures an image of what’s visible from a notebook on your screen.) Here’s an image of my main (first) screen, captured as I’m writing this post:

CurrentScreenImage[1]

Of course, I can now do computation on this image, just like I would on any other image. Here’s a map of the inferred “saliency” of different parts of my screen:

ImageSaliencyFilter[CurrentScreenImage[1]] // Colorize

Part of developing the Wolfram Language is adding major new frameworks. But another part is polishing the system, and implementing new functions that make doing things in the system ever easier, smoother and clearer.

Here are a few functions we’ve added in 11.2. The first is simple, but useful: `TakeList`—a function that successively takes blocks of elements from a list:

TakeList[Alphabet[], {2, 5, 3, 4}]
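To make the semantics concrete, here is a minimal Python equivalent of this kind of block-taking operation (the helper name `take_list` is my own, not part of any library):

```python
from itertools import islice

def take_list(seq, sizes):
    """Successively take blocks of the given sizes from seq."""
    it = iter(seq)  # a single shared iterator, so each block continues where the last stopped
    return [list(islice(it, n)) for n in sizes]

take_list("abcdefghijklmn", [2, 5, 3, 4])
# → [['a', 'b'], ['c', 'd', 'e', 'f', 'g'], ['h', 'i', 'j'], ['k', 'l', 'm', 'n']]
```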

Then there’s `FindRepeat` (a “colleague” of `FindTransientRepeat`), that finds exact repeats in sequences—here for a Fibonacci sequence mod 10:

FindRepeat[Mod[Array[Fibonacci, 500], 10]]

Here’s a very different kind of new feature: an addition to `Capitalize` that applies the heuristics for capitalizing “important words” to make something “title case”. (Yes, for an individual string this doesn’t look so useful; but it’s really useful when you’ve got 100 strings from different sources to make consistent.)

Capitalize["a new kind of science", "TitleCase"]

Talking of presentation, here’s a simple but very useful new output format: `DecimalForm`. Numbers are normally displayed in scientific notation when they get big, but `DecimalForm` forces “grade school” number format, without scientific notation:

Table[16.5^n, {n, 10}]

DecimalForm[Table[16.5^n, {n, 10}]]

Another language enhancement added in 11.2—though it’s really more of a seed for the future—is `TwoWayRule`, input as <->. Ever since Version 1.0 we’ve had `Rule` (->), and over the years we’ve found `Rule` increasingly useful as an inert structure that can symbolically represent diverse kinds of transformations and connections. `Rule` is fundamentally one-way: “left-hand side goes to right-hand side”. But one also sometimes needs a two-way version—and that’s what `TwoWayRule` provides.

Right now `TwoWayRule` can be used, for example, to enter undirected edges in a graph, or pairs of levels to exchange in `Transpose`. But in the future, it’ll be used more and more widely.

Graph[{1 <-> 2, 2 <-> 3, 3 <-> 1}]

11.2 has all sorts of other language enhancements. Here’s an example of a somewhat different kind: the functions `StringToByteArray` and `ByteArrayToString`, which handle the somewhat tricky issue of converting between raw byte arrays and strings with various encodings (like UTF-8).
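For comparison, the same encoding round trip in Python, where the distinction between strings and raw bytes is similarly explicit (this is an analogy, not the Wolfram API):

```python
# string -> raw bytes under an explicit encoding, and back again
s = "naïve ✓"
b = s.encode("utf-8")

assert b.decode("utf-8") == s
assert len(b) > len(s)  # non-ASCII characters occupy multiple UTF-8 bytes
```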

How do you get the Wolfram Language to automatically initialize itself in some particular way? All the way from Version 1.0, you’ve been able to set up an init.m file to run at initialization time. But finally now in Version 11.2 there’s a much more general and programmatic way of doing this—using `InitializationValue` and related constructs.

It’s made possible by the `PersistentValue` framework introduced in 11.1. And what’s particularly nice about it is that it allows a whole range of “persistence locations”—so you can store your initialization information on a per-session, per-computer, per-user, or also (new in 11.2) per-notebook way.

Talking about things that go all the way to Version 1.0, here’s a little story. Back in Version 1.0, Mathematica (as it then was) pretty much always used to display how much memory was still available on your computer (and, yes, you had to be very careful back then because there usually wasn’t much). Well, somewhere along the way, as virtual memory became widespread, people started thinking that “available memory” didn’t mean much, and we stopped displaying it. But now, 25+ years later, modern operating systems have made the number meaningful again—and there’s a new function `MemoryAvailable` in Version 11.2. And, yes, for my computer the result has gained about 5 digits relative to what it had in 1988:

MemoryAvailable[]

There’ve been ways to do some kinds of asynchronous or “background” tasks in the Wolfram Language for a while, but in 11.2 there’s a complete systematic framework for it. There’s a thing called `TaskObject` that symbolically represents an asynchronous task. And there are basically now three ways such a task can be executed. First, there’s `CloudSubmit`, which submits the task for execution in the cloud. Then there’s `LocalSubmit`, which submits the task to be executed on your local computer, but in a separate subkernel. And finally, there’s `SessionSubmit`, which executes the task in idle time in your current Wolfram Language session.

When you submit a task, it’s off getting executed (you can schedule it to happen at particular times using `ScheduledTask`). The way you “hear back” from the task is through “handler functions”: functions that are set up when you submit the task to “handle” certain events that can occur during the execution of the task (completion, errors, etc.).

There are also functions like `TaskSuspend`, `TaskAbort`, `TaskWait` and so on, that let you interact with tasks “from the outside”. And, yes, when you’re doing big machine-learning trainings, for example, this comes in pretty handy.
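The overall pattern here—submit a task, attach handler functions, wait on completion—should feel familiar from other ecosystems. As a loose Python analogy (not the Wolfram API), the standard library expresses the same shape:

```python
from concurrent.futures import ThreadPoolExecutor

results = []
with ThreadPoolExecutor() as pool:
    # submit the task for asynchronous execution (loosely, SessionSubmit)
    task = pool.submit(sum, range(10**6))
    # attach a "handler function" that fires when the task completes
    task.add_done_callback(lambda t: results.append(t.result()))
# leaving the `with` block waits for the task to finish (loosely, TaskWait)

results  # [499999500000]
```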

We’re always keen to make the Wolfram Language as connected as it can be. And in Version 11.2 we’ve added a variety of features to achieve that. In Version 11 we introduced the `Authentication` option, which lets you give credentials in functions like `URLExecute`. Version 11 already allowed for `PermissionsKey` (a.k.a. an “app id”). In 11.2 you can now give an explicit username and password, and you can also use `SecuredAuthenticationKey` to provide OAuth credentials. It’s tricky stuff, but I’m pleased with how cleanly we’re able to represent it using the symbolic character of the Wolfram Language—and it’s really useful when you’re, for example, actually working with a bunch of internal websites or APIs.

Back in Version 10 (2014) we introduced the very powerful idea of using `APIFunction` to provide a symbolic specification for a web API—that could be deployed to the cloud using `CloudDeploy`. Then in Version 10.2 we introduced `MailReceiverFunction`, which responds not to web requests, but instead to receiving mail messages. (By the way, in 11.2 we’ve considerably strengthened `SendMail`, notably adding various authentication and address validation capabilities.)

In Version 11, we introduced the channel framework, which allows for publish-subscribe interactions between Wolfram Language instances (and external programs)—enabling things like chat, as well as a host of useful internal services. Well, in our continual path of automating and unifying, we’re introducing in 11.2 `ChannelReceiverFunction`—which can be deployed to the cloud to respond to whatever messages are sent on a particular channel.

In the low-level software engineering of the Wolfram Language we’ve used sockets for a long time. A few years ago we started exposing some socket functionality within the language. And now in 11.2 we have a full socket framework, supporting both traditional TCP sockets and modern ZeroMQ sockets.
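As a minimal illustration of the underlying mechanism (in Python, since raw stream sockets look much the same in every language), here is a local round trip over a connected socket pair:

```python
import socket

# a connected pair of local stream sockets
a, b = socket.socketpair()
a.sendall(b"ping")

# read until the full message has arrived (recv may return fewer bytes than asked)
data = b""
while len(data) < 4:
    data += b.recv(4 - len(data))

a.close()
b.close()
```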

Ever since the beginning, the Wolfram Language has been able to communicate with external C programs—using its native WSTP (Wolfram Symbolic Transfer Protocol). Years ago, J/Link and .NET/Link enabled seamless connection to Java and .NET programs. RLink did the same for R. Then there are things like `LibraryLink`, that allow direct connection to DLLs—or `RunProcess` for running programs from the shell.

But 11.2 introduces a new form of external program communication: `ExternalEvaluate`. `ExternalEvaluate` is for doing computation in languages which—like the Wolfram Language—support REPL-style input/output. The first two examples available in 11.2 are Python and NodeJS.

Here’s a computation done with NodeJS—though this would definitely be better done directly in the Wolfram Language:

ExternalEvaluate["NodeJS", "Math.sqrt(50)"]

Here’s a Python computation (yes, it’s pretty funky to use & for `BitAnd`):

ExternalEvaluate["Python", "[i & 10 for i in range(10)]"]

Of course, the place where things start to get useful is when one’s accessing large external code bases or libraries. And what’s nice is that one can use the Wolfram Language to control everything, and to analyze the results. `ExternalEvaluate` is in a sense a very lightweight construct—and one can routinely use it even deep inside some piece of Wolfram Language code.

There’s an infrastructure around `ExternalEvaluate`, aimed at connecting to the correct executable, appropriately converting types, and so on. There’s also `StartExternalSession`, which allows you to start a single external session, and then perform multiple evaluations in it.

So is there still more to say about 11.2? Yes! There are lots of new functions and features that I haven’t mentioned at all. Here’s a more extensive list:

But if you want to find out about 11.2, the best thing to do is to actually run it. I’ve been running pre-release versions of 11.2 on my personal machines for a couple of months. So by now I’m taking the new features and functions quite for granted—even though, earlier on, I kept on saying “this is really useful; how could we have not had this for 30 years?”. Well, realistically, it’s taken building everything we have so far—not only to provide the technical foundations, but also to seed the ideas for 11.2. But now our work on 11.2 is done, and 11.2 is ready to go out into the world—and deliver the latest results from our decades of research and development.

In our continued efforts to make it easier for students to learn and understand math and science concepts, the Wolfram|Alpha team has been hard at work this summer expanding our step-by-step solutions. Since the school year is just beginning, we’re excited to announce some new features.

We’re continuously working to expand our list of step-by-step topics in Wolfram|Alpha; in fact, we’ve nearly doubled the number of areas covered. We also continue to add more—over 60 topics have step-by-step coverage in domains such as algebra, calculus, geometry, linear algebra, discrete math, statistics and chemistry. Be sure to check out our examples page to see more areas that have step-by-step solutions. And with the new intermediate steps feature, expect the coverage to grow over the next few months.

It’s always nice to see a Wolfram|Alpha query covered by a sea of orange step-by-step solution buttons—something you’ll be seeing a lot more frequently as we continue to expand our collection of solution topics.

In addition to new areas of coverage, all step-by-step topics have been improved by adding more detail through expandable intermediate steps. Let’s use local extrema of *x*^3–10*x* + 1 as an example. Right off the bat, you’ll notice the new appearance.

In this new redesign, the steps are broken into their own blocks, the hints have a new look and there’s a new type of button that gives you the ability to drill down further and see the detailed math involved in arriving at the result of the step. In the example above, steps 3, 4 and 5 have such a button.

Let’s expand step 4:

This functionality is important for keeping the step-by-step solutions readable, while still providing all relevant information. In these steps for finding extrema, it’s important to know how to find *f*′(*x*), its roots and where it doesn’t exist. If these steps were laid out in a linear fashion, it would be easy to get lost in the steps pertinent to finding the extrema. The main steps are now the outline of how to find the extrema, and the intermediate steps provided by the new button give the specific details used in each step.

It’s sometimes the case that there are multiple details one would want within a step. In cases like this, only one set of intermediate steps is shown at a time, and clicking another will replace the one currently expanded.

Intermediate steps open a new door on the types of step-by-step solutions we can provide. In the coming months, we’ll be rolling out more content that utilizes this new feature.

So what are step-by-step solutions exactly? Wolfram|Alpha has pioneered step-by-step solutions for nearly 10 years, and we continue to be the industry standard. These solutions show how to get to an answer—not just what the answer is. Let’s ask for an integral. The main results shown are calculations, i.e. the answer to the query, supplementary information and even open code. But we can go further and see how one could find the answer. Let’s click the step-by-step solution button to see.

One might think these are precomputed solutions we grab from a large table, but that’s not the case. A curated table of solutions wouldn’t be feasible because there are an infinite number of math problems. Instead we start from scratch, building a stack of functionality meant to handle any query thrown at it. The Wolfram Language is the perfect language for a project like this. Under the hood we make use of the language’s full suite of mathematical capability, along with its highly expressive, symbolic paradigm.

If the Wolfram Language can compute something, can’t we just construct the step-by-step solutions by tracing through the algorithms used? In theory, yes. For example, the Wolfram Language computes most first-order derivatives much the way humans do—by repeatedly applying a large table of identities. Most of the time, however, there are faster and more sophisticated algorithms that a human couldn’t possibly execute by hand. When computing an integral, for example, most likely the Risch algorithm or a Mellin convolution of Meijer G-functions is being used. Instead, our step-by-step solutions take the approach a human would most likely take—that is, using heuristics to look for substitutions, integration by parts, etc.

The use of the Wolfram Language makes it possible to aim for the highest-quality step-by-step solutions. Furthermore, our solutions are reviewed by a team of PhDs who critique the accuracy, readability and didacticism of the solutions. As a result, Wolfram|Alpha can be thought of as a high-end virtual tutor—one with in-depth explanations that don’t miss a detail, for only $5/month. Over 80 major universities (including nearly the entire Ivy League) trust our solutions enough to have site licenses for Wolfram|Alpha Pro. This automatically puts this tutor in the palm of many students’ hands.

We’re excited to see step-by-step solutions grow, and we hope you are too. As always, your feedback is appreciated as we strive to make Wolfram|Alpha even more useful for students everywhere. Let us know what areas you’d like to see step-by-step solutions for!

Last week, I read Michael Berry’s paper, “Laplacian Magic Windows.” Over the years, I have read many interesting papers by this longtime Mathematica user, but this one stood out for its maximizing of the product of simplicity and unexpectedness. Michael discusses what he calls the magic window. For 70+ years, we have known about holograms, and now we know about magic windows. So what exactly is a magic window? Here is a sketch of the optics of one:

Parallel light falls onto a glass sheet that is planar on the one side and has some gentle surface variation on the other side (bumps in the above image are vastly overemphasized; the bumps of a real magic window would be minuscule). The light gets refracted by the magic window (the deviation angles of the refracted light rays are also overemphasized in the graphic) and falls onto a wall. Although the window bumpiness shows no recognizable shape or pattern, the light density variations on the wall show a clearly recognizable image. Starting with the image that one wants to see on the wall, one can always construct a window that shows the image one has selected. The variations in the thickness of the glass are assumed to be quite small, and the imaging plane is assumed to be not too far away so that the refracted light does not form caustics—as one sees them, for instance, at the bottom of a swimming pool in sunny conditions.

Now, how should the window surface look to generate any pre-selected image on the wall? It turns out that the image visible on the wall is the `Laplacian` of the window surface. Magic windows sound like magic, but they are just calculus (differentiation, to be precise) in action. Isn’t this a neat application of multivariate calculus? Schematically, these are the mathematical steps involved in a magic window.

Implementation-wise, the core steps are the following:

And while magic windows are a 2017 invention, their roots go back hundreds of years to so-called magic mirrors. Magic mirrors are the mirror equivalent of magic windows: they too can act as optical Laplace operators (see the following).

Expressed more mathematically: Let the height of the bumpy side of the glass surface be *f*(*x*,*y*). Then the intensity of the light on the wall is approximately Δ*f*(*x*,*y*), where Δ is the Laplacian ∂²/∂*x*² + ∂²/∂*y*². Michael calls such a window a “magic window.” It is magic because the glass surface height *f*(*x*,*y*) does not in any way resemble Δ*f*(*x*,*y*).

It sounds miraculous that a window can operate as a Laplace operator. So let’s do some numerical experiments to convince ourselves that this does really work. Let’s start with a goat that we want to use as the image to be modeled. We just import a cute-looking Oberhasli dairy goat from the internet.

The gray values of the pixels can be viewed as a function *h*: ℝ² → [0,1]. Interpolation allows us to use this function constructively.

Here is a 3D plot of the goat function `ifGoat`.

And we can solve the Poisson equation with the image as the right-hand side: Δ *f = image* using `NDSolveValue`.

We will use Dirichlet boundary conditions for now. (But the boundary conditions will not matter for the main argument.)

The Poisson equation solution is quite a smooth function; the inverse of the Laplace operator is a smoothing operation. No visual trace of the goat seems to be left.

Overall it is smooth, and it is also still smooth when zoomed in.

Even when repeatedly zoomed in.

The overall shape of the Poisson equation solution can be easily understood through the Green’s function of the Laplace operator.

We calculate and visualize the first few terms (individually) of the double sum from the Green’s function.

Taking 25^{2} terms into account, we have the following approximations for the Poisson equation solution and its Laplacian. The overall shape is the same as the previous numerical solution of the Poisson equation.

For this small number of Fourier modes, the outline of the goat is recognizable, but its details aren’t.

Applying the Laplace operator to the PDE solutions recovers (by construction) a version of the goat. Due to finite element discretization and numerical differentiation, the resulting goat is not quite the original one.

A faster and less discretization-dependent way to solve the Poisson equation uses the fast Fourier transform (FFT).
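The idea is language independent. Here is a sketch in Python/NumPy (using the discrete five-point Laplacian on a periodic grid, so that the solve-then-differentiate round trip is exact; a random array stands in for the goat image):

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
img = rng.standard_normal((N, N))   # stand-in for the goat image
img -= img.mean()                   # zero mean is required on a periodic grid

# eigenvalues of the 5-point discrete Laplacian under periodic boundary conditions
w = 2 * np.cos(2 * np.pi * np.arange(N) / N) - 2
lam = w[:, None] + w[None, :]
lam[0, 0] = 1.0                     # zero mode: fixes the arbitrary additive constant

# solve the Poisson equation  Δf = img  with the FFT
f = np.real(np.fft.ifft2(np.fft.fft2(img) / lam))

# applying the discrete Laplacian to f recovers the source image
lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
       np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
```

Because the same discrete operator is inverted and then reapplied, `lap` matches `img` to machine precision, while `f` itself is a much smoother array, just as in the plots above.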

This solution recovers the goat more faithfully. Here is the recovered goat after interpolating the function values.

Taking into account that any physical realization of a magic window made from glass will unavoidably have imperfections, a natural question to ask is: What happens if one adds small perturbations to the solution of the Poisson equation?

The next input modifies each grid point randomly by a perturbation of relative size 10^{-p}. We see that for this goat, the relative precision of the surface has to be on the order of 10^{-6} or better—a challenging but realizable mechanical accuracy.

To see how the goat emerges after differentiation (Δ = ∂²/∂*x*² + ∂²/∂*y*²), here are the partial derivatives.

And because we have ∂²/∂*x*² + ∂²/∂*y*² = (∂/∂*x* + ⅈ ∂/∂*y*)(∂/∂*x* – ⅈ ∂/∂*y*), we also look at the Wirtinger derivatives.

We could also just use a simple finite difference formula to get the goat back. This avoids any interpolation artifacts and absolutely faithfully reproduces the original goat.

The differentiation can even be carried out as an image processing operation.

So far, nothing really interesting. We integrated and differentiated a function. Let’s switch gears and consider the refraction of a set of parallel light rays on a glass sheet.

We consider the lower side of the glass sheet planar and the upper side slightly wavy, with an explicit description *height* = *f*(*x*,*y*). The index of refraction is *n*, and we follow the light rays (coming from below) up to the imaging plane at *height* = *Z*. Here is a small `Manipulate` that visualizes this situation for the example surface *f*(*x*,*y*) = 1 + ε (cos(α *x*) + cos(β *y*)).

We do want the upper surface of the glass nearly planar, so we use the factor ε in the previous equation.

The reason we want the upper surface mostly planar is that we want to avoid rays that “cross” near the surface and form caustics. We want to be in a situation where the density of the rays is position dependent, but the rays do not yet cross. This restricts the values of *n*, *Z* and the height of the surface modulation.

Now let’s do the refraction experiment with the previous solution of the Laplace equation as the height of the upper glass surface. To make the surface variations small, we multiply that solution by 0.0001.

We use the median refractive index of glass, *n* = 1.53.

Instead of using `lightRay`, we will use a compiled version for faster numerical evaluation.

In absolute units, say the variations in glass height are at most 1 mm; we look at the refracted rays a few meters behind the glass window. We will use about 3.2 million light rays (4² per pixel).

Displaying all endpoints of the rays gives a rather strong Moiré effect. But the goat is visible—a true refraction goat!

If we accumulate the number of points that arrive in a small neighborhood of the given points {*X*,*Y*} in the plane *height*=*Z*, the goat becomes much more visible. (This is what we would see if we observed the brightness of the light that goes through the glass sheet, assuming the intensities are additive.) To do the accumulation efficiently, we use the function `Nearest`.

Note that looking into the light that comes through the window would not show the goat because the light that would fall into the eye would mostly come from a small spatial region due to the mostly parallel light rays.

The appearance of the Laplacian of the surface of the glass sheet is not restricted to only parallel light. In the following, we use a point light source instead of parallel light. This means that the effect would also be visible by using artificial light sources, rather than sunlight with a magic window.

So why is the goat visible in the density of rays after refraction? At first it seems quite surprising, whether a parallel or a point light source shines on the window.

On second thought, one remembers Maxwell’s geometric meaning of the Laplace operator:

Δ*f*(*x*) = lim_{ρ→0} (2*d*/ρ²) (⟨*f*⟩_ρ(*x*) – *f*(*x*))

… where ⟨*f*⟩_ρ(*x*) indicates the average of *f* on a sphere centered at *x* with radius ρ, and *d* is the dimension. Here is a quick check of the last identity for two and three dimensions.

At a given point in the imaging plane, we add up the light rays from different points of the glass surface. This means we carry out some kind of averaging operation.

So let’s go back to the general refraction formula and have a closer look. Again we assume that the upper surface is mostly flat and that the parameter ε is small. The position {*X*,*Y*} of the light ray in the imaging plane can be calculated in closed form as a function of the surface *g*(*x*,*y*), the starting coordinates of the light ray {*x*,*y*}, the index of refraction *n* and the distance of the imaging plane *Z*.

That is a relatively complicated-looking formula. For a nearly planar upper glass surface (small ε), we have the following approximate coordinates for the {*X*,*Y*} coordinates of the imaging plane where we observe the light rays in terms of the coordinate {*x*,*y*} of the glass surface.

This means that in zeroth order we have {*X,Y*} ≈ {*x,y*}, and the deviation of the light ray position in the imaging plane is proportional to (*n*–1)*Z*. (Higher-order corrections to {*X*,*Y*} ≈ {*x*,*y*} could be obtained from Newton iterations, but we do not need them here.)

The density of rays is the inverse of the Jacobian determinant of the map from {*x*,*y*} to {*X*,*Y*}. (Think of the change-of-variables formula for one-to-one transforms in multivariate integration.)

Quantifying the size of the resulting expression shows that it is indeed quite complex. For a quadratic function in *x* and *y*, we can get some feeling for the density as a function of the physical parameters ε, *n* and *Z*, as well as the parameters that describe the surface, by varying them in an interactive demonstration. For large values of *n*, *Z* and ε, we see how caustics arise.

For nearly planar surfaces (first order in ε), the density is equal to the Laplacian of the surface heights (in *x*,*y* coordinates). This is the main “trick” in the construction of magic windows.

This explains why the goat appears as the intensity pattern of the light rays after refraction. This means glass sheets act effectively as a Laplace operator.
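Here is a small numerical check of this first-order Jacobian argument, sketched in Python/NumPy. The sinusoidal surface, the constants, and the small-angle "prism rule" sign convention (rays deviate toward the thicker glass, by roughly (n–1)Z times the surface gradient) are illustrative assumptions, not the blog's actual code:

```python
import numpy as np

# illustrative parameters: grid size, refractive index, screen distance, bump amplitude
N, n_glass, Z, eps = 128, 1.5, 1.0, 1e-3
dx = 2 * np.pi / N
x = np.arange(N) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
f = eps * np.sin(X) * np.sin(Y)      # gentle periodic surface height

# periodic central differences
d = lambda a, ax: (np.roll(a, -1, ax) - np.roll(a, 1, ax)) / (2 * dx)

# prism rule for small angles: a ray entering at (x, y) lands at (x, y) + (n - 1) Z grad f
c = (n_glass - 1) * Z
fx, fy = d(f, 0), d(f, 1)

# ray density on the screen = 1 / (Jacobian determinant of the map (x, y) -> (X, Y))
J = (1 + c * d(fx, 0)) * (1 + c * d(fy, 1)) - (c * d(fx, 1)) * (c * d(fy, 0))
density = 1.0 / J

# first-order prediction: density - 1 is proportional to -(n - 1) Z * Laplacian(f)
lap = d(fx, 0) + d(fy, 1)
corr = np.corrcoef((density - 1).ravel(), lap.ravel())[0, 1]
```

For a nearly planar surface, `density - 1` correlates almost perfectly (with a negative sign, from the chosen convention) with the Laplacian of the surface height, which is exactly the "glass sheet as Laplace operator" effect.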

Using Newton’s root-finding method, we could calculate the intensity in *X*,*Y* coordinates, but the expression explains heuristically why refraction on a nearly planar surface behaves like an optical Laplace operator. For more details, see this article.

Now we could model a better picture of the light ray density by pre-generating a matrix of points in the imaging plane using, say, 10 million rays, and record where they fall within the imaging plane. This time we model the solution of the Poisson equation using `ListDeconvolve`.

The approximate solution of the Poisson equation is not quite as smooth as the global solutions, but the goat is nevertheless invisible.

We adjust the brightness/darkness through a power law (a crude approximation for a Weber–Fechner perception).

If the imaging plane is too far away, we do get caustics (that remind me of the famous cave paintings from Lascaux).

If the image plane is even further away, the goat slowly becomes unrecognizable.

Although not practically realizable, we also show what the goat would look like for negative *Z*; now it seems much more sheep-like.

Here is a small animation showing the shape of the goat as a function of the distance *Z* of the imaging plane from the upper surface.

Even if the image is just made from a few lines (rather than each pixel having a non-white or non-black value), the solution of the Poisson equation is a smooth function, and the right-hand side is not recognizable in a plot of the solution.

But after refraction on a glass sheet (or applying the Laplacian), we see Homer quite clearly.

Despite the very localized curve-like structures that make the Homer image, the resulting Poisson equation solution again looks quite smooth. Here is the solution textured with its second derivative (the purple line will be used in the next input).

The next graphic shows a cross-section of the Poisson equation solution together with its (scaled) first and second derivatives with respect to *x* along the purple line of the last graphic. The lines are quite pronounced in the second derivative.

Let’s repeat a modification of the previous experiment to see how precise the surface would have to be to show Homer. We add some random waves to the Homer solution.

Again we see that the surface would have to be correct at the 10^{-6} level or better.

Or one can design a nearly planar window that will project one’s favorite physics equation on the wall when the Sun is shining.

When looking at the window, one will not notice any formulas. But this time, the solution of the Poisson equation has more overall structures.

But the refracted light will make physics equations. The resulting window is perfect for the entrance of, say, physics department buildings.

Now that we’re at the end of this post, let us mention that one can also implement the Laplacian through a mirror, rather than a window. See Michael Berry’s paper from 2006, “Oriental Magic Mirrors and the Laplacian Image” (see this article as well). Modifying the above function for refracting a light ray to reflecting a light ray and assuming a mostly flat mirror surface, we see the Laplacian of the mirror surface in the reflected light intensity.

Making transparent materials and mirrors of arbitrary shape, now called free-form optics, is considered the next generation of modern optics and will have wide applications in science, technology, architecture and art (see here). I think that a few years from now, when the advertising industry recognizes their potential, we will see magic windows with their unexpected images behind them everywhere.

The upcoming August 21, 2017, total solar eclipse is a fascinating event in its own right. It’s also interesting to note that on April 8, 2024, there will be another total solar eclipse whose path will cut nearly perpendicular to the one this year.

With a bit of styling work and zooming in, you can see that the city of Carbondale, Illinois, is very close to this crossing point. If you live there, you will be able to see a total solar eclipse twice in only seven years.

Let’s aim for additional precision in our results and compute where the intersection of these two lines is. First, we request the two paths and remove the `GeoPosition` heads.

Then we can use `RegionIntersection` to find the intersection of the paths and convert the result to a `GeoPosition`.
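For readers outside the Wolfram Language, the same computation can be sketched in plain Python. Over a region only a few kilometers across, treating (latitude, longitude) pairs as planar coordinates is a reasonable approximation, so a standard parametric segment-intersection test suffices. The two short path segments below are hypothetical stand-ins for the 2017 and 2024 center lines near Carbondale, not the real path data:

```python
def seg_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1-p2 and p3-p4, or None.
    Coordinates are treated as planar, a fair local approximation."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None  # parallel segments
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    s = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0 <= t <= 1 and 0 <= s <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def polyline_intersection(path_a, path_b):
    """First crossing point of two polylines, or None."""
    for a1, a2 in zip(path_a, path_a[1:]):
        for b1, b2 in zip(path_b, path_b[1:]):
            pt = seg_intersection(a1, a2, b1, b2)
            if pt is not None:
                return pt
    return None

# Hypothetical short (lat, lon) segments of the two center lines
path_2017 = [(37.80, -89.60), (37.60, -89.00)]
path_2024 = [(37.40, -89.50), (37.90, -89.10)]
print(polyline_intersection(path_2017, path_2024))
```

With these made-up segments the crossing lands near (37.69, -89.27), in the right neighborhood southwest of Carbondale.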

By zooming in even further and getting rid of some of the style elements that are not useful at high zoom levels, we can try to pin down the crossing point of the two eclipse center paths.

It appears that the optimal place to see the longest eclipse at both events is just southwest of Carbondale, Illinois, near the east side of Cedar Lake, along South Poplar Curve Road where it meets Salem Road, though anywhere in the above image will experience totality during both eclipses.

Because of this crossing point, I expect a lot of people will be planning to observe both the 2017 and 2024 solar eclipses. Don’t wait until the last minute to plan your trip!

So how often do eclipse paths intersect? We can find out with a little bit of data exploration. First, we need to get the dates, lines and types of all eclipses within a range of dates. Let’s limit ourselves to 40 years in the past to 40 years in the future for a total of about 80 years.

Keep only the eclipses for which path data is available.

Generate a set of all pairs of eclipses.

Now we define a function to test if a given pair of eclipse paths intersect.

Finally, we apply that function to all pairs of eclipses and keep only those for which an intersection occurs.
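The shape of this exploration translates readily to other languages. A minimal Python sketch, with toy two-point polylines standing in for the real eclipse paths: enumerate all pairs with `itertools.combinations`, then apply a cheap bounding-box prefilter (a necessary condition for intersection; the full geometric test from earlier would follow for the survivors):

```python
from itertools import combinations

def bbox(path):
    """Bounding box of a (lat, lon) polyline."""
    lats, lons = zip(*path)
    return min(lats), max(lats), min(lons), max(lons)

def bboxes_overlap(a, b):
    """Cheap necessary condition for two paths to intersect."""
    return not (a[1] < b[0] or b[1] < a[0] or a[3] < b[2] or b[3] < a[2])

# Toy stand-ins for eclipse center lines, keyed by date
paths = {
    "2017-08-21": [(44.0, -120.0), (33.0, -80.0)],
    "2024-04-08": [(26.0, -104.0), (46.0, -67.0)],
    "2021-12-04": [(-75.0, -60.0), (-70.0, 10.0)],
}

candidates = [(a, b) for a, b in combinations(paths, 2)
              if bboxes_overlap(bbox(paths[a]), bbox(paths[b]))]
print(candidates)  # only the 2017/2024 pair survives the prefilter
```

The Antarctic 2021 path is rejected immediately, leaving just the 2017/2024 pair for the more expensive exact test.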

It turns out that there are a little over one hundred pairs of eclipse paths that intersect during this timespan.

We can visualize the distribution of these eclipse paths to see that many occur over the oceans. The chances of an intersection being within driving distance of a given land location are far lower.

So take advantage of the 2017 and 2024 eclipses if you live near Carbondale, Illinois! It’s unlikely you will see such an occurrence anytime soon without making a lot more effort.

On August 21, 2017, an event will happen across parts of the Western Hemisphere that has not been seen by most people in their lifetimes. A total eclipse of the Sun will sweep across the face of the United States and nearby oceans. Although eclipses of this type are not uncommon across the world, the chance of one happening near you is quite small and is often a once-in-a-lifetime event unless you happen to travel the world regularly. This year, the total eclipse will be within driving distance of most people in the lower 48 states.

Total eclipses of the Sun are a result of the Moon moving in front of the Sun, from the point of view of an observer on the Earth. The Moon’s shadow is quite small and only makes contact with the Earth’s surface in a small region, as shown in the following illustration.

We can make use of 3D graphics in the Wolfram Language to create a more realistic visualization of this event. First, we will want to make use of a texture to make the Earth look more realistic.

We can apply the texture to a rotated spherical surface as follows.

We represent the Earth’s shadow as a cone.

The Moon can be represented by a simple `Sphere` that is offset from the center of the scene, while its orbit is a simple dashed 3D path. Both are parameterized since the Moon’s orbit will precess in time. It’s useful to be able to supply values to these functions to get the shadow to line up where we want.

As with the Earth’s shadow, we represent the Moon’s shadow as a cone.
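Why a cone, and why does it sometimes fail to reach the Earth? The umbra converges to a point by similar triangles, and its length can be estimated with a few approximate constants (the mean values below are rough assumptions, in kilometers):

```python
# Approximate mean values, in kilometers
R_SUN = 695_700
R_MOON = 1_737
SUN_MOON_DIST = 149_600_000  # roughly 1 AU

# Similar triangles: the umbral cone converges to a point at distance L
# behind the Moon, where r_moon / L = r_sun / (L + d)
umbra_length = SUN_MOON_DIST * R_MOON / (R_SUN - R_MOON)
print(round(umbra_length))   # roughly 374,000 km

# The Earth-Moon distance ranges from ~356,500 km (perigee) to ~406,700 km
# (apogee): when the cone falls short, the eclipse is annular, not total.
for dist in (356_500, 406_700):
    kind = "total" if umbra_length >= dist else "annular"
    print(dist, kind)
```

The umbra length of about 374,000 km sits between the perigee and apogee distances, which is why some central eclipses are total and others annular.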

Finally, we create some additional scene elements for use as annotations.

Now we just need to assemble the scene. We want the Moon to be directly in line with the Sun, so we use 0° as one of the parameters to achieve that. To precess the orbit in such a way that the shadow falls on North America, we use 70°. The rest is just styling information.

This means that due to the eccentric orbit, sometimes the Moon is further away from Earth than at other times; it also means that, due to the orbital inclination, it may be above or below the plane of Earth’s orbit around the Sun. Usually when the Moon passes “between” the Earth and the Sun, it is either “above” or “below” the Sun from the point of view of an observer on the Earth’s surface. The geometry is affected by other effects but, from time to time, the geometry is just right, and the Moon actually blocks part or all of the Sun’s disk. On August 21, 2017, the geometry will be “just right,” and from some places on Earth the Moon will cover at least part of the Sun.

Besides illustrating eclipse geometry, we can also make use of the Wolfram Language via `GeoGraphics` to create various maps showing where the eclipse is visible. With very little code, you can get elaborate results. For example, we can combine the functionality of `SolarEclipse` with `GeoGraphics` to show where the path of the 2017 total solar eclipse can be seen. Totality will be visible in a narrow strip that cuts right across the central United States.

So which states are going to be able to see the total solar eclipse? The following example can be used to determine that. First, we retrieve the polygon corresponding to the total phase for the upcoming eclipse.

Suppose you want to zoom in on a particular state to see a bit more detail. At this level, we are only interested in the path of totality and the center line. Once again, we use `SolarEclipse` to obtain the necessary elements.

Then we just use `GeoGraphics` to easily generate a map of the state in question—Wyoming, in this case.

We can make use of the Wolfram Data Repository to obtain additional eclipse information, such as timing of the eclipse at various locations.

We can use that data to construct annotated time marks for various points along the eclipse path.

Then we just combine the elements.

Of course, even if the eclipse is happening, there is no guarantee that you will be able to witness it. If the weather doesn’t cooperate, you will simply notice that it will get dark in the middle of the day. Using `WeatherData`, we can try to predict which areas are likely to have cloud cover on August 21. The following example is based on a similar Wolfram Community post.

The following retrieves all of the counties that intersect with the eclipse’s polygon bounds.

Most of the work is involved in looking at the `"CloudCoverFraction"` values for each county on August 21 for each year from 2001 to 2016 and finding the mean value for each county.
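The aggregation itself is simple once the data is in hand. A Python sketch with entirely hypothetical cloud-cover fractions (the real values come from `WeatherData`; these county names and numbers are placeholders):

```python
from statistics import mean

# Hypothetical cloud-cover fractions on August 21, one value per year
cloud_history = {
    "Jackson County, IL": [0.35, 0.20, 0.50, 0.10, 0.40],
    "Lincoln County, WY": [0.15, 0.10, 0.25, 0.30, 0.05],
}

# Mean historical cloud cover for each county
mean_cover = {county: mean(values) for county, values in cloud_history.items()}
print(mean_cover)
```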

We can then use `GeoRegionValuePlot` to plot these values. In general, it appears that most areas along this path have relatively low cloud cover on August 21, based on historical data.

The total solar eclipse on August 21, 2017, is a big deal due to the fact that the path carries it across a large area of the United States. Make every effort to see it! Take the necessary safety precautions and wear eclipse-viewing glasses. If your kids are in school already, see if they are planning any viewing events. Just make sure to plan ahead, since traffic may be very heavy in areas near totality. Enjoy it!

How far can one get in teaching computational thinking to high-school students in two weeks? Judging by the results of this year’s Wolfram High-School Summer Camp the answer is: remarkably far.

I’ve been increasingly realizing what an immense and unique opportunity there now is to teach computational thinking with the whole stack of technology we’ve built up around the Wolfram Language. But it was a thrill to see just how well this seems to actually work with real high-school students—and to see the kinds of projects they managed to complete in only two weeks.

We’ve been doing our high-school summer camp for 5 years now (as well as our 3-week Summer School for more experienced students for 15 years). And every time we do the camp, we figure out a little more. And I think that by now we really have it down—and we’re able to take even students who’ve never really been exposed to computation before, and by the end of the camp have them doing serious computational thinking—and fluently implementing their ideas by writing sometimes surprisingly sophisticated Wolfram Language code (as well as creating well-written notebooks and “computational essays” that communicate about what they’ve done).

Over the coming year, we’re going to be dramatically expanding our Computational Thinking Initiative, and working to bring analogs of the Summer Camp experience to as many students as possible. But the Summer Camp provides fascinating and important data about what’s possible.

So how did the Summer Camp actually work? We had a lot of applicants for the 40 slots we had available this year. Some had been pointed to the camp by parents, teachers, or previous attendees. But a large fraction had just seen mention of it in the Wolfram|Alpha sidebar. There were students from a range of kinds of schools around the US, and overseas (though we still have to figure out how to get more applicants from underserved populations). Our team had done interviews to pick the final students—and I thought the ones they’d selected were terrific.

The students’ past experience was quite diverse. Some were already accomplished programmers (almost always self-taught). Others had done a CS class or two. But quite a few had never really done anything computational before—even though they were often quite advanced in various STEM areas such as math. But almost regardless of background, it was striking to me how new the core concepts of computational thinking seemed to be to so many of the students.

How does one take an idea or a question about almost anything, and find a way to formulate it for a computer? To be fair, it’s only quite recently, with all the knowledge and automation that we’ve been able to build into the Wolfram Language, that it’s become realistic for kids to do these kinds of things for real. So it’s not terribly surprising that in their schools or elsewhere our students hadn’t really been exposed to such things before. But it’s now possible—and that means there’s a great new opportunity to seriously teach computational thinking to kids, and to position them to pursue the amazing range of directions that computational thinking is opening up.

It’s important, by the way, to distinguish between “computational thinking” and straight “coding”. Computational thinking is about formulating things in computational terms. Coding is about the actual mechanics of telling a computer what to do. One of our great goals with the Wolfram Language is to automate the process of coding as much as possible so people can concentrate on pure computational thinking. When one’s using lower-level languages, like C++ and Java, there’s no choice but to be involved with the detailed mechanics of coding. But with the Wolfram Language the exciting thing is that it’s possible to teach pure high-level computational thinking, without being forced to deal with the low-level mechanics of coding.

What does this mean in practice? I think it’s very empowering for students: as soon as they “get” a concept, they can immediately apply it, and do real things with it. And it was pretty neat at the Summer Camp to see how easily even students who’d never written programs before were able to express surprisingly sophisticated computational ideas in the Wolfram Language. Sometimes it seemed like students who’d learned a low-level language before were actually at a disadvantage. Though for me it was interesting a few times to witness the “aha” moment when a student realized that they didn’t have to break down their computations into tiny steps the way they’d been taught—and that they could turn some big blob of code they’d written into one simple line that they could immediately understand and extend.

The Summer Camp program involves several hours each day of lectures and workshops aimed at bringing students up to speed with computational thinking and how to express it in the Wolfram Language. But the real core of the program is every student doing an individual, original, computational thinking project.

And, yes, this is a difficult thing to orchestrate. But over the years we’ve been doing our Summer School and Summer Camp we’ve developed a very successful way of setting this up. There are a bunch of pieces to it, and the details depend on the level of the students. But here let’s talk about high-school students, and this year’s Summer Camp.

Right before the camp we (well, actually, I) came up with a list of about 70 potential projects. Some are quite specific, some are quite open-ended, and some are more like “metaprojects” (e.g. pick a dataset in the Wolfram Data Repository and analyze it). Some are projects that could at least in some form already have been done quite a few years ago. But many projects have only just become possible—this year particularly as a result of all our recent advances in machine learning.

I tried to have a range of nominal difficulty levels for the projects. I say “nominal” because even a project that can in principle be done in an easy way can also always be done in a more elaborate and sophisticated way. I wanted to have projects that ranged from the extremely well defined and precise (implement a precise algorithm of this particular type), to ones that involved wrangling data or machine learning training, to ones that were basically free-form and where the student got to define the objective.

Many of the projects in this list might seem challenging for high-school students. But my calculation (which in fact worked out well) was that with the technology we now have, all of them are within range.

It’s perhaps interesting to compare the projects with what I suggested for this year’s Summer School. The Summer School caters to more experienced students—typically at the college, graduate school or postdoc level. And so I was able to suggest projects that require deeper mathematical or software engineering knowledge—or are just bigger, with a higher threshold to achieve a reasonable level of success.

Before students start picking projects, it’s important that they understand what a finished project should look like, and what’s involved in doing it. So at the very beginning of the camp, the instructors went through projects from previous camps, and discussed what the “output” of a project should be. Maybe it’ll be an active website; maybe an interactive Demonstration; maybe it’ll be a research paper. It’s got to be possible to make a notebook that describes the project and its results, and to make a post about it for Wolfram Community.

After talking about the general idea of projects, and giving examples of previous ones, the instructors did a quick survey of this year’s suggestions list, filling in a few details of what the imagined projects actually were. After this, the students were asked to pick their top three projects from our list, and then invent two more potential projects of their own.

It’s always an interesting challenge to find the right project for each student—and it’s something I’ve personally been involved in at our Summer Camp for the past several years. (And, yes, it helps that I have decades of experience in organizing professional and research projects and figuring out the best people to do them.)

It’s taken us a few iterations, but here’s the approach we’ve found works well. First, we randomly break the students up into groups of a dozen or so. Then we meet with each group, going around the room and asking each student a little about themselves, their interests and goals—and their list of projects.

After we’re finished with each group, we meet separately and try to come up with a project for each student. Sometimes it’ll be one of the projects straight from our list. Sometimes it’ll be a project that the student themself suggested. And sometimes it’ll be some creative combination of these, or even something completely different based on what they said they were interested in.

After we think we’ve come up with a good project, the next step is to meet individually with each student and actually suggest it to them. It’s very satisfying that a lot of the time the students seem really enthused about the projects we end up suggesting. But sometimes it becomes clear that a project just isn’t a good fit—and then sometimes we modify it in real time, but more often we circle back later with a different suggestion.

Once the projects are set, we assign an appropriate mentor to each student, taking into account both the student and the subject of the project. And then things are off and running. We have various checkpoints, like that students have to write up descriptions of their projects and post them on the internal Summer Camp site.

I personally wasn’t involved in the actual execution of the projects (though I did have a chance to check in on a few of them). So it was pretty interesting for me to see at the end of the camp what had actually happened. It’s worth mentioning that our scheme is that mentors can make suggestions about projects, but all the final code in a project should be created by the student. And if one version of the project ends up being too difficult, it’s up to the mentor to simplify it. So however the final project comes out, it really is the student’s work.

Much of the time, the Summer Camp will be the first time students have ever done an original project. It could potentially seem daunting. But I think the fact that we give so many examples of other projects, and that everyone else at the camp is also doing a project, really helps. And in the end experiencing the whole process of going from the idea for a project to a real, finished project is incredibly educational—and seems to have a big effect on many of our students.

OK, so that’s the theory. So what actually happened at this year’s Summer Camp? Well, here are all the projects the students did, with the titles they gave them:

It’s a very interesting, impressive, and diverse list. But let me pick out a few semi-randomly to discuss in a bit more detail. Consider these as “case studies” for what high-school students can accomplish with the Wolfram Language in a couple of weeks at a summer camp.

One young man at our camp had quite a lot of math background, and told me he was interested in airplanes and flying, and had designed his own remote-control plane. I started thinking about all sorts of drone survey projects. But he didn’t have a drone with him—and we had to come up with a project that could actually be done in a couple of weeks. So I ended up suggesting the following: given two points on Earth, find how an airplane can get from one to the other by the shortest path that never needs to go above a certain altitude. (And, yes, a small-scale version of this would be relevant to things like drone surveying too.)

Here’s how the student did this project. First, he realized that one could think of possible flight paths as edges on a graph whose nodes are laid out on a grid on the Earth. Then he used the built-in `GeoElevationData` to delete nodes that couldn’t be visited because the elevation at that point was above the cutoff. Then he just used `FindShortestPath` to find the shortest path in the graph from the start to the end.

I thought this was a pretty clever solution. It was a nice piece of computational thinking to realize that the elements of paths could be thought of as edges on a graph with nodes removed. Needless to say, there were some additional details to get a really good result. First, the student added in diagonal connections on the grid, with appropriate weightings to still get the correct shortest path computation. And then he refined the path by successively merging line segments to better approximate a great-circle path, at each step using computational geometry to check that the path wouldn’t go through a “too-high” region.
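The same idea is easy to sketch in Python: a Dijkstra search over a grid in which cells above the altitude cutoff are simply impassable, with diagonal moves costing sqrt(2) so the weighting stays geometrically honest (matching the student's diagonal-connection refinement). The toy elevation grid below is made up for illustration; the real project used `GeoElevationData`:

```python
import heapq
from math import sqrt, inf

def shortest_path(elevation, cutoff, start, goal):
    """Dijkstra on a grid; cells above `cutoff` are impassable."""
    rows, cols = len(elevation), len(elevation[0])
    steps = [(-1, 0, 1), (1, 0, 1), (0, -1, 1), (0, 1, 1),
             (-1, -1, sqrt(2)), (-1, 1, sqrt(2)),
             (1, -1, sqrt(2)), (1, 1, sqrt(2))]
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, inf):
            continue  # stale heap entry
        r, c = node
        for dr, dc, w in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and elevation[nr][nc] <= cutoff:
                nd = d + w
                if nd < dist.get((nr, nc), inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy elevation grid with a high ridge down the middle (9s exceed the cutoff)
terrain = [[0, 0, 9, 0, 0],
           [0, 0, 9, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 9, 0, 0]]
print(shortest_path(terrain, cutoff=5, start=(0, 0), goal=(0, 4)))
```

The returned path dips through the single gap in the ridge, just as the flight paths in the project route around terrain above the altitude limit.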

You never know what people are going to come to Summer Camp with. A young man from New Zealand came to our camp with some overnight audio recordings from outside his house featuring occasional periods of (quite strange-sounding) squawking that were apparently the calls of one or more kiwi birds. What the young man wanted to do was automatic “kiwi voice recognition”, finding the calls, and perhaps distinguishing different birds.

I said I thought this wouldn’t be a particularly easy project, but he should try it anyway. Looking at what happened, it’s clear the project started out well. It was easy to pull out all intervals in his audio that weren’t just silence. But that broke up everything, including kiwi calls, into very small blocks. He solved that by the following interesting piece of code, that uses pattern matching to combine symbolic audio objects:
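The gist of that merging step, sketched in Python rather than Wolfram Language pattern matching: treat each non-silent stretch as a (start, end) interval in seconds, and glue together intervals separated by less than some gap threshold, so that one squawk broken into fragments becomes a single clip. The fragment times below are invented for illustration:

```python
def merge_intervals(intervals, max_gap):
    """Merge (start, end) intervals separated by less than max_gap seconds."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start - merged[-1][1] < max_gap:
            # Close enough to the previous clip: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Fragments of one squawk (seconds), plus a later isolated sound
fragments = [(12.0, 12.4), (12.5, 13.1), (13.2, 13.6), (40.0, 40.5)]
print(merge_intervals(fragments, max_gap=0.3))
```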

At this point it might just have worked to use unsupervised machine learning and `FeatureSpacePlot` to distinguish kiwi from non-kiwi sound clips. But machine learning is still quite a hit-or-miss business—and in this case it wasn’t a hit. So what did the student do? Well, he built himself a tiny lightweight user interface in a notebook, then started manually classifying sound clips. (Various instructors commented that it was fortunate he brought headphones…)

After classifying 200 clips, he used `Classify` to automatically classify all the other clips. He did a variety of transformations to the data—applying signal processing, generating a spectrogram, etc. And in the end he got his kiwi classifier to 82% accuracy: enough to make a reasonable first pass on finding kiwi calls—and going down a path to computational ornithology.

One young woman said she’d recently gotten a stress fracture in her foot that she was told was related to the force she was putting on it while running. She asked if she could make a computational model of what was going on. I have to say I was pessimistic about being able to do that in two weeks—and I suggested instead a project that I thought would be more manageable, involving studying possible gaits (walk, trot, etc.) for creatures with different numbers of legs. But I encouraged her to spend a little time seeing if she could do her original project—and I suggested that if she got to the stage of actually modeling bones, she could use our built-in anatomical data.

The next I knew it was a day before the end of the Summer Camp, and I was looking at what had happened with the projects… and I was really impressed! She’d found a paper with an appropriate model, understood it, and implemented it, and now she had an interactive demonstration of the force on a foot during walking or running. She’d even used the anatomical data to show a 3D image of what was happening.

She explained that when one walks there are two peaks in the force, but when one runs, there’s only one. And when I set her interactive demonstration for my own daily walking regimen I found out that (as she said was typical) I put a maximum force of about twice my weight on my foot when I walk.

At first I couldn’t tell if he was really serious… but one young man insisted he wanted to use machine learning to tell when a piece of fruit is ripe. As it happens, I had used pretty much this exact example in a blog post some time ago discussing the use of machine learning in smart contracts. So I said, “sure, why don’t you try it”. I saw the student a few times during the Summer Camp, curiously always carrying a banana. And what I discovered at the end of the camp was that that very banana was a key element of his project.

At first he searched the web for images of bananas described as “underripe”, “overripe”, etc., then arranged them using `FeatureSpacePlot`:

Then he realized that he could get more quantitative by first looking at where in color space the pixels of the banana image lay. The result was that he was actually able to define a “banana ripeness scale”, where, as he described it: “A value of one was assigned to bananas that were on the brink of spoilage. A value of zero was assigned to a green banana fresh off a tree. A value of 0.5 was assigned to the ‘perfect’ banana.” It’s a nice example of how something everyday and qualitative can be made computational.
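One simple way such a color-based scale could work (a sketch only; the anchor colors below are illustrative guesses, not the student's measured values) is to place green, yellow and brown reference colors in RGB space and blend their scores by inverse distance to the banana's average color:

```python
def ripeness(mean_rgb):
    """Map a banana's average color to a 0-1 ripeness score.
    0 = green, 0.5 = yellow ("perfect"), 1 = brown/spoiled.
    Anchor colors are illustrative guesses, not measured values."""
    anchors = [((80, 160, 60), 0.0),    # green
               ((230, 210, 60), 0.5),   # yellow
               ((120, 80, 40), 1.0)]    # brown
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Inverse-distance-weighted blend of the anchor scores
    weights = [1.0 / (dist2(mean_rgb, c) + 1e-9) for c, _ in anchors]
    total = sum(weights)
    return sum(w * s for w, (_, s) in zip(weights, anchors)) / total

print(round(ripeness((225, 205, 65)), 2))   # near yellow: about 0.5
print(round(ripeness((85, 155, 65)), 2))    # near green: close to 0
```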

For his project, the student made a “Banana Classifier” app that he deployed through the Wolfram Cloud. And he even had an actual banana to test it on!

One of my suggested projects was to implement “international or historical numeral systems”—the analogs of things like Roman numerals but for different cultures and times. One young woman fluent in Korean said she’d like to do this project, starting with Korean.

As it happens, our built-in `IntegerName` function converts to traditional Korean numerals. So she set herself the task of converting from Korean numerals. It’s an interesting algorithmic exercise, and she solved it with some nice, elegant code.
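The flavor of the exercise carries over to any language. Sino-Korean numerals combine digits with small multipliers (십 = 10, 백 = 100, 천 = 1000) into sections, which are then scaled by large multipliers (만 = 10^4, 억 = 10^8). A compact Python parser along those lines (a sketch of the general algorithm, not the student's code):

```python
DIGITS = {"일": 1, "이": 2, "삼": 3, "사": 4, "오": 5,
          "육": 6, "칠": 7, "팔": 8, "구": 9}
SMALL = {"십": 10, "백": 100, "천": 1000}
LARGE = {"만": 10**4, "억": 10**8}

def from_korean(numeral):
    """Parse a Sino-Korean numeral string into an integer."""
    total = section = digit = 0
    for ch in numeral:
        if ch in DIGITS:
            digit = DIGITS[ch]
        elif ch in SMALL:
            # An omitted digit before a small multiplier means 1 (e.g. 십 = 10)
            section += (digit or 1) * SMALL[ch]
            digit = 0
        elif ch in LARGE:
            # Everything accumulated so far multiplies the large unit
            total += (section + digit or 1) * LARGE[ch]
            section = digit = 0
        else:
            raise ValueError(f"unknown character: {ch}")
    return total + section + digit

print(from_korean("삼백이십일"))  # -> 321
print(from_korean("이만삼천"))    # -> 23000
```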

By that point, she was on a roll… so she decided to go on to Burmese and Thai. She tried to figure out Burmese from web sources… only to discover they were inconsistent… with the result that she ended up contacting a person who had an educational video about Burmese numerals, and eventually unscrambled the issue, wrote code to represent it, and then corrected the Wikipedia page about Burmese numerals. All in all, a great example of real-world algorithm curation. Oh, and she set up the conversions as a Wolfram Language microsite on the web.

Can machine learning tell if something is funny? One young man at the Summer Camp wanted to find out. So for his project he used our Reddit API connection to pull jokes from the Jokes subreddit, and (presumably) non-jokes from the AskReddit subreddit. It took a bit of cleanup and data wrangling… but then he was able to feed his training data straight into the `Classify` function, and generated a classifier from which he then built a website.

It’s a little hard to know how well it works outside of “Reddit-style humor”—but his anecdotal study at the Summer Camp suggested about a 90% success rate.

Different projects involve different kinds of challenges. Sometimes the biggest challenge is just to define the project precisely enough. Other times it’s to get—or clean—the data that’s needed. Still other times, it’s to find a way to interpret voluminous output. And yet other times, it’s to see just how elegantly some particular idea can be implemented.

One math-oriented young woman at the camp picked “implementing checksum algorithms” from my list. Such algorithms (used for social security numbers, credit card numbers, etc.) are very nicely and precisely defined. But how simply and elegantly can they be implemented in the Wolfram Language? It’s a good computational thinking exercise—that requires really understanding both the algorithms and the language. And for me it’s nice to be able to immediately read off from the young woman’s code just how these checksum algorithms work…
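Take the best-known of these, the Luhn algorithm used by credit card numbers, as an example of how compactly such checks can be expressed (this is the standard algorithm, sketched here in Python rather than the student's Wolfram Language code):

```python
def luhn_valid(number):
    """Luhn checksum: from the right, double every second digit,
    subtract 9 from doubled digits above 9, and require the total
    to be divisible by 10."""
    digits = [int(d) for d in str(number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # classic valid test number
print(luhn_valid("79927398714"))  # one digit off: invalid
```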

How should one plot a function in 4D? I had a project in my list about this, though I have to admit I hadn’t really figured out how it should be done. But, fortunately, a young man at the Summer Camp was keen to try to work on it. And with an interesting mixture of computational and mathematical thinking, he created `ParametricPlot4D`—then did a bunch of math to figure out how to render the results in what seemed like two useful ways: as an orthogonal projection, and as a stereographic projection. A `Manipulate` makes the results interactive—and they look pretty neat…
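The two projections he used are standard constructions, and their cores fit in a few lines. A Python sketch for points on the unit 3-sphere (just the coordinate maps, without the plotting):

```python
def stereographic_4d(x, y, z, w):
    """Project a point on the unit 3-sphere to 3D from the pole (0, 0, 0, 1)."""
    return (x / (1 - w), y / (1 - w), z / (1 - w))

def orthogonal_4d(x, y, z, w):
    """Orthogonal projection: simply drop the fourth coordinate."""
    return (x, y, z)

# Points on the unit 3-sphere
print(stereographic_4d(1, 0, 0, 0))    # the "equator" maps to the unit sphere
print(stereographic_4d(0, 0, 0, -1))   # the antipode of the pole maps to the origin
```

The stereographic map preserves angles but distorts sizes near the projection pole, while the orthogonal map flattens depth; seeing both side by side in a `Manipulate` is what makes the 4D behavior legible.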

In addition to my explicit list of project suggestions, I had a “meta suggestion”: take any dataset, for example from the new Wolfram Data Repository, and try to analyze and understand it. One student took a dataset about meteorite impacts; another about the recent Ebola outbreak in Africa. One young woman said she was interested in actuarial science—so I suggested that she look at something quintessentially actuarial: mortality data.

I suggested that maybe she could look at the (somewhat macabrely named) Death Master File. I wasn’t sure how far she’d get with it. But at the end of the camp I found out that she’d processed 90 million records—and successfully reduced them to derive aggregate survival curves for 25 different states and make an interactive Demonstration of the results. (Letting me conclude, for example, that my current probability of living to age 100 is 28% higher in Massachusetts than in Indiana…)

Each year when I make up a list of projects for the Summer Camp I wonder if there’ll be particular favorites. My goal is actually to avoid this, and to have as uniform a distribution of interest in the projects as possible. But this year “Use Machine Learning to Identify Polyhedra” ended up being a minor favorite. And one consequence was that a student had already started working on the project even before we’d talked to him—even though by that time the project was already assigned to someone else.

But actually the “recovery” was better than the original. Because we figured out a really nice alternative project that was very well suited to the student. The project was to take images of regular tilings, say from a book, and to derive a computational representation of them, suitable, say, for `LatticeData`.

The student came up with a pretty sophisticated approach, largely based on image processing, but with a dash of computational geometry, combinatorics and even some cluster analysis thrown in. First, he used fairly elaborate image processing to identify the basic unit in the tiling. Then he figured out how this unit was arranged to form the final tiling. It ended up being about 102 lines of fairly dense algorithmic code—but the result was a quite robust “tiling OCR” system that he also deployed on the web.

In my list I had a project “Identify buildings from satellite images”. A few students thought it sounded interesting, but as I thought about it some more, I got concerned that it might be really difficult. Still, one of our students was a capable young man who already seemed to know a certain amount about machine learning. So I encouraged him to give it a try. He ended up doing an impressive job.

He started by getting training data by comparing satellite images with street maps that marked buildings (and, conveniently, starting with the upcoming version of the Wolfram Language, not only street maps but also satellite images are built in):

Then he used `NetChain` to build a neural net (based on the classic LeNet network, but modified). And then he started trying to classify parts of images as “building” or “not building”.

The results weren’t at all bad. But so far they were only answering the question “is there a building in that square?”, not “where is there a building?”. So then—in a nice piece of computational thinking—the student came up with a further idea: just have a window pan across the image, at each step estimating the probability of building vs. not-building. The result was a remarkably accurate heat map of where buildings might be.
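The panning idea is worth pinning down with a sketch. In this Python miniature, a toy grid stands in for the satellite image and a pixel-density function stands in for the neural net; the structure (slide a window, score each position, collect a grid of scores) is the same:

```python
def heat_map(image, window, score):
    """Slide a square window across a 2D grid and record
    score(window contents) at each position."""
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(rows - window + 1):
        row_scores = []
        for c in range(cols - window + 1):
            patch = [image[r + i][c + j]
                     for i in range(window) for j in range(window)]
            row_scores.append(score(patch))
        out.append(row_scores)
    return out

# Toy "satellite image": 1 marks built-up pixels
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]

# Stand-in for the trained classifier: built-up fraction of the patch
density = lambda patch: sum(patch) / len(patch)
print(heat_map(img, window=2, score=density))
```

The output peaks at 1.0 over the "building" block and falls off around it, which is precisely the heat-map effect the student obtained at full scale with his neural net.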

It’d be a nice machine learning result for anyone. But as something done by a high-school student in two weeks I think it’s really impressive. And another great example of what’s now possible at an educational level with our whole Wolfram Language technology stack.

OK, so our Summer Camp was a success, and, with luck, the students from it are now successfully “launched” as independent computational thinkers. (The test, as far as I’m concerned, is whether when confronted with something in their education or their lives, they routinely turn to computational thinking, and just “write a program to solve the problem”. I’m hopeful that many of them now will. And, by the way, they immediately have “marketable skills”—like being able to do all sorts of data-science-related things.)

But how can we scale up what we’ve achieved with the Summer Camp? Well, we have a whole Computational Thinking Initiative that we’ve been designing to do just that. We’ll be rolling out different parts over the next little while, but one aspect will be doing other camps, and enabling other people to also do camps.

We’ve now got what amounts to an operations manual for how to “do a camp”. But suffice it to say that the core of it is to have instructors with good knowledge of the Wolfram Language (e.g. to the level of our Certified Instructor program), access to a bunch of great students, and use of a suitable venue. Two weeks seems to be a good length, though longer would work too. (Shorter will probably not be sufficient for students without prior experience to get to the point of doing a real project.)

Our camp is for high-school students (mainly aged 15 through 17). I think it would also be possible to do a successful camp for advanced middle-school students (maybe aged 12 and 13). And, of course, our long-running Summer School provides a very successful model for older students.

Beyond camps, we’ve had for some time a mentorships program which we will be streamlining and scaling up—helping students to work on longer-term projects. We’re also planning a variety of events and venues in which students can showcase their computational thinking work.

But for now it’s just exciting to see what was achieved in two weeks at this year’s Summer Camp. Yes, with the tech stack we now have, high-school students really can do serious computational thinking—that will make them not only immediately employable, but also positioned for what I think will be some of the most interesting career directions of the next few decades.

*To comment, please visit the copy of this post at the Stephen Wolfram Blog »*

Our goal with SystemModeler is to provide a state-of-the-art environment for modeling, simulation—and analytics—that leverages the Wolfram technology stack and builds on the Modelica standard for systems description (that we helped to develop).

SystemModeler is routinely used by the world’s engineering organizations on some of the world’s most complex engineering systems—as well as in fields such as life sciences and social science. We’ve been pursuing the development of what is now SystemModeler for more than 15 years, adding more and more sophistication to the capabilities of the system. And today we’re pleased to announce the latest step forward: SystemModeler 5.

As part of the 4.1, 4.2, 4.3 sequence of releases, we completely rebuilt and modernized the core computational kernel of SystemModeler. Now in SystemModeler 5, we’re able to build on this extremely strong framework to add a whole variety of new capabilities.

Some of the headlines include:

- Support for continuous media such as fluids and gases, using the latest Modelica libraries
- Almost 200 additional Modelica components, including Media, PowerConverters and Noise libraries
- Complete visual redesign of almost 6000 icons, for consistency and improved readability
- Support for new GUI workspaces optimized for different levels of development and presentation
- Almost 500 built-in example models for easy exploration and learning
- Modular reconfigurability, allowing different parts of models to be easily switched and modified
- Symbolic parametric simulation: the ability to create a fully computable object representing variations of model parameters
- Importing and exporting FMI 2 models for broad model interchange and system integration

A modeling project is greatly simplified if there is a library available for the topic. A library essentially provides the modeling language for that domain, consisting of components, sensors, sources and interfaces. Using these elements, building a model typically consists of dragging components, sensors and sources into a model space and then connecting their interfaces, as in this video:

SystemModeler already comes with an amazing collection of libraries for different domains, such as electrical (analog, digital, power), mechanical (1- or 3-dimensional), thermal (heat transfer, thermo-fluid flow), etc. And with the SystemModeler Library Store, there are many free and paid libraries that add to this collection.

With SystemModeler 5, we provide the latest version of the Modelica Standard Library (3.2.2) that we helped develop with industry and academic partners, adding almost 200 new components. Some of the new libraries include Media, PowerConverters and Noise, as well as several other sublibraries and utilities.

The Media library, which contains models for the behavior of common gases and liquids, is technically advanced and required major updates to the SystemModeler kernel. Its models range from ideal one-component gases to multicomponent media with phase transitions and nonlinear effects.

Let’s look at a basic example: have you ever noticed how when you use a compressed air duster the temperature of the can seems to drop rapidly? The following is a model of a 1 liter can at 75 psi and room temperature with a nozzle that restricts the flow out of the can into an ambient environment at atmospheric pressure.
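A model like this can also be simulated from the Wolfram Language through the SystemModeler link. The following is a hypothetical sketch: the model name and variable name are placeholders, not the actual example model:

```wolfram
(* Hypothetical sketch: simulate a canister-discharge model via the
   SystemModeler link; model and variable names are placeholders *)
Needs["WSMLink`"]
sim = WSMSimulate["DusterCanister", {0, 10}];  (* a 10-second blast *)
WSMPlot[sim, {"canister.medium.T"}]            (* gas temperature over time *)
```

Plotting the gas temperature over the simulation interval shows the rapid cooling you feel on the can.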

The temperature of the three canister parts above is dependent on the medium inside each of them. If you want to analyze how the canister behaves with a different gas, all individual components would need to change to reflect this. In SystemModeler 5, we have made it so that you can reconfigure the whole model instead, setting one value to switch out all the parts at once.

Here, two different gases with identical starting temperatures and pressures are shown. If you compare a canister containing normal air to one containing helium gas, you can see that the much denser air retains its temperature better than the less dense helium gas.

A similar example modeling the stresses caused by expanding gases in a tank can be found here.

A great feature when building models using the drag-drop-connect paradigm is that the resulting model diagram is an effective way to communicate and understand models—whether that is for presentations and reports or when interactively exploring and learning about a model.

SystemModeler diagrams are now even more effective for visual communication, as a result of the redesign of nearly 6000 icons for improved consistency and readability. See this video for a short overview:

In addition, we have designed GUI workspaces that are optimized for different scenarios, from presenter to developer. The main difference is the number of tools and information panels that are readily available. What is essential for advanced development is mostly clutter when presenting or exploring, for instance.

It is often very convenient to provide a single interface to reconfigure multiple aspects of a complex model. This makes it easy to provide a few major model scenarios, for instance. SystemModeler 5 fully supports reconfigurable models, including interactive support for configuration management, which is an industry first.

Let’s use the wheels of a car as an example: say you want to test how different tires perform in a tight corner on a slippery surface. Instead of changing each of the tire model components, we can simply select the desired model configuration from a drop-down menu.

We have gone from Bambi on ice to Formula 1 performance. To understand these tracks, see this video:

Download this example and try it out for yourself! Now, if only changing the tires on a real car were as quick.

When you build a model, you typically want it to have parameters that can be tuned or fitted. With SystemModeler 5, you can now immediately explore and optimize different parameter values efficiently, using the function `WSMParametricSimulateValue`.

For instance, in this example, we are considering rope length and release time for a medieval trebuchet. And using optimization functions, we can find the optimal parameter values that maximize the range of this ancient war machine. The “value” for this system is a whole trajectory, some of which you can see below. Notice that if you release the stone at the wrong time, the trajectory is actually going in the wrong direction (colored red below).
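The workflow can be sketched as follows: build a parametric function over the two parameters, then hand it to one of the Wolfram Language optimization functions. The model, variable and parameter names here are placeholders, and the exact calling convention of the resulting parametric function is sketched rather than exact:

```wolfram
(* Hedged sketch: maximize trebuchet range over rope length and
   release time; names and bounds are placeholder assumptions *)
Needs["WSMLink`"]
pf = WSMParametricSimulateValue["Trebuchet", "projectile.range",
   {"ropeLength", "releaseTime"}];
(* evaluate the simulated range at the end of the flight and optimize *)
NMaximize[{pf[l, t][10.], 1 <= l <= 5, 0 <= t <= 2}, {l, t}]
```

Because the parametric function avoids rebuilding the model for each parameter choice, each evaluation inside `NMaximize` is comparatively cheap.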

Using the function `WSMParametricSimulateValue`, you can also efficiently provide for interactive exploration of parameter spaces. Let us return to the earlier car example: say you want to analyze the turning car further and look at a few parameters in greater detail. Of particular interest could be exploring how speed and road conditions such as friction and turning radius affect the car’s ability to follow the desired trajectory.

The parametric simulate function can then be used in a `Manipulate`.
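A hypothetical sketch of that combination, with placeholder model, variable and parameter names, might look like this:

```wolfram
(* Hypothetical sketch: interactive exploration of a car model;
   model, variable and parameter names are placeholders *)
Needs["WSMLink`"]
pf = WSMParametricSimulateValue["TurningCar", "car.lateralDeviation",
   {"speed", "friction"}];
Manipulate[
 Plot[pf[v, mu][t], {t, 0, 20}],
 {{v, 15, "speed (m/s)"}, 5, 30},
 {{mu, 0.5, "friction"}, 0.05, 1}]
```

Since the model is compiled once up front, dragging the sliders only reruns the numerical simulation, which keeps the exploration responsive.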

The FMI (functional mock-up interface) standard is a broad industry standard for model exchange between simulation and system integration tools. The standard was originally proposed by Daimler and has been developed by a broad group of industry and academic partners, including us, over the course of several years. Typical use cases include:

- To import and export models between different tools, allowing different teams and companies to cooperate loosely
- To package models as a way to protect intellectual property, whether they rely on different tools or not
- To package models for system integration and perform hardware-in-the-loop simulation

SystemModeler 5 now fully supports both FMI 1.0 and FMI 2.0 for model import and export, and with some 100 tools listed as supporting or planning support for this standard, this is by far the easiest way to integrate workflows between different tools, teams and companies.

Getting back to the car example, the car is clearly struggling on the slippery surface. You could try to improve the cornering by adding an antilock braking system (ABS) to the model. You probably will not be able to get your hands on open source code for such a system, though, as ABS implementations are likely to be proprietary. However, we can import an FMU (functional mock-up unit) of the ABS system; FMUs are the actual objects exchanged under the FMI standard.

By importing an FMU of the ABS controller, you can connect it like any other component. In the following simulation, the driver tries to steer to the right while slamming on the brakes. Without ABS, the wheels quickly lock up and the car will keep heading straight ahead. The ABS will, however, employ cadence braking, preventing the wheels from locking up and allowing the car to steer to the right.

See What’s New for all the new features with examples. See features and examples for a more comprehensive presentation of SystemModeler as a modeling, simulation and analytics environment. To get going right away, get the trial here. If you are new to SystemModeler, get started with these videos.
