For example, when drawing a graphic, we usually specify the coordinates of its points or elements. But sometimes it’s simpler to express the graphic as a collection of relative displacements: move a distance *r* in a direction forming an angle *θ* with respect to the direction of the segment constructed in the previous step. This is known as turtle graphics in computer graphics, and is basically what the new function `AnglePath` does. If all steps have the same length, use `AnglePath`[{*θ*1,*θ*2,...}] to specify the angles. If each step has a different length, use `AnglePath`[{{r1,*θ*1},{r2,*θ*2}, ...}] to give the pairs {length, angle}. That’s it. Let’s see some results.

Turn 60 degrees to the left with respect to the previous direction six times. You get a hexagon:
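The missing output can be reproduced with a one-liner. A minimal sketch (positive angles turn counterclockwise, i.e. to the left):

```wolfram
(* six successive 60-degree left turns close up into a regular hexagon *)
Graphics[Line[AnglePath[ConstantArray[60 Degree, 6]]]]
```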

If the angle is 135 degrees and you repeat eight times, then this is the result. Note that 8 * 135° = 1080°, so we go around the center three times:

Suppose that we again keep turning the same positive angle *θ*, but we increase the lengths of the steps linearly, in increments *dr*, from 0 to 1:
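A sketch of this construction; the fixed turn angle and the increment *dr* here are illustrative choices, not the post's actual values:

```wolfram
(* keep turning by a fixed angle while the step length grows from 0 to 1 *)
With[{\[Theta] = 2., dr = 0.01},
 Graphics[Line[AnglePath[Table[{r, \[Theta]}, {r, 0, 1, dr}]]]]]
```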

Then we get curves that spiral outward:

If we choose the angles randomly, we get random walks on the plane:
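For example, with uniformly random turn angles at each step (the seed and step count are arbitrary):

```wolfram
(* 1000 unit steps, each turning by a uniformly random angle *)
SeedRandom[42];
Graphics[Line[AnglePath[RandomReal[{-Pi, Pi}, 1000]]]]
```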

Now let’s try to combine multiple `AnglePath` lines. Suppose that at each step we choose randomly between two possible angles and that the lengths obey a power law, so they get smaller in each iteration:
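A sketch of the idea; the particular pair of angles and the decay ratio are my own illustrative assumptions, and `step` is a hypothetical helper name:

```wolfram
(* choose one of two turn angles at random at each step, with the
   step lengths shrinking geometrically *)
step[n_] := Transpose[{0.8^Range[n], RandomChoice[{-Pi/4, Pi/2}, n]}]

Graphics[Line[AnglePath[step[10]]]]
```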

The result does not look very interesting yet:

But if we repeat the experiment 10 times, then we start seeing some structure:

We can construct all different lists of 10 choices of the two angles, using `Tuples` (there are 2^10=1024 possible lists). Replacing `Line` with `BSplineCurve` produces curved lines instead of straight segments. The result is a nice self-similar structure:
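A sketch of that construction, again with illustrative angle and length choices rather than the post's exact values:

```wolfram
(* all 2^10 = 1024 sequences of the two turn angles, drawn as splines *)
With[{choices = Tuples[{-Pi/4, Pi/2}, 10]},
 Graphics[
  BSplineCurve[AnglePath[Transpose[{0.8^Range[10], #}]]] & /@ choices]]
```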

`AnglePath` allows us to construct fractal structures easily, with very compact code. In fact, the code fits in a tweet! Here are two examples derived from Wolfram Tweet-a-Program:

These curious spirals are approximate Cornu spirals. With larger steps, they develop interesting substructure:

This was a quick introduction to how useful and fun the function `AnglePath` can be. `AnglePath` is supported in Version 10.1 of the Wolfram Language and *Mathematica*, and is rolling out soon in all other Wolfram products. Start using it now, and tweet your results through Wolfram TaP!

Download this post as a Computable Document Format (CDF) file.


I’ve often wondered what the biggest little polyhedra would look like. *Mathematica* 10 introduced `Volume[ConvexHullMesh[points]]`, so I thought I could solve the problem by picking points at random. Below is some code for picking, calculating, and showing a random little polyhedron. If the code is run a thousand times, one of the solutions will be better than the others. Here, I ran it three times. One of these three solutions is (probably) better than the other two.
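The post's original code isn't reproduced here, but the idea can be sketched as follows; `randomLittlePolyhedron` is a hypothetical helper name:

```wolfram
(* pick n random points, rescale so the diameter (the largest pairwise
   distance) is exactly 1, and measure the volume of the convex hull *)
randomLittlePolyhedron[n_] := Module[{pts, diam},
  pts = RandomReal[{-1, 1}, {n, 3}];
  diam = Max[EuclideanDistance @@@ Subsets[pts, {2}]];
  pts = pts/diam;
  {Volume[ConvexHullMesh[pts]], ConvexHullMesh[pts]}]

randomLittlePolyhedron[8]
```

Running this repeatedly and keeping the largest volume gives the crude baseline solutions described above.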

With randomly selected points, images like the following emerge from the better solutions. I posted these on Wolfram Community under the discussion Biggest Little Polyhedra, and got some useful comments from Robin Houston and Todd Rowland. I thought of using results from “Visualizing the Thomson Problem” as starting points. In the Thomson problem, electrons repel each other on a sphere. Twelve repelling points move to the vertices of an icosahedron, which is inefficient for a biggest little polyhedron (BLP), since all the longest distances pass through the center of the bounding sphere, just like the regular hexagon in the polygon case. I modified the Thomson code so that each point repelled both the other points and all of their polar opposites, and that gave good starting values.

Four points need a regular tetrahedron, with volume √2/12 ≈ 0.117851.

Five points need a unit equilateral triangle with a perpendicular unit line, with volume √3/12 ≈ 0.1443375; solved in 1976 [1].

I’ll use the name 6-BLP for the biggest little polyhedron on 6 points. In 2003, the volume for 6-BLP was solved to four decimals of accuracy [2, 3]. Graphics for 6-BLP and 7-BLP are below, with red lines for the unit diagonals.

To find these on my own, I first picked the best solutions out of a thousand tries, then used simulated annealing to improve them. Thousands of times, I introduced a tiny bit of randomness into each point of a good solution to try to find a better one. Then I introduced a tinier bit of randomness, over and over again. Some of these runs seemed to converge to a symmetrical configuration. For example, with seven points, the best solution seemed to be gradually drifting toward this polyhedron, with a value of *r* of about a half, where *r* represents the relative size of the upper triangle △456.

The exact volume can be determined by the tetrahedra defined by the points {{2,3,4,7}, {2,4,6,7}, {5,4,7,6}}, with the volumes of the first two tripled for symmetry. Look at the signed volumes of the tetrahedra, and switch any two vertex indices in any tetrahedron whose volume comes out negative.
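The parity check relies on the standard signed-volume formula. A sketch (`signedVolume` is a hypothetical helper name):

```wolfram
(* signed volume of a tetrahedron; a negative result means the four
   vertices are listed with the wrong parity *)
signedVolume[{p1_, p2_, p3_, p4_}] := Det[{p2 - p1, p3 - p1, p4 - p1}]/6
```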

After changing the parity of the last tetrahedron, we can calculate the exact *r* that gives the exact optimal volume. In the same way, we’ll also solve a few others.

The solution for 16-BLP takes more than a minute, so I’ve separated it out.

The first value in the solutions is the optimal volume as a `Root` object, and the second is the optimal value of *r*. Here’s a cleaner table of values.

That is far beyond anything I could have solved by hand. With random selection, annealing, symmetry spotting, `Solve`, and `Maximize`, I was also able to find the exact *n*-BLP (biggest little polyhedron) for *n* = 6, 7, 8, 9, and 16.

Here are a few views of the 8-BLP, with the red tubes showing unit-length diagonals.

Some views of 9-BLP:

Some views of 16-BLP:

The labeled 8-BLP below features perpendicular unit lines 1-2 and 3-4 above and below the origin. The labeled 9-BLP below features stacked triangles △123, △456, and △789.

The labeled 16-BLP below features a truncated tetrahedron on points 1-12 and added points 13-16.

Fairly complicated, right? With sphere point picking, a random longitude from –Pi to Pi and a random height from –1 to 1 produce uniformly distributed points on a unit sphere, so points on a unit sphere can be mapped back to points in the (–Pi to Pi, –1 to 1) rectangle. With the solutions for 8, 9, and 16, here’s what happens.
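This is the classic hat-box mapping; a sketch (`spherePoint` is a hypothetical helper name):

```wolfram
(* a uniform longitude in (-Pi, Pi) and a uniform height in (-1, 1)
   give a uniformly distributed point on the unit sphere *)
spherePoint[{lon_, z_}] :=
  {Sqrt[1 - z^2] Cos[lon], Sqrt[1 - z^2] Sin[lon], z}

Graphics3D[
 Point[spherePoint /@
   Transpose[{RandomReal[{-Pi, Pi}, 2000], RandomReal[{-1, 1}, 2000]}]],
 Boxed -> False]
```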

For 10-BLP, I haven’t been able to find an exact solution, but I did find a numerical solution to any desired level of accuracy. If anyone can find a `Root` object for this, let me know. Open up the notebook version to see a rather difficult equation in the Initialization section.

Here’s a labeled view of 10-BLP from two different perspectives.

In a similar fashion, a numerical solution for 11-BLP can be found.

Here are two views of 11-BLP.

Have I really solved these? Maybe not. For these particular symmetries, I’m sure I’ve found the local maximum. For example, here’s a function with a local maximum of 5 at the value 1.

Plot a bit more, and the global maximum of 32 can be found at value -2.
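The post's actual example function isn't shown here, but any function with these two critical values illustrates the trap. This quartic is my own construction, not the original; it has a local maximum of 5 at x = 1 and a global maximum of 32 at x = −2:

```wolfram
f[x_] := -(3/2) x^4 - x^3 + (15/2) x^2 - 6 x + 6

{f[1], f[-2]}          (* {5, 32} *)
Maximize[f[x], x]      (* {32, {x -> -2}} *)
Plot[f[x], {x, -3, 2}]
```

A narrow plot around x = 1 shows only the local maximum; widening the range reveals the true global maximum, which is exactly the risk with the annealing runs above.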

In the related Thomson problem, there’s a proof that the 12 vertices of an icosahedron give a minimum energy configuration for 12 electrons. But 7, 8, 9, 10, 11, and 13+ electrons are all considered unsolved. The Kepler conjecture suggested that hexagonal close packing was the densest arrangement of spheres, but a complete formal proof by Thomas Hales wasn’t completed until August 10, 2014. The densest packing of regular tetrahedra, the fraction 4000/4671 = .856347…, wasn’t found until July 27, 2010, and still isn’t proven maximal. Take any solution claims with a grain of salt; geometric maximization problems are notoriously tricky.

For months, my best solution for 11 points was in an asymmetric local maximum. Some (or most) of the following solutions are likely local instead of global, but which ones? With that caveat, we can look at best known solutions for 12 points and above.

12-BLP seems to be the point 12, the slightly messy heptagon 11-6-7-10-8-5-9, and the quadrilateral 1-4-3-2.

13-BLP seems to be the point 13, the slightly messy heptagon 12-8-10-6-7-9-11, and the messy pentagon 1-2-3-4-5.

My attempts to add symmetry have resulted in figures with a lower volume.

So far, my best solution for a 14-BLP seems to have a lot of symmetry, but I haven’t solved it. I spent some time optimizing a point-heptagon-heptagon solution for a 15-BLP, only to watch my randomizer “improve” it relentlessly by increasing volume while sacrificing symmetry.

17-BLP, 18-BLP—I believe 17-BLP has nice symmetry.

19-BLP, 20-BLP—20 is not the dodecahedron, due to inefficient unit lines through the center.

The snub cube and half the vertices of the great rhombicuboctahedron both have lower volume than 24-BLP.

21-BLP, 22-BLP—Lots of 7- and 9-pointed stars.

23-BLP, 24-BLP—My best 24-BLP has tetrahedral symmetry.

Here’s some of the symmetry in the current best 24-BLP. Points 1-12 and 13-24 have respective norms of 0.512593 and 0.515168.

16-BLP, 17-BLP—Letting the unit lines define polygons. 16-BLP contains many 7-pointed stars.

The same polyhedra shown as solid objects, using `ConvexHullMesh`. That’s BLP 9-10-11-12, 13-14-15-16, 17-18-19-20, 21-22-23-24.

Here’s the current table of the best known values.

Here are the best solutions I’ve found so far for 4 to 24 points.

Let the points be centered so that the maximal distance from the origin is as small as possible. The scatterplot below shows the distance from the origin for the vertices of each polyhedron, from 8 to 24 vertices.

*Mathematica* 10.1 managed to exactly solve 6-BLP, 7-BLP, 8-BLP, 9-BLP, and 16-BLP. It found numerical solutions to any desired accuracy for 10-BLP and 11-BLP, and made good progress for up to 24 points. That gives solutions for seven previously unsolved problems in combinatorial geometry, all by repeatedly evaluating `Volume[ConvexHullMesh[points]]`. What new features have you had success with?

[1] B. Kind and P. Kleinschmidt, “On the Maximal Volume of Convex Bodies with Few Vertices,” *Journal of Combinatorial Theory*, Series A, 21(1), 1976 pp. 124–128. doi:10.1016/0097-3165(76)90056-X

[2] A. Klein and M. Wessler, “The Largest Small n-dimensional Polytope with n+3 Vertices,” *Journal of Combinatorial Theory*, Series A, 102(2), 2003 pp. 401–409. doi:10.1016/S0097-3165(03)00054-2

[3] A. Klein and M. Wessler, “A Correction to ‘The Largest Small n-dimensional Polytope with n+3 Vertices,’” *Journal of Combinatorial Theory*, Series A, 112(1), 2005 pp. 173–174. doi:10.1016/j.jcta.2005.06.001

Download this post as a Computable Document Format (CDF) file.

If you’re looking for inspiration or just want a taste of what’s to come, videos from last year’s conference are available on our website. We saw an impressive array of presentations from both guests and our very own developers; below is a sampling of some of the most engaging innovations and projects that were shown.

Integrating *Mathematica* and the Unity Game Engine: Not Just for Games Anymore

George Danner

Wolfram Data Science Platform: Data Science in the Cloud

Dillon Tracy

Machine Learning

Etienne Bernard

Stitchcoding and Movie Color Maps

Theo Gray and Nina Paley

Rhino, Meet *Mathematica*

Chris Carlson

This year we have already introduced cutting-edge technologies to the Wolfram Language lineup, including the Wolfram Cloud, *SystemModeler* 4.1, Data Drop, and new *Mathematica* functionalities such as `ImageIdentify` and `GrammarRules`. We’ll see you in October to learn about all this and more!

The notion of a key in cryptography is similar to the way we use keys in everyday life, in that only someone with a certain key can perform a certain action. One very simple way of arranging this is to have a single key that is used to encrypt as well as decrypt, much like the locking and unlocking of a door:

This is called symmetric cryptography because both the party encrypting and the party decrypting share a single key. Symmetric cryptography is great for encrypting large amounts of information very securely and very efficiently, but there needs to be a preexisting relationship between both parties to be able to share a key in the first place. Asymmetric cryptography does not require a preexisting relationship—both parties have different keys, typically a public key and a private key. Something encrypted with the public key can only be decrypted with the private one:

Asymmetric cryptography is usually used for exchanging small amounts of information, for instance, a symmetric key that can then be used for transferring a larger message.

The new cryptography functions have been designed to be usable by those without a technical understanding of cryptography, while retaining enough flexibility to satisfy those who have one. For example, to generate a secure symmetric key, you could simply run this:
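A minimal sketch of that call:

```wolfram
(* produces a SymmetricKey object with securely generated key material *)
key = GenerateSymmetricKey[]
```

For the more specific variant mentioned next, `GenerateSymmetricKey` takes a `Method` option (for example, requesting a particular cipher such as AES-256); check the current documentation for the exact forms your version supports.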

But if you wanted to generate a more specific kind of key, you could do this:

This flexibility is carried over to encryption and decryption, as those functions can use any generated key:

In the Wolfram Language, encryption isn’t limited to text. You can actually encrypt any expression:
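A sketch of encrypting an arbitrary expression rather than a string (the `Graphics` expression here is just an example payload):

```wolfram
key = GenerateSymmetricKey[];
encrypted = Encrypt[key, Graphics[Circle[]]]  (* an EncryptedObject *)
Decrypt[key, encrypted]                       (* the original expression *)
```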

One of the main motivations for adding cryptographic functionality to the Wolfram Language was the arrival of the Wolfram Cloud. The cloud is inherently communication based. Both in the internal workings of the cloud and in almost anything utilizing it, cryptography has the potential to play an important role in ensuring those communications are secure. Hopefully our combination of ease of use and power, as well as the broad user base of the Wolfram Language, will result in lots of interesting new protocols as well as a more secure cloud.

The (new) cryptographic functionality is supported in Version 10.1 of the Wolfram Language and *Mathematica*, and is rolling out soon in all other Wolfram products.

*This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. This product includes cryptographic software written by Eric Young.*

Download this post as a Computable Document Format (CDF) file.

I’ve built systems that give computers all sorts of intelligence, much of it far beyond the human level. And for a long time we’ve been integrating all that intelligence into the Wolfram Language.

Now I’m excited to be able to say that we’ve reached a milestone: there’s finally a function called ImageIdentify built into the Wolfram Language that lets you ask, “What is this a picture of?”—and get an answer.

And today we’re launching the Wolfram Language Image Identification Project on the web to let anyone easily take any picture (drag it from a web page, snap it on your phone, or load it from a file) and see what ImageIdentify thinks it is:

It won’t always get it right, but most of the time I think it does remarkably well. And to me what’s particularly fascinating is that when it does get something wrong, the mistakes it makes mostly seem remarkably human.

It’s a nice practical example of artificial intelligence. But to me what’s more important is that we’ve reached the point where we can integrate this kind of “AI operation” right into the Wolfram Language—to use as a new, powerful building block for knowledge-based programming.

In a Wolfram Language session, all you need do to identify an image is feed it to the ImageIdentify function:

What you get back is a symbolic entity that the Wolfram Language can then do more computation with—like, in this case, figure out if you’ve got an animal, a mammal, etc. Or just ask for a definition:

Or, say, generate a word cloud from its Wikipedia entry:
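A sketch of the pipeline, where `img` stands for any photograph you supply:

```wolfram
(* ImageIdentify returns a symbolic Entity that downstream
   functions can compute with *)
entity = ImageIdentify[img];
WordCloud[DeleteStopwords[WikipediaData[CommonName[entity]]]]
```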

And if one had lots of photographs, one could immediately write a Wolfram Language program that, for example, gave statistics on the different kinds of animals, or planes, or devices, or whatever, that appear in the photographs.

With ImageIdentify built right into the Wolfram Language, it’s easy to create APIs, or apps, that use it. And with the Wolfram Cloud, it’s also easy to create websites—like the Wolfram Language Image Identification Project.

For me personally, I’ve been waiting a long time for ImageIdentify. Nearly 40 years ago I read books with titles like *The Computer and the Brain* that made it sound inevitable we’d someday achieve artificial intelligence—probably by emulating the electrical connections in a brain. And in 1980, buoyed by the success of my first computer language, I decided I should think about what it would take to achieve full-scale artificial intelligence.

Part of what encouraged me was that—in an early premonition of the Wolfram Language—I’d based my first computer language on powerful symbolic pattern matching that I imagined could somehow capture certain aspects of human thinking. But I knew that while tasks like image identification were also based on pattern matching, they needed something different—a more approximate form of matching.

I tried to invent things like approximate hashing schemes. But I kept on thinking that brains manage to do this; we should get clues from them. And this led me to start studying idealized neural networks and their behavior.

Meanwhile, I was also working on some fundamental questions in natural science—about cosmology and about how structures arise in our universe—and studying the behavior of self-gravitating collections of particles.

And at some point I realized that both neural networks and self-gravitating gases were examples of systems that had simple underlying components, but somehow achieved complex overall behavior. And in getting to the bottom of this, I wound up studying cellular automata and eventually making all the discoveries that became *A New Kind of Science*.

So what about neural networks? They weren’t my favorite type of system: they seemed a little too arbitrary and complicated in their structure compared to the other systems that I studied in the computational universe. But every so often I would think about them again, running simulations to understand more about the basic science of their behavior, or trying to see how they could be used for practical tasks like approximate pattern matching:

Neural networks in general have had a remarkable roller-coaster history. They first burst onto the scene in the 1940s. But by the 1960s, their popularity had waned, and the word was that it had been “mathematically proven” that they could never do anything very useful.

It turned out, though, that that was only true for one-layer “perceptron” networks. And in the early 1980s, there was a resurgence of interest, based on neural networks that also had a “hidden layer”. But despite knowing many of the leaders of this effort, I have to say I remained something of a skeptic, not least because I had the impression that neural networks were mostly getting used for tasks that seemed like they would be easy to do in lots of other ways.

I also felt that neural networks were overly complex as formal systems—and at one point even tried to develop my own alternative. But still I supported people at my academic research center studying neural networks, and included papers about them in my *Complex Systems* journal.

I knew that there were practical applications of neural networks out there—like for visual character recognition—but they were few and far between. And as the years went by, little of general applicability seemed to emerge.

Meanwhile, we’d been busy developing lots of powerful and very practical ways of analyzing data, in *Mathematica* and in what would become the Wolfram Language. And a few years ago we decided it was time to go further—and to try to integrate highly automated general machine learning. The idea was to make broad, general functions with lots of power; for example, to have a single function Classify that could be trained to classify any kind of thing: say, day vs. night photographs, sounds from different musical instruments, urgency level of email, or whatever.

We put in lots of state-of-the-art methods. But, more importantly, we tried to achieve complete automation, so that users didn’t have to know anything about machine learning: they just had to call Classify.

I wasn’t initially sure it was going to work. But it does, and spectacularly.

People can give training data on pretty much anything, and the Wolfram Language automatically sets up classifiers for them to use. We’re also providing more and more built-in classifiers, like for languages, or country flags:

And a little while ago, we decided it was time to try a classic large-scale classifier problem: image identification. And the result now is ImageIdentify.

What is image identification really about? There are some number of named kinds of things in the world, and the point is to tell which of them a particular picture is of. Or, more formally, to map all possible images into a certain set of symbolic names of objects.

We don’t have any intrinsic way to describe an object like a chair. All we can do is just give lots of examples of chairs, and effectively say, “Anything that looks like one of these we want to identify as a chair.” So in effect we want images that are “close” to our examples of chairs to map to the name “chair”, and others not to.

Now, there are lots of systems that have this kind of “attractor” behavior. As a physical example, think of a mountainscape. A drop of rain may fall anywhere on the mountains, but (at least in an idealized model) it will flow down to one of a limited number of lowest points. Nearby drops will tend to flow to the same lowest point. Drops far away may be on the other side of a watershed, and so will flow to other lowest points.

The drops of rain are like our images; the lowest points are like the different kinds of objects. With raindrops we’re talking about things physically moving, under gravity. But images are composed of digital pixels. And instead of thinking about physical motion, we have to think about digital values being processed by programs.

And exactly the same “attractor” behavior can happen there. For example, there are lots of cellular automata in which one can change the colors of a few cells in their initial conditions, but still end up in the same fixed “attractor” final state. (Most cellular automata actually show more interesting behavior, that doesn’t go to a fixed state, but it’s less clear how to apply this to recognition tasks.)

So what happens if we take images and apply cellular automaton rules to them? In effect we’re doing image processing, and indeed some common image processing operations (both done on computers and in human visual processing) are just simple 2D cellular automata.

It’s easy to get cellular automata to pick out certain features of an image—like blobs of dark pixels. But for real image identification, there’s more to do. In the mountain analogy, we have to “sculpt” the mountainscape so that the right raindrops flow to the right points.

So how do we do this? In the case of digital data like images, it isn’t known how to do this in one fell swoop; we only know how to do it iteratively, and incrementally. We have to start from a base “flat” system, and gradually do the “sculpting”.

There’s a lot that isn’t known about this kind of iterative sculpting. I’ve thought about it quite extensively for discrete programs like cellular automata (and Turing machines), and I’m sure something very interesting can be done. But I’ve never figured out just how.

For systems with continuous (real-number) parameters, however, there’s a great method called back propagation—that’s based on calculus. It’s essentially a version of the very common method of gradient descent, in which one computes derivatives, then uses them to work out how to change parameters to get the system one is using to better fit the behavior one wants.

So what kind of system should one use? A surprisingly general choice is neural networks. The name makes one think of brains and biology. But for our purposes, neural networks are just formal computational systems that consist of compositions of multi-input functions with continuous parameters and discrete thresholds.

How easy is it to make one of these neural networks perform interesting tasks? In the abstract, it’s hard to know. And for at least 20 years my impression was that in practice neural networks could mostly do only things that were also pretty easy to do in other ways.

But a few years ago that began to change. And one started hearing about serious successes in applying neural networks to practical problems, like image identification.

What made that happen? Computers (and especially linear algebra in GPUs) got fast enough that—with a variety of algorithmic tricks, some actually involving cellular automata—it became practical to train neural networks with millions of neurons, on millions of examples. (By the way, these were “deep” neural networks, no longer restricted to having very few layers.) And somehow this suddenly brought large-scale practical applications within reach.

I don’t think it’s a coincidence that this happened right when the number of artificial neurons being used came within striking distance of the number of neurons in relevant parts of our brains.

It’s not that this number is significant on its own. Rather, it’s that if we’re trying to do tasks—like image identification—that human brains do, then it’s not surprising if we need a system with a similar scale.

Humans can readily recognize a few thousand kinds of things—roughly the number of picturable nouns in human languages. Lower animals likely distinguish vastly fewer kinds of things. But if we’re trying to achieve “human-like” image identification—and effectively map images to words that exist in human languages—then this defines a certain scale of problem, which, it appears, can be solved with a “human-scale” neural network.

There are certainly differences between computational and biological neural networks—although after a network is trained, the process of, say, getting a result from an image seems rather similar. But the methods used to train computational neural networks are significantly different from what it seems plausible for biology to use.

Still, in the actual development of ImageIdentify, I was quite shocked at how much was reminiscent of the biological case. For a start, the number of training images—a few tens of millions—seemed very comparable to the number of distinct views of objects that humans get in their first couple of years of life.

There were also quirks of training that seemed very close to what’s seen in the biological case. For example, at one point, we’d made the mistake of having no human faces in our training. And when we showed a picture of Indiana Jones, the system was blind to the presence of his face, and just identified the picture as a hat. Not surprising, perhaps, but to me strikingly reminiscent of the classic vision experiment in which kittens reared in an environment of vertical stripes are blind to horizontal stripes.

Probably much like the brain, the ImageIdentify neural network has many layers, containing a variety of different kinds of neurons. (The overall structure, needless to say, is nicely described by a Wolfram Language symbolic expression.)

It’s hard to say meaningful things about much of what’s going on inside the network. But if one looks at the first layer or two, one can recognize some of the features that it’s picking out. And they seem to be remarkably similar to features we know are picked out by real neurons in the primary visual cortex.

I myself have long been interested in things like visual texture recognition (are there “texture primitives”, like there are primary colors?), and I suspect we’re now going to be able to figure out a lot about this. I also think it’s of great interest to look at what happens at later layers in the neural network—because if we can recognize them, what we should see are “emergent concepts” that in effect describe classes of images and objects in the world—including ones for which we don’t yet have words in human languages.

Like many projects we tackle for the Wolfram Language, developing ImageIdentify required bringing many diverse things together. Large-scale curation of training images. Development of a general ontology of picturable objects, with mapping to standard Wolfram Language constructs. Analysis of the dynamics of neural networks using physics-like methods. Detailed optimization of parallel code. Even some searching in the style of *A New Kind of Science* for programs in the computational universe. And lots of judgement calls about how to create functionality that would actually be useful in practice.

At the outset, it wasn’t clear to me that the whole ImageIdentify project was going to work. And early on, the rate of utterly misidentified images was disturbingly high. But one issue after another got addressed, and gradually it became clear that finally we were at a point in history when it would be possible to create a useful ImageIdentify function.

There were still plenty of problems. The system would do well on certain things, but fail on others. Then we’d adjust something, and there’d be new failures, and a flurry of messages with subject lines like “We lost the anteaters!” (about how pictures that ImageIdentify used to correctly identify as anteaters were suddenly being identified as something completely different).

Debugging ImageIdentify was an interesting process. What counts as reasonable input? What’s reasonable output? How should one make the choice between getting more-specific results, and getting results that one’s more certain aren’t incorrect (just a dog, or a hunting dog, or a beagle)?

Sometimes we saw things that at first seemed completely crazy. A pig misidentified as a “harness”. A piece of stonework misidentified as a “moped”. But the good news was that we always found a cause—like confusion from the same irrelevant objects repeatedly being in training images for a particular type of object (e.g. “the only time ImageIdentify had ever seen that type of Asian stonework was in pictures that also had mopeds”).

To test the system, I often tried slightly unusual or unexpected images:

And what I found was something very striking, and charming. Yes, ImageIdentify could be completely wrong. But somehow the errors seemed very understandable, and in a sense very human. It seemed as if what ImageIdentify was doing was successfully capturing some of the essence of the human process of identifying images.

So what about things like abstract art? It’s a kind of Rorschach-like test for both humans and machines—and an interesting glimpse into the “mind” of ImageIdentify:

Something like ImageIdentify will never truly be finished. But a couple of months ago we released a preliminary version in the Wolfram Language. And today we’ve updated that version, and used it to launch the Wolfram Language Image Identification Project.

We’ll continue training and developing ImageIdentify, not least based on feedback and statistics from the site. Like for Wolfram|Alpha in the domain of natural language understanding, without actual usage by humans there’s no real way to realistically assess progress—or even to define just what the goals should be for “natural image understanding”.

I must say that I find it fun to play with the Wolfram Language Image Identification Project. It’s satisfying after all these years to see this kind of artificial intelligence actually working. But more than that, when you see ImageIdentify respond to a weird or challenging image, there’s often a certain “aha” feeling, like one was just shown in a very human-like way some new insight—or joke—about an image.

Underneath, of course, it’s just running code—with very simple inner loops that are pretty much the same as, for example, in my neural network programs from the beginning of the 1980s (except that now they’re Wolfram Language functions, rather than low-level C code).

It’s a fascinating—and extremely unusual—example in the history of ideas: neural networks were studied for 70 years, and repeatedly dismissed. Yet now they are what has brought us success in such a quintessential example of an artificial intelligence task as image identification. I expect the original pioneers of neural networks—like Warren McCulloch and Walter Pitts—would find little surprising about the core of what the Wolfram Language Image Identification Project does, though they might be amazed that it’s taken 70 years to get here.

But to me the greater significance is what can now be done by integrating things like ImageIdentify into the whole symbolic structure of the Wolfram Language. What ImageIdentify does is something humans learn to do in each generation. But symbolic language gives us the opportunity to represent shared intellectual achievements across all of human history. And making all these things computational is, I believe, something of monumental significance, that I am only just beginning to understand.

But for today, I hope you will enjoy the Wolfram Language Image Identification Project. Think of it as a celebration of where artificial intelligence has reached. Think of it as an intellectual recreation that helps build intuition for what artificial intelligence is like. But don’t forget the part that I think is most exciting: it’s also practical technology, that you can use here and now in the Wolfram Language, and deploy wherever you want.

Well, I could consult *The Old Farmer’s Almanac* for the last frost date, but how accurate is it for my specific locale? What about the variability? Might there be a trend to earlier dates due to global warming? To answer these questions, I need historical temperature data. The Wolfram Language has weather data available, so maybe I can do a little data mining and come up with my own planting chart, and you could do the same for your town.

Let’s begin by defining where I’m located, using free-form input.
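The original post shows this input as an image; a minimal sketch of the idea, with "Trenton, New Jersey" standing in for your own location, might look like:

```wolfram
(* interpret a free-form location string as a city entity;
   "Trenton, New Jersey" is a stand-in for your own town *)
location = Interpreter["City"]["Trenton, New Jersey"]
```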

And now, let’s get the temperature data. There are several kinds available, but some of them may not be available everywhere. Since we’re interested in temperatures that go below freezing, we’ll use “`MinTemperature`”.
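A hedged sketch of the data request (the date range here is an assumption, not necessarily the author's exact one):

```wolfram
(* pull daily minimum temperatures as a TemporalData object *)
tempData = WeatherData[location, "MinTemperature",
   {{1930, 1, 1}, {2015, 1, 1}, "Day"}]
```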

So, what does the temperature data look like?
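The plot itself, shown as an image in the original, is a one-liner since `DateListPlot` handles `TemporalData` directly:

```wolfram
DateListPlot[tempData]
```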

There is a gap in the data in the late 1940s, but that should not present a problem.

Let’s pick a year at random and look at it in detail, just to get a feel for the data. Using `Manipulate`, we can pick out individual days and read off the minimum temperature.
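A hedged sketch of such a `Manipulate` (the `"PathFunction"` property interpolates the series at a given absolute time; the details are an assumption, not the author's exact code):

```wolfram
year1990 = WeatherData[location, "MinTemperature",
   {{1990, 1, 1}, {1990, 12, 31}, "Day"}];
Manipulate[
 DateListPlot[year1990,
  GridLines -> {{AbsoluteTime[{1990, 1, day}]}, {0}},
  PlotLabel -> year1990["PathFunction"][AbsoluteTime[{1990, 1, day}]]],
 {day, 1, 365, 1}]
```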

We can readily observe the annual variation, and also see that it’s not perfectly sinusoidal: the low temperature for the months of January through April seems to be flat. There is also a good deal of day-to-day variation. Using the slider, we find that the last frost date in the spring of 1990 was April 19 and the first frost date in the autumn was October 27.

As we saw above, the minimum temperature data is a time series, actually a `TemporalData` object. Many functions in the Wolfram Language know how to handle these objects automatically, and we will be making use of them rather than converting the data to a list of {date,temp} pairs using `Normal`. `TemporalData` objects can also act as functions so that you can get a property of the object with the idiom `temporalDataObject[property]`, where property is something like `"FirstDate"`. A very useful function for time series is `MovingMap`.
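For example, a few of the idioms just mentioned, sketched on our data:

```wolfram
(* TemporalData objects respond to properties directly... *)
tempData["FirstDate"]
tempData["LastDate"]

(* ...and MovingMap works on them without conversion;
   here, a 30-day moving average of the daily lows *)
MovingMap[Mean, tempData, Quantity[30, "Days"]]
```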

Let’s “fold” the data back onto itself on an annual basis and plot it. To do this, we’ll convert the long time series into a list of year-long time series. If there is no data for a given year (remember the gap mentioned above?), then `Missing[]` will be returned by `extractYear`.

To get the individual time series to plot over the same time interval, we can convert the actual dates to the number of seconds from the beginning of that year. A second argument is included in the function so that we can start the year in any month, which is convenient for working with data from a location south of the equator. Note that we’re not doing anything special when February 29 is present in leap years.

This plot shows that there is wider variation in the daily low temperatures during the winter than in the summer, and that the lowest low occurs near the end of January and the highest low occurs near the end of July. The frost-free period is early May to early October.

Now, to get the last frost date each spring and the first frost date each autumn, we need to work with yearly windows using `MovingMap`. We scan the window over each year, take the dates where the minimum temperature is less than zero, and take the last or first occurrence for the spring or autumn, respectively, returning both the date and its lowest temperature. We want to be able to use the function for data from a location south of the equator where spring and autumn are reversed, so the season is specified as “`early`” or “`late`”.
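A hedged reconstruction of that idea, not the author's exact code: for one year-long window, pick the below-freezing days, then take the last one in the first half of the year ("early", i.e. a northern-hemisphere spring) or the first one in the second half ("late"):

```wolfram
frostDate[yearSeries_, season_: "early"] :=
 Module[{frosty},
  frosty = Select[Transpose[{yearSeries["Dates"], yearSeries["Values"]}],
    Last[#] < 0 &];
  Which[
   frosty === {}, Missing["NoFrost"],
   season === "early",
    Last@Select[frosty, DateValue[First[#], "Month"] <= 6 &],
   True,
    First@Select[frosty, DateValue[First[#], "Month"] >= 7 &]]]
```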

We define a function to convert time in seconds since the start of the year to time in days since the start of the year:
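The conversion is just division by the number of seconds in a day:

```wolfram
secondsToDays[t_] := t/86400.
```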

Wow, that’s quite a spread for each histogram, more than two months. We can get the earliest and latest dates on an annual basis by using the `yearAbsoluteTime` function when sorting the dates, and then taking the first and last dates in each list.

Since the latest date we’ve had frost in the spring is May 11, we could naively assume that anytime after that would be OK for planting. But mother nature is not that simple, and we cannot be 100% sure of the latest date for a spring frost. We need to work with probabilities—that is, model the dates as a distribution and then pick the 95th quantile if we want to be wrong 5% of the time, or the 99th quantile if we can tolerate only 1% error. The spring data is roughly symmetrical (the same shape on the left as on the right), so a normal distribution might be a good first estimate for a model. However, our data is too “pointy” in the middle for that, so we’ll use a smooth kernel distribution instead.
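A sketch of the model and the quantiles of interest (`springDays` is a hypothetical list of last-frost days of the year, extracted as above):

```wolfram
springDist = SmoothKernelDistribution[springDays];
Quantile[springDist, {0.5, 0.95, 0.99}]  (* median, 5% risk, 1% risk *)
```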

If I’m willing to be wrong half the time, then I use the dates from the first row where the probability is 0.5; if I want to be wrong only once in twenty years, then I use the row for 0.05 probability. I still don’t know which year I’ll be wrong with this model, but that’s the nature of probabilities.

The *Almanac* predicts April 24 and October 15 for the median (50% probability) last and first frost dates, respectively, which are closer to the 10% probability of our model. Perhaps they have a longer set of data from which they have drawn their conclusions, or they included a two-week buffer for good measure.

How do our last and first frost dates look over time? Is there a trend due to climate change? We can show a moving average and standard deviation superimposed on our data by using `MovingMap`. Here are the spring observations:

And here are the autumn observations:

There seems to be a trend, especially after 1990, to earlier last frost dates in the spring and later first frost dates in the autumn, but we really need much more data to say that with confidence.

Well, we’ve answered our questions for Trenton, New Jersey. We’ve mined the temperature data from the curated weather data in the Wolfram Language, visualized it, built a model, made predictions with the model, compared the predictions to those in *The Old Farmer’s Almanac*, and looked for temporal trends in the data.

Here are some examples for other cities with a shorter growing season (Calgary, Alberta) or located south of the equator (Christchurch, New Zealand), where spring and autumn are reversed. More sophisticated models for predicting the first and last frost dates could be tried, for example, by using machine learning with the previous two months of low and high temperatures.

So, now it’s your turn. Plug in your location and see when it’s safe to plant. You could also compute the growing season from this data or look for temporal patterns (a `Periodogram` might be revealing). Are there other weather aphorisms you could test with the Wolfram Language and its curated data?

Download this post as a Computable Document Format (CDF) file.

Every May since 1956, the League of American Bicyclists has sponsored National Bike Month to highlight the health benefits of bicycling and inspire more people to give it a try. Communities across the country celebrate two-wheeled glory in various ways; among the many events on Champaign-Urbana’s Bike Month calendar is Bike to Work (BTW) Day on May 14.

Wolfram supports our local BTW Day by providing refreshments at a designated refueling station on State Street. Additionally, whether you’re biking to work in CU or elsewhere, we would like to fully prep any intrepid cyclists planning to embark on such a journey by pulling together some vital information.

Say one of our developers was spending time in Dodds Park before coming to work at Wolfram Research. That distance is about 4.6 miles.

What’s the fastest a person could hope to cover that mileage?

Yikes! That might be a little too ambitious. Let’s settle for a more reasonable 15 mph.

What if that commute was made even more challenging (obnoxious) by being all uphill? Champaign doesn’t have many hills to speak of, but our developer Mariusz Jankowski deals with them all the time on his 11-mile bicycle commute in Portland. Mariusz has done some pretty cool Wolfram Language computations with the GPS data he’s gathered. For our purposes, a mere 5% incline is enough to more than double the calories our adventuring developer is burning!

All right, that’s all good to know, but what if she had to do all this for an entire day on Jupiter? The bright side is that it would be awesome training for a *Lost in Space* sequel.

All joking aside, this much cycling needs fuel. How many almonds would she need to eat to power this Jovian trip?

Almonds aren’t very satisfactory, but everyone loves burritos!

There you have it, burritos are clearly the way to go when biking to work on Jupiter.

Gee, that escalated quickly.

If you’re more Earth-based, you may be in need of maps or geolocation services for your trip, or maybe you need to know if your outdoor commute will be long enough for you to sunburn. Check out Stephen Wolfram’s blog post on Apple Watch apps to find out how to track that information!

In the meantime, be sure to look for our station when you begin your own interplanetary, trans-dimensional journey to health and recreation!


Early on, hackathon participants were oriented to the various tools available to aid them with development. I showed the hackathon participants that the Wolfram Language knows about thousands of real-world entities, and that everything in the language is a symbolic expression.

Using CityBikes API to get real-time data, it was easy to track usage in a bike-sharing system in a smart city. Importing and visualizing this data with `GeoGraphics` was straightforward:
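A sketch of that import and visualization, using the public CityBikes v2 endpoint for Barcelona's Bicing network (the exact endpoint and field names are assumptions):

```wolfram
data = Import["http://api.citybik.es/v2/networks/bicing", "JSON"];
stations = "stations" /. ("network" /. data);
GeoGraphics[
 GeoMarker[GeoPosition[{"latitude" /. #, "longitude" /. #}]] & /@ stations]
```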

I ran a scheduled task to obtain the data for Barcelona every 10 minutes. Here is an animation of that data, showing bike usage in Barcelona over a 24-hour period:

Then I explained how I used a Raspberry Pi to digest Friday’s bicycle data overnight. The microprocessor was set up to compute the total number of bicycles available in different cities every 10 minutes from 3:30–8:30am CET:

European cities showed a valley in the number of available bicycles at 8am when people cycled to work. Citizens from New York and Mexico City were found to head back home around 5am CET.

Essential for this hackathon was the new Wolfram Data Drop, an open service that makes it easy to accumulate data of any kind, from anywhere, which works great on the Pi while connected to the Wolfram Cloud. The following is a dataset that I created for Barcelona’s bike-sharing system. Every 3 minutes the total number of parked bicycles is added to a `Databin`:
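A hedged sketch of that loop (`totalParked[]` is a hypothetical helper that queries the CityBikes API and sums the free bikes):

```wolfram
(* append the current total of parked bicycles to a databin
   every 3 minutes *)
bin = CreateDatabin[];
RunScheduledTask[DatabinAdd[bin, totalParked[]], 3*60]
```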

One of the cool features of Data Drop is that you can directly analyze this data through Wolfram|Alpha:

Another dataset that I created using a Raspberry Pi monitored the pedestrian flow happening at the front door of my apartment. If any movement was detected by a PIR motion sensor, the RaspiCam would take a photo, and a new entry would be added into a databin:

This appears in the Data Drop cloud like this:

The result was this `DateListPlot` of cumulative numbers of movements detected:

Then I showed how it could be set up to monitor my home hall’s activity in regular periods of time:

Certainly, this opens up a new world of possibilities. For example, you can use Data Drop to combine data from specific events from different devices. This was exactly what one of the teams did. They set up a Twitter account with `ServiceConnect` to inform people of the current air pollution in “La Diagonal,” Barcelona’s most important avenue. Every 20 minutes they checked the latest values of 10 gas sensors, and then generated and tweeted a `ListLinePlot` with a map of the sensors:

Other smart city projects involved the use of the new Machine Learning capabilities available in *Mathematica* 10, such as `FindFaces` to estimate the number of individuals in a bar, or `BarcodeRecognize` for a universal citizen ID card project. For most of the participants, this was their first encounter with the Wolfram Language, and yet they made useful, functional prototypes in just 48 hours. So I can’t wait to see what they are capable of with just a bit more practice. I wish all of them tons of happy, smart coding!

If you haven’t participated in a hackathon yet, check out the Smart City App Hack. Also feel free to contact us for future events, and don’t forget to have a look at Create, Code, Deploy: Workshop for Hackathons if you missed it. Finally, if you are looking for a three-week-long hackathon, apply now to the Wolfram Innovation Summer School or the Wolfram Science Summer School.

Download this post as a Computable Document Format (CDF) file


Come and join us in Frankfurt for the third European Wolfram Technology Conference, Wolfram Research Europe’s action-packed annual showpiece event.

Set for 2–3 June in Germany’s financial capital, the conference is where our latest releases will be showcased. You can also hear from our team of experts, as well as enjoy the opportunity to connect with Wolfram technology users from all over the world. And there’s still time to register for this event at the Radisson Blu Hotel, Frankfurt!

Throughout the conference, key developers will help you to understand how to master your data with computation and share their insights into how Wolfram’s technology can work for you.

As well as talks and tutorials from our own experts, a carefully selected group of Wolfram enthusiasts—both academic and commercial—will take the floor to share their Wolfram-inspired stories with you, covering topics such as data science, engineering, and education. See the full list of sessions here.

We’re aiming at all levels: from beginners and students to professional experts. Whether you’re working for yourself, a startup, or a multinational, you’ll get plenty out of it. And if you can’t make it, why not send along a colleague who needs to learn more about Wolfram technologies?

June 2 is not far away, so register now!

My idea was to write code with our standard Wolfram Programming Cloud, but instead of producing a web app or web API, to produce an app for the Apple Watch. And conveniently enough, a preliminary version of our Wolfram Cloud app just became available in the App Store—letting me deploy from the Wolfram Cloud to both mobile devices and the watch.

To some extent it was adventure programming. The Apple Watch was just coming out, and the Wolfram Cloud app was still just preliminary. But of course I was building on nearly 30 years of progressive development of the Wolfram Language. And I’m happy to say that it didn’t take long for me to start getting interesting Wolfram Language apps running on the watch. And after less than a day of work—with help from a handful of other people—I had 25 watch-ready apps:

All of these I built by writing code in the Wolfram Programming Cloud (either on the web or the desktop), then deploying to the Wolfram Cloud, and connecting to the Apple Watch via the Wolfram Cloud app. And although the apps were designed for the Apple Watch, you can actually also use them on the web, or on a phone. There are links to the web versions scattered through this post. To get the apps onto your phone and watch, just go to this page and follow the instructions. That page also has all the Wolfram Language source code for the apps, and you can use any Wolfram Language system—Wolfram Programming Cloud (including the free version), *Mathematica* etc.—to experiment with the code for yourself, and perhaps deploy your own version of any of the apps.

So how does it all work? For my first watch-app-writing session, I decided to start by making a tiny app that just generates a single random number. The core Wolfram Language code to do that is simply:

For the watch we want the number to look nice and bold and big, and it might as well be a random color:

We can immediately deploy this publicly to the cloud by saying:

And if you go to that URL in any web browser, you’ll get to a minimal web app which immediately gives a web page with a random number. (The `Delayed` in the code says to delay the computation until the moment the page is accessed or refreshed, so you get a fresh random number each time.)
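The three steps just described appeared as images in the original post; a hedged reconstruction in Wolfram Language might look like:

```wolfram
(* the core: one random number *)
RandomReal[]

(* styled big, bold, and a random color *)
Style[RandomReal[], Bold, 200, RandomColor[]]

(* deployed publicly; Delayed re-evaluates on every page access *)
CloudDeploy[
 Delayed[Style[RandomReal[], Bold, 200, RandomColor[]], "PNG"],
 Permissions -> "Public"]
```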

So what about getting this to the Apple Watch? First, it has to get onto an iPhone. And that’s easy. Because anything that you’ve deployed to the Wolfram Cloud is automatically accessible on an iPhone through the Wolfram Cloud app. To make it easy to find, it’s good to add a recognizable name and icon. And if it’s ultimately headed for the watch, it’s good to put it on a black background:

And now if you go to this URL in a web browser, you’ll find a public version of the app there. Inside the Wolfram Cloud app on an iPhone, the app appears inside the WatchApps folder:

And now, if you touch the app icon, you’ll run the Wolfram Language code in the Wolfram Cloud, and back will come a random number, displayed on the phone:

If you want to run the app again, and get a fresh random number, just pull down from the top of the phone.

To get the app onto the watch, go back to the listing of apps, then touch the watch icon at the top and select the app. This will get the app listed on the watch that’s paired with your phone:

Now just touch the entry for the RandomNumber app and it’ll go to the Wolfram Cloud, run the Wolfram Language code, and display a random number on the watch:

It’s simple to make all sorts of “randomness apps” with the Wolfram Language. Here’s the core of a Coin Flip app:
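The core is presumably little more than a single `RandomChoice`:

```wolfram
RandomChoice[{"heads", "tails"}]
```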

And this is all it takes to deploy the app, to the web, mobile and watch:

One might argue that it’s overkill to use our sophisticated technology stack to do this. After all, it’s easy enough to flip a physical coin. But that assumes you have one of those around (which I, for one, don’t any more). Plus, the Coin Flip app will make better randomness.

What about playing Rock, Paper, Scissors with your watch? The core code for that is again trivial:

There’s a huge amount of knowledge built in to the Wolfram Language—including, in one tiny corner, the knowledge to trivially create a Random Pokemon app:

Here it is running on the watch:

Let’s try some slightly more complex Wolfram Language code. Here’s a Word Inventor that makes a “word” by alternating random vowels and consonants (and often the result sounds a lot like a Pokemon, or a tech startup):

If nothing else, one thing people presumably want to use a watch for is to tell time. And since we’re in the modern internet world, it has to be more fun if there’s a cat or two involved. So here’s the Wolfram Language code for a Kitty Clock:

Which on the watch becomes:

One can get pretty geeky with clocks. Remembering our recent very popular My Pi Day website, here’s some slightly more complicated code to make a Pi Clock where the digits of the current time are displayed in the context where they first occur in pi:

Or adding a little more:

So long as you enable it, the Apple Watch uses GPS, etc. on its paired phone to know where you are. That makes it extremely easy to have a Lat-Long app that shows your current latitude and longitude on the watch (this one is for our company HQ):

I’m not quite sure why it’s useful (prove location over Skype?), but here’s a Here & Now QR app that shows your current location and time in a QR code:

Among the many things the Wolfram Language knows a lot about is geography. So here’s the code to find the ten volcanoes closest to you:
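A minimal sketch of that query:

```wolfram
(* the ten volcano entities nearest your current location *)
GeoNearest["Volcano", Here, 10]
```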

A little more code shows them on a map, and constructs a Nearest Volcanoes app:

Here’s the code for a 3D Topography app, that shows the (scaled) 3D topography for 10 miles around your location:

Since the watch communicates with the Wolfram Cloud, it can make use of all the real-time data that’s flowing into the Wolfram Knowledgebase. That data includes things like the current (*x*,*y*,*z*,*t*) position of the International Space Station:

Given the position, a little bit of Wolfram Language graphics programming gives us an ISS Locator app:

As another example of real-time data, here’s the code for an Apple Quanting app that does some quant-oriented computations on Apple stock:

And here’s the code for a Market Word Cloud app that shows a stock-symbols word cloud weighted by fractional price changes in the past day (Apple up, Google down today):

Here’s the complete code for a geo-detecting Currency Converter app:

It’s easy to make so many apps with the Wolfram Language. Here’s the core code for a Sunrise/Sunset app:
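The core presumably amounts to two built-in functions evaluated at your location:

```wolfram
{Sunrise[Here], Sunset[Here]}
```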

Setting up a convenient display for the watch takes a little more code:

The Wolfram Language includes real-time weather feeds:

Which we can also display iconically:

Here’s the data for the last week of air temperatures:

And with a little code, we can format this to make a Temperature History app:

Sometimes the easiest way to get a result in the Wolfram Language is just to call Wolfram|Alpha. Here’s what Wolfram|Alpha shows on the web if you ask about the time to sunburn (it detects your current location):

Now here’s a real-time Sunburn Time app created by calling Wolfram|Alpha through the Wolfram Language (the different rows are for different skin tones):

The Wolfram Language has access not only to all its own curated data feeds, but also to private data feeds, especially ones in the Wolfram Data Drop.

As a personal analytics enthusiast, I maintain a databin in the Wolfram Data Drop that tells me my current backlog of unprocessed and unread email messages. I have a scheduled task that runs in the cloud and generates a report of my backlog history. And given this, it’s easy to have an SW Email Backlog app that imports this report on demand, and displays it on a watch:

And, yes, the recent increase in unprocessed and unread email messages is at least in part a consequence of work on this blog.

There are now lots of Wolfram Data Drop databins around, and of course you can make your own. And from any databin you can immediately make a watch app that shows a dashboard for it. Like here’s a Company Fridge app based on a little temperature sensor sitting in a break-room refrigerator at our company HQ (the cycling is from the compressor; the spike is from someone opening the fridge):

Databins often get data from just a single source or single device. But one can also have a databin that gets data from an app running on lots of different devices.

As a simple example, let’s make an app that just shows where in the world that app is being accessed from. Here’s the complete code to deploy such a “Data Droplets” app:
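A hedged sketch of how such a deployment might look (the databin ID is a placeholder, and the details are an assumption, not the post's exact code):

```wolfram
bin = Databin["<your-bin-id>"];
CloudDeploy[
 Delayed[
  DatabinAdd[bin, Here];  (* record this device's geo location *)
  GeoGraphics[            (* map the last 20 recorded locations *)
   GeoMarker /@ Take[Reverse[Values[bin]], UpTo[20]]],
  "PNG"],
 Permissions -> "Public"]
```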

The app does two things. First, whenever it’s run, it adds the geo location of the device that’s running it to a central databin in the Wolfram Data Drop. And second, it displays a world map that marks the last 20 places in the world where the app has been used:

A typical reason to run an app on the watch is to be able to see results right on your wrist. But another reason is to use the app to make things happen externally, say through APIs.

As one very simple example, here’s the complete code to deploy an app that mails the app’s owner a map of a 1-mile region around wherever they are when they access the app:
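A hedged sketch of the idea (in the cloud, `SendMail` defaults to the account owner; `GeoCenter` and `GeoRange` frame a 1-mile map around wherever the caller is):

```wolfram
CloudDeploy[
 Delayed[(
   SendMail[GeoGraphics[GeoCenter -> Here, GeoRange -> Quantity[1, "Miles"]]];
   "Map sent!")],
 Permissions -> "Public"]
```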

So far, all the apps we’ve talked about are built from fixed pieces of Wolfram Language code that get deployed once to the Apple Watch. But the Wolfram Language is symbolic, so it’s easy for it to manipulate the code of an app, just like it manipulates any other data. And that means that it’s straightforward to use the Wolfram Language to build and deploy custom apps on the fly.

Here’s a simple example. Say we want to have an app on the watch that gives a countdown of days to one’s next birthday. It’d be very inconvenient to have to enter the date of one’s birthday directly on the watch. But instead we can have an app on the phone where one enters one’s birthday, and then this app can in real time build a custom watch app that gives the countdown for that specific birthday.

Here we enter a birthday in a standard Wolfram Language “smart field” that accepts any date format:

And as soon as we touch Submit, this app runs Wolfram Language code in the Wolfram Cloud that generates a new custom app for whatever birthday we entered, then deploys that generated app so it shows up on our watch:

Here’s the complete code that’s needed to make the Birthday Countdown app-generating app.

And here is the result from the generated countdown app for my birthday:

We can make all sorts of apps like this. Here’s a World Clocks example where you fill out a list of any number of places, and create an app that displays an array of clocks for all those places:

You can also use app generation to put *you* into an app. Here’s the code to deploy a “You Clock” app-generating app that lets you take a picture of yourself with your phone, then creates an app that uses that picture as the hands of a clock:

And actually, you can easily go even more meta, and have apps that generate apps that generate apps: apps all the way down!

When I set out to use the Wolfram Language to make apps for the Apple Watch I wasn’t sure how it would go. Would the deployment pipeline to the watch work smoothly enough? Would there be compelling watch apps that are easy to build in the Wolfram Language?

I’m happy to say that everything has gone much better than I expected. The watch is very new, so there were a few initial deployment issues, which are rapidly getting worked out. But it became clear that there are lots and lots of good watch apps that can be made even with tiny amounts of Wolfram Language code (tweet-a-watch-app?). And to me it’s very impressive that in less than one full day’s work I was able to develop and deploy 25 complete apps.

Of course, what ultimately made this possible is the whole Wolfram Language technology stack that I’ve been building for nearly 30 years. But it’s very satisfying to see all the automation we’ve built work so nicely, and make it so easy to turn ideas into yet another new kind of thing: watch apps.

It’s always fun to program in the Wolfram Language, and it’s neat to see one’s code deployed on something like a watch. But what’s ultimately more important is that it’s going to be very useful to lots of people for lots of purposes. The code here is a good way to get started learning what to do. But there are many directions to go, and many important—or simply fun—apps to create. And the remarkable thing is that the Wolfram Language makes it so easy to create watch apps that they can become a routine part of everyday workflow: just another place where functionality can be deployed.