The first gauge I remember was a blue wrist watch I received from my parents as a child. Their hope was probably to correct my tardiness, but it proved valuable for more important tasks such as timing bicycle races. Today digital gauges help us analyze a variety of data on smart phones and laptops. Battery level, signal strength, network speed, and temperature are some of the common data elements constantly monitored.

Gauges have been a part of the Wolfram Language for a few years.

`PlotTheme` is an exciting new addition to gauges that provides instant styling. A theme name is the only input required; the theme automatically applies the options needed to create a pre-styled gauge.

Here is a sample of the themes for `AngularGauge`.

Incorporating gauges into your work is simple. In fact, you might find using multiple gauges in the same notebook a common occurrence. Set the theme for an entire notebook with the global variable `$PlotTheme`. For example, to set all gauges of a notebook to the “Web” theme, just evaluate `$PlotTheme = "Web"`. `$PlotTheme` can also be set for a cluster of gauges within a single cell, as in the following time zone example.
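Here is a minimal sketch of both usages (the gauge values and ranges are arbitrary examples):

```wolfram
(* Theme a single gauge via the PlotTheme option *)
AngularGauge[7.2, {0, 10}, PlotTheme -> "Web"]

(* Or set a default theme for every gauge in the notebook *)
$PlotTheme = "Web";
{AngularGauge[7.2, {0, 10}], VerticalGauge[23, {0, 40}]}
```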

As with time, the weather always seems to have our attention. A weather dashboard is a convenient way of monitoring current weather conditions. Construct the dashboard using `WeatherData`, which is included in the Wolfram Language. It gives current and historical weather data for all standard weather stations worldwide. `AngularGauge` will display the wind direction and speed, while `VerticalGauge` displays the temperature. `GeoGraphics` is used for the location.
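A sketch of such a dashboard might look like the following. The station ID `"KCMI"` is just an example, the property names follow the `WeatherData` documentation, and `QuantityMagnitude` strips the units that `WeatherData` may attach to its results:

```wolfram
station = "KCMI";  (* example weather station ID *)
Row[{
  AngularGauge[QuantityMagnitude@WeatherData[station, "WindSpeed"],
    {0, 100}, GaugeLabels -> "Wind (km/h)"],
  VerticalGauge[QuantityMagnitude@WeatherData[station, "Temperature"],
    {-30, 45}, GaugeLabels -> "\[Degree]C"],
  GeoGraphics[GeoMarker[WeatherData[station, "Coordinates"]],
    ImageSize -> Small]
}]
```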

Interested in building your own weather station? Arnoud Buzing explains the details in a previous blog post. Interested in styling your own gauges? I can help. You might be wondering if it’s possible to change a particular aspect of a theme. User options automatically override `PlotTheme`, so altering a theme component or color is absolutely possible and encouraged. In essence, a theme can be a starting point for creating your own gauge styles.
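For example, a sketch of overriding individual theme components (the particular option values here are arbitrary):

```wolfram
(* Start from the "Web" theme, then override individual components;
   explicit options win over the theme's defaults *)
AngularGauge[7.2, {0, 10}, PlotTheme -> "Web",
  GaugeFaceStyle -> LightGray, ScaleRanges -> {{8, 10}}]
```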

The world is full of constantly changing data, so what better way to visualize the data than with a colorful gauge from the Wolfram Language. `PlotTheme` handles the task of styling, so implementing a gauge has never been easier. Visit the Gauges documentation page for more information. `PlotTheme` details can be found in the options section of each gauge documentation page. The gauges are `AngularGauge`, `BulletGauge`, `ClockGauge`, `HorizontalGauge`, `ThermometerGauge`, and `VerticalGauge`.

Get a free subscription for Wolfram Programming Cloud to see what you can do with `PlotThemes` in the next release of the Wolfram Language.

We will illustrate this with an example, and you can try it out by downloading a trial version of *SystemModeler* and this example model, and a trial of the Wolfram Hydraulic library.

Most people have probably experienced a product they bought and liked suddenly failing for some reason. During the last few years we have both experienced this problem, including a complete engine breakdown in Johan’s car (the engine had to be replaced) and Jan’s receiver suddenly going completely silent (the receiver had to be sent in for repair and have its network chip replaced).

In both cases it caused problems for the customers (us) as well as for the producer. These are just a couple of examples, and we’re sure you have your own.

*Consumer electronics, satellite systems, and flight systems all have different reasons for valuing reliability.*

In general, a failure might imply warranty costs, like replacing the network chip of the receiver; huge complications in repairing, as for the car engine, or even more for a satellite; or even risk of human life, as with airplanes.

This raises the question of how combining system simulation models with uncertainty quantification can be used to improve system reliability and functionality.

With the addition of system reliability analysis to *SystemModeler*, the reliabilities of systems can be computed from the reliabilities of the components. Let’s have a look.

Let’s start at the component level with a hydraulic pipe, and compute the probability that the hydraulic pipe fails:

*Diagram for a pipe with normal operation, restricted operation, leaking operation, and blocked operation.*

This is a relatively small and simple component, with three different failure modes: it can leak, it can be blocked, or it can be restricted.

Here’s a system incorporating three pipes in which we can examine the different failure modes:

*A model with three pipes, one cylinder, and one pump. Pumping the fluid will lead to the cylinder pushing its rod out and a change of the measured position.*

*Fault detected! The measured position is not moving at all in the simulation with the blocked pipe.*

By comparing what the simulation results should be with what they actually are, we can detect different failures and generate a list of candidates for the culprit. This is studied in the area of fault diagnosis and failure detection, which we won’t pursue here. In the remainder of this post, we’ll focus instead on the overall reliabilities of systems like these.

The pipe can be illustrated as a traditional fault tree, where failure in any of the leaf nodes results in system failure:

*Fault tree for a pipe.*

In the new Reliability view in *SystemModeler*, we can specify the lifetime distributions of the individual components:

*The Reliability view in SystemModeler, where lifetime distributions are assigned to individual components.*

Next we construct the fault tree for the pipe by specifying that a leak, or a restriction, or a blockage will lead to system failure:

*Reliability view for a component with multiple lifetime distributions inside it. Here the fault tree is specified, by entering the Boolean expression for the configuration.*

Now the fault tree is available for analysis in the Wolfram Language:

The `WSMModelReliability` function can return a `FailureDistribution` (when using a fault tree), a `ReliabilityDistribution` (when using a reliability block diagram), or the lifetime distribution of a component. The traditional way to illustrate the reliability of components or systems is by using the `SurvivalFunction`, which describes the probability that the system works at time *t*. For one pipe, it looks like this:
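The same kind of fault tree can also be written down directly in the Wolfram Language. Here is a sketch, with hypothetical lifetime distributions standing in for the ones assigned in *SystemModeler*:

```wolfram
(* The pipe fails if it leaks, or is blocked, or is restricted *)
pipe = FailureDistribution[leak || blocked || restricted,
   {leak \[Distributed] WeibullDistribution[3, 100000],
    blocked \[Distributed] ExponentialDistribution[1/1000000],
    restricted \[Distributed] WeibullDistribution[2, 50000]}];

(* Probability that the pipe still works at time t (in hours) *)
Plot[SurvivalFunction[pipe, t], {t, 0, 100000}]
```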

This distribution behaves like any probability distribution in the Wolfram Language. More than 30 properties can be computed from it, for example the conditional probability that the pipe will last longer than 20,000 hours given that it worked at 10,000 hours:

(The first symbol in the code is the `Conditioned` operator, and the second is the `Distributed` descriptor. The code above could be read as: “The probability that a basic pipe still works after 20,000 hours if it worked for the first 10,000 hours”.)
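In plain input form the computation looks like this, with a hypothetical lifetime distribution standing in for the pipe’s:

```wolfram
(* Hypothetical lifetime distribution for the pipe, in hours *)
dist = WeibullDistribution[3, 100000];

(* P(lifetime > 20000 | lifetime > 10000) *)
Probability[x > 20000 \[Conditioned] x > 10000, x \[Distributed] dist]
```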

Systems are, of course, made up of many pipes. Here is the schematic for the hydraulic power plant of a Cessna aircraft flap system, which incorporates several basic pipe components:

*The hydraulic power plant of a Cessna aircraft flap system, with one tank, two pumps, multiple valves, and fifteen pipes.*

*SystemModeler* automatically detects that the pipes in the power plant have reliability annotations and can compute the reliability of the entire system from them. The first question we’ll ask is how much worse the reliability of the hydraulic power system will be compared to that of an individual pipe:

*Comparison of the reliability of one pipe and the hydraulic power system.*

We can see that a system with many pipes performs far worse than a single one, which is not completely unexpected. This is an illustration of the “weakest link” phenomenon: failure in one pipe will cause system failure.

If we look at the same components in the flap system of the aircraft, we see a similar story.

Next we put the hydraulic power plant and the flap system together (a total of 75 components). In *SystemModeler* this is as easy as specifying that we want “hydraulicPower and flaps”.

*Reliability view for the full Cessna aircraft. Here the reliability distribution is specified using the two components “hydraulicPower” and “flaps”.*

*The reliability functions for the different parts of the system.*

The reliability of the combined system is lower than that of either the flap or the hydraulic power subsystem, a property that generalizes to any system whose subsystems are connected so that a single failure causes system failure.

Finally, let us find out which components are most cost-effective to improve. The Wolfram Language includes nine different importance measure functions, starting from the very basic `StructuralImportance` and going to more advanced measures. Let’s find out which failure in the basic pipe to improve:
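A computation of this sort can be sketched as follows, again using hypothetical lifetime distributions for the pipe’s failure modes:

```wolfram
(* Hypothetical fault tree for the pipe, as before *)
pipe = FailureDistribution[leak || blocked || restricted,
   {leak \[Distributed] WeibullDistribution[3, 100000],
    blocked \[Distributed] ExponentialDistribution[1/1000000],
    restricted \[Distributed] WeibullDistribution[2, 50000]}];

(* Improvement importance of each failure mode over time,
   in the order the components were declared *)
Plot[Evaluate[ImprovementImportance[pipe, t]], {t, 0, 50000},
  PlotLegends -> {"leak", "blocked", "restricted"}]
```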

*The improvement potential for the different failures in a pipe.*

The improvement importance describes how much the system reliability would be increased by replacing a component with a perfect component. The improvement importance is a relative measure, so for a figure to make sense, it has to be put in context with the other components in the system. From the plot it’s clear that figuring out ways to avoid the pipe becoming restricted would improve the reliability for the system the most. We can do the same thing for the full system and compare the flap system to the hydraulic power system:

*The improvement potential for the hydraulic power system is strictly higher than the flap system’s improvement potential.*

From this plot we can learn a couple of things. First, throughout the full lifetime of the system, it pays off more to improve the hydraulic power system than the flap system. Second, it actually pays off more and more over time: the ratio between the power plant’s and the flaps’ improvement importance starts at 1.66 (hard to see in the plot, but easier when comparing the actual numbers) and is strictly increasing from there. For example, at time 3,788 h, when the hydraulic power plant has its highest value, the ratio between the two is 2.08, and at time 10,000 h the ratio is 3.38.

Reliability analysis can show you where to concentrate your engineering effort to produce the most reliable products, estimate where failure will happen, and price warranties accordingly.

For more on what’s new in *SystemModeler* 4 as well as examples, free courses, and fully functional trial software, check out the *SystemModeler* website.

In a previous blog post, “Modeling Aircraft Flap System Failure Scenarios with *SystemModeler*,” the impact of an electrical failure was studied, and the blog post “Reliability Mathematics with *Mathematica*” gives an in-depth look on the reliability analysis functionality in *Mathematica*. Finally, in the free course “Modeling Safety-Critical Systems,” you can learn how component faults can be modeled and how their effect on system behavior can be simulated.

Download this post as a Computable Document Format (CDF) file, and its accompanying models.

This past year, I finally succumbed to the increasingly common practice of recording personal activity data. Over the last few years, I’d noted that my rides had become shorter and easier as the season progressed, so I was mildly interested in verifying this improvement in personal fitness. Using nothing more than a smart phone and a suitable application, I recorded 27 rides between home and work, and then used the Wolfram Language to read, analyze, and visualize the results.

Here is a Google Earth image showing my morning bike route covering a distance of a little under 11 miles, running from east to west.

To transfer data from the smart phone, I used GPX (GPS Exchange Format), a file format supported by major GPS device manufacturers and available in many software applications. A typical GPX file includes time-stamped location and elevation data, and the Wolfram Language returns this data as a list of rules, descriptively named `Geometry`, `PointElevation`, and `PointTimestamp`.
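A hedged sketch of reading such a file, where `"ride.gpx"` is a placeholder filename and the element names follow the description above:

```wolfram
(* Import the GPX track as a list of rules *)
data = Import["ride.gpx", "Data"];

points = "Geometry" /. data;          (* track geometry *)
elevations = "PointElevation" /. data;
timestamps = "PointTimestamp" /. data;
```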

This displays a fragment of one of the GPX data files:

Taking advantage of new geographic data and computation functionality in the Wolfram Language and the available time and position data, I quickly and easily created an animation of the day’s ride (for details of functions, position, and time, see the Initializations in the attached CDF). Click the Play button to view animation.

More interestingly, and with just a bit more effort, I next compared all the morning rides of the season in a single animated race, in effect a rat race to work! The season’s leader is shown in yellow, of course!

The results of this rat race are as follows:

Now for a quick peek at ride times in chronological order. This clearly supports my earlier observation that ride times improved as the season progressed, and as I logged more miles on the road bike.

While the preceding calculations and visualizations are nice, we can do much more. The GPX files contain time-stamped elevation data, so not only does this allow easy visualization of the common road profile, but even better: detection of downhill and uphill segments of the route via new signal processing peak detection functionality.

First, here is the standard road profile:

Prior to locating the peaks and valleys, I smooth the elevation data so as to capture only the large-scale local maxima and minima in the signal. To accomplish this, I first use uniform linear resampling to correct for the highly irregular intervals at which the data was captured, then blur it with a `GaussianFilter`.

The smoothing operation removes spurious peaks and valleys in the elevation data. I determine the remaining large-scale peaks using `FindPeaks` and segment the route into ascending and descending sections.
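The smoothing-and-peak-finding step can be sketched like this, with a synthetic elevation profile standing in for the GPX data:

```wolfram
(* Synthetic elevation signal standing in for the GPX elevations *)
elevation = Table[200. + 30 Sin[x/40.] + RandomReal[{-2, 2}], {x, 0, 1000}];

(* Blur away small bumps, then locate the large-scale peaks and valleys *)
smooth = GaussianFilter[elevation, 25];
peaks = FindPeaks[smooth];
valleys = {1, -1} # & /@ FindPeaks[-smooth]; (* valleys: peaks of the negated signal *)

ListLinePlot[smooth,
  Epilog -> {PointSize[Large], Red, Point[peaks], Blue, Point[valleys]}]
```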

This shows the uphill and downhill sections of the morning ride on an elevation plot:

For an arguably even more useful visualization, here are the rising and falling segments of the ride on a map:

The approximate distances of the uphill and downhill sections are readily available from the ascend and descend lists computed earlier. The following result confirms the simple truth that going to work is *always* harder (i.e., uphill), and therefore less pleasant than the return trip.

I am already looking forward to the next season of riding and more fun in analyzing my data using the Wolfram Language.

Download this post as a Computable Document Format (CDF) file.

We’ve developed a lot of internal tools to help us analyze and extract information from Wikipedia over the years, but now we’ve also added a Wikipedia “integrated service” to the latest version of the Wolfram Language—making it incredibly easy for anyone to incorporate Wiki content into Wolfram Language workflows.

You can simply grab the text of an article, of course, and feed it into some of the Wolfram Language’s new functions for text processing and visualization:

Or if you don’t have a specific article in mind, you can search by title or content:

You can even use Wolfram Language entities directly in `WikipediaData` to, say, get equivalent page titles in any of the dozens of available Wikipedia language versions:

One of my favorite functions allows you to explore article links out from (or pointing in toward) any given article or category—either in the form of a simple list of titles, or as a list of rules that can be used with the Wolfram Language’s powerful functions for graph visualization. In fact, with just a few lines of code, you can create a beautiful and interesting visualization of the shared links between any set of Wikipedia articles:
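A minimal sketch of the idea, assuming the `"LinksRules"` property name from the `WikipediaData` documentation and sampling the rules to keep the graph readable:

```wolfram
(* Links out of an article, as a list of rules suitable for Graph *)
rules = WikipediaData["Graph theory", "LinksRules"];

Graph[Take[rules, Min[60, Length[rules]]], VertexLabels -> "Name"]
```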

There’s a lot of useful functionality here, and we’ve really only scratched the surface. Get a free subscription for Wolfram Programming Cloud to see what you can do with `WikipediaData` in the next release of the Wolfram Language, and watch for many more integrated services to follow throughout the coming year.

That’s the Wolfram Science Summer School, which for the last decade or so has been my favorite time of the year. When it was founded in 2003, the school’s focus was on Stephen Wolfram’s *A New Kind of Science*, but its scope has expanded to include what is now called Wolfram Science. Stephen Wolfram explained in a blog post last year how this school is like entrepreneurship science. It’s not about doing the same old stuff, as you might get in a typical academic environment.

This year we are holding the Summer School again, June 29–July 17; you can find the application link at the bottom of the Summer School home page. I am excited to think about what will be discovered.

But I’m also excited about a new program called the Wolfram Innovation Summer School (also June 29–July 17).

The idea is to focus on the development of technology through innovation. The students who have applied so far come from around the world and have a keen interest in creating new and useful things, be they new products or new companies. As in the Wolfram Science Summer School, the age range for Innovation Summer School attendees may be quite broad, but most participants will probably be in their twenties. (For teenagers interested in innovation, there is the Wolfram Tech Innovation Summer Camp, which is based on the *Mathematica* Summer Camp, both taking place July 6–July 17.)

There are lots of schools and programs out there for entrepreneurship, but what makes the Wolfram Innovation Summer School different is the focus on developing ideas and making things, and of course, getting project advice from Stephen Wolfram and his staff.

Two programs. One for science and a new one for technology. Bring your ideas and creative drive, and explore with us where no one has explored before.

For this experiment, you will need an Arduino Yún (or an equivalent Arduino that has wireless capability), a TMP36 temperature sensor, and a breadboard and jumper wires.

Here is the hardware diagram. Connect the 5V pin to the left pin of the TMP36 sensor, the GND pin to the right pin of the TMP36, and the A0 pin to the middle TMP36 pin:

Once everything is connected and powered up, the TMP36 sensor will send a voltage to the A0 pin. This voltage increases when the temperature goes up and decreases when the temperature goes down. So we can use the voltage reading and interpret it as a temperature. Luckily in this experiment we only needed three jumper cables, so hopefully you did not end up looking like this poor man:

Now we are ready to write the Arduino code that will upload the recorded temperature data to the Wolfram Cloud. Make sure your Arduino Yún is configured to connect to the internet. Then, using the Arduino application, upload the following sketch onto your Arduino after you replace the text “YOUR_BIN_ID” with the “Short ID” of a databin that you created yourself:

To follow along with the code, here is what it does: The Process variable *p* is used for calling a tool called *curl*, which is a way to make HTTP requests with your Arduino. In our case, we call a specific Data Drop URL, which lets you upload small bits of temperature data easily. In the *loop*() section of the code, you can see how the variable *val* is read from the analog pin (A0), and how *val* is then converted from a raw reading to a *temperature* variable. This temperature is then added to an *average* variable exactly 60 times, and on the 60th time, the code executes the block of the *if* statement. This code block uploads the average of the 60 measurements. It also resets all the counters so that everything will start over again. Finally, at the end is a 1000-millisecond delay that will (approximately) space out your recorded temperatures by one second:

To test that everything worked, you can open the Arduino serial monitor. If successful, you will see messages like the one below appear every minute or so:

Now you can put your device in a weather-resistant container (I used a Hefty bag) and place it outside in a location that is shaded for most of the day (like a porch):

Now we’re ready to do some interesting analysis of your temperature data! It’s best to collect at least one full day of data before analyzing it, but the code below should work for shorter time periods as well. First, we need to get the data from the databin we used to upload the temperature data. The Arduino sketch uses *temperature* as the URL parameter for the temperature, so we will need to use the same thing here to retrieve it. In this example, my databin has collected data for about 20 days (for your experiment, replace the text “YOUR_BIN_ID” with the bin ID shown in the output from `CreateDatabin` above):

Now we will need to transform the temperature event series in a few more steps.

First, shift the series to account for the proper time zone (CST). This is done with the `TimeSeriesShift` command.

Next, calibrate the temperatures so they more closely match official NOAA measurements from a nearby weather station. In my case I used an official NOAA weather station (KCMI) to calibrate my $2 TMP36 temperature sensor against the undoubtedly much more expensive and precise official sensor data. Calibration is an important step, and in this case I had to correct my data by about 5 degrees Celsius to match the official data. Another good way to calibrate your TMP36 sensor is to place it in a cup of ice water (exactly 0 degrees Celsius) and a cup of boiling water (exactly 100 degrees Celsius at sea level).

Next, define the time window of interest. In my case the starting point of reliable data was on January 22 at 9pm. You will need to change this to a date that is a good starting point for your data.

Finally, resample the data to evenly spaced periods of 15 minutes. You will quickly notice that recording data every minute will give you a massive amount of data points, and sampling it back to 15-minute intervals will still give you enough data to work with for plots that show multiple days of data.

Here is all the code we just discussed above:
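A sketch of the pipeline might look like the following. The bin ID, the 6-hour shift, the 5-degree offset, and the start date are placeholders for your own values, and the `Databin` accessor is written as I understand the API; check the `Databin` documentation for your version:

```wolfram
(* Retrieve the "temperature" field of the databin as a time series *)
raw = Databin["YOUR_BIN_ID"]["TimeSeries"]["temperature"];

shifted = TimeSeriesShift[raw, -Quantity[6, "Hours"]];   (* UTC -> CST *)
calibrated = shifted - 5;                                 (* calibration offset *)
windowed = TimeSeriesWindow[calibrated,
   {DateObject[{2015, 1, 22, 21, 0, 0}], DateObject[]}];  (* window of interest *)
temperature = TimeSeriesResample[windowed, Quantity[15, "Minutes"]]
```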

At this point you can do a quick check to make sure that your temperature data is looking OK:

But we can make this a lot more useful and interesting. Let’s write a function that will collect a specific point of interest for each day of data, for example the *min* and *max* of this data:

The function above simply generates a new `EventSeries` from the given one and collects the data points that satisfy a particular function for a given day.
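A built-in route to the same result is `TimeSeriesAggregate`, shown here on a synthetic series standing in for the resampled temperature data:

```wolfram
(* Synthetic stand-in for the 15-minute temperature series: 5 days of data *)
temps = TimeSeries[Table[
    {DateObject[{2015, 1, 22, 0, 0, 0}] + Quantity[15 k, "Minutes"],
     10 Sin[2 Pi k/96.] + RandomReal[]}, {k, 0, 5*96}]];

(* One aggregated value per day *)
mins = TimeSeriesAggregate[temps, Quantity[1, "Days"], Min];
maxs = TimeSeriesAggregate[temps, Quantity[1, "Days"], Max];

DateListPlot[{temps, mins, maxs}, Joined -> {True, False, False}]
```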

First let’s create a new `EventSeries` that contains all the daily minimum temperatures:

And now do the same for the daily maximum temperatures:

And now we can plot the temperature data (purple) with the daily minimum temperatures (blue dots) and maximum temperatures (red dots):

And that is it for this experiment! You now have a working weather station, and a dataset that is easy to analyze. By modifying the given code, you can visualize daily, weekly, or monthly averages. Or you can try to make predictions for tomorrow’s weather based on patterns you have observed in the past (and perhaps combine this data with additional weather measurements like pressure and humidity data).

Download this post as a Computable Document Format (CDF) File.

This coming Saturday is “Pi Day of the Century”. The date 3/14/15 in month/day/year format is like the first digits of π=3.1415… And at 9:26:53.589… it’s a “super pi moment”.

Between *Mathematica* and Wolfram|Alpha, I’m pretty sure our company has delivered more π to the world than any other organization in history. So of course we have to do something special for Pi Day of the Century.

One of my main roles as CEO is to come up with ideas—and I’ve spent decades building an organization that’s good at turning those ideas into reality. Well, a number of weeks ago I was in a meeting about upcoming corporate events, and someone noted that Pi Day (3/14) would happen during the big annual SXSW (South by Southwest) event in Austin, Texas. So I said (or at least I thought I said), “We should have a big pi to celebrate Pi Day.”

I didn’t give it another thought, but a couple of weeks later we had another meeting about upcoming events. One agenda item was Pi Day. And the person who runs our Events group started talking about the difficulty of finding a bakery in Austin to make something suitably big. “What are you talking about?” I asked. And then I realized: “You’ve got the wrong kind of pi!”

I guess in our world pi confusions are strangely common. Siri’s voice-to-text system sends Wolfram|Alpha lots of “pie” every day that we have to specially interpret as “pi”. And then there’s the Raspberry Pi, which has the Wolfram Language included. And for me there’s the additional confusion that my personal fileserver happens to have been named “pi” for many years.

After the pi(e) mistake in our meeting we came up with all kinds of wild ideas to celebrate Pi Day. We’d already rented a small park in the area of SXSW, and we wanted to make the most interesting “pi countdown” we could. We resolved to get a large number of edible pie “pixels”, and use them to create a π shape inside a pie shape. Of course, there’ll be the obligatory pi selfie station, with a “Stonehenge” pi. And a pi(e)-decorated Wolfie mascot for additional selfies. And of course we’ll be doing things with Raspberry Pis too.

I’m sure we’ll have plenty of good “pi fun” at SXSW. But we also want to provide pi fun for other people around the world. We were wondering, “What can one do with pi?” Well, in some sense, you can do anything with pi. Because, apart from being the digits of pi, its infinite digit sequence is—so far as we can tell—completely random. So for example any run of digits will eventually appear in it.

How about giving people a personal connection to that piece of math? Pi Day is about a date that appears as the first digits of pi. But any date appears somewhere in pi. So, we thought: Why not give people a way to find out where their birthday (or other significant date) appears in pi, and use that to make personalized pi T-shirts and posters?

In the Wolfram Language, it’s easy to find out where your birthday appears in π. It’s pretty certain that any mm/dd/yy will appear somewhere in the first 10 million digits. On my desktop computer (a Mac Pro), it takes 6.28 seconds (2π?!) to compute that many digits of π.

Here’s the Wolfram Language code to get the result and turn it into a string (dropping the decimal point at position 2):

Now it’s easy to find any “birthday string”:
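A minimal version of that lookup, using the 4/15/92 string “41592” and only the first million digits to keep it quick:

```wolfram
(* Digits of pi as a string, dropping the decimal point at position 2 *)
pidigits = StringDrop[ToString[N[Pi, 10^6]], {2}];

(* First occurrence of the birthday string *)
First[StringPosition[pidigits, "41592"]]  (* {3, 7}: starts at digit position 3 *)
```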

So, for example, my birthday string first appears in π starting at digit position 151,653.

What’s a good way to display this? It depends on how “pi lucky” you are. For those born on 4/15/92, their birthdate already appears at position 3. (Only a fraction of positions correspond to a possible date string.) People born on November 23, 1960 have the birthday string that’s farthest out, appearing only at position 9,982,546. And in fact most people have birthdays that are pretty “far out” in π (the average is 306,150 positions).

Our long-time art director had the idea of using a spiral that goes in and out to display the beginnings and ends of such long digit sequences. And almost immediately, he’d written the code to do this (one of the great things about the Wolfram Language is that non-engineers can write their own code…).

Next came deploying that code to a website. And thanks to the Wolfram Programming Cloud, this was basically just one line of code! So now you can go to MyPiDay.com…

…and get your own piece of π!

And then you can share the image, or get a poster or T-shirt of it:

With all this discussion about pi, I can’t resist saying just a little about the science of pi. But first, just why is pi so famous? Yes, it’s the ratio of circumference to diameter of a circle. And that means that π appears in zillions of scientific formulas. But it’s not the whole story. (And for example most people have never even heard of the analog of π for an ellipse—a so-called complete elliptic integral of the second kind.)

The bigger story is that π appears in a remarkable range of mathematical settings—including many that don’t seem to have anything to do with circles. Like sums of negative powers, or limits of iterations, or the probability that a randomly chosen fraction will not be in lowest terms.

If one’s just looking at digit sequences, pi’s 3.1415926… doesn’t seem like anything special. But let’s say one just starts constructing formulas at random and then doing traditional mathematical operations on them, like summing series, doing integrals, finding limits, and so on. One will get lots of answers that are 0, or 1/2, or some other simple value. And there’ll be plenty of cases where there’s no closed form one can find at all. But when one can get a definite result, my experience is that it’s remarkably common to find π in it.

A few other constants show up too, like *e* (2.7182…), or Euler gamma (0.5772…), or Catalan’s constant (0.9159…). But π is distinctly more common.

Perhaps math could have been set up differently. But at least with math as we humans have constructed it, the number that is π is a widespread building block, and it’s natural that we gave it a name, and that it’s famous—now even to the point of having a day to celebrate it.

What about other constants? “Birthday strings” will certainly appear at different places in different constants. And just like when Wolfram|Alpha tries to find closed forms for numbers, there’s typically a tradeoff between digit position and obscurity of the constants used. So, for example, my birthday string appears at position 151,653 in π, 241,683 in *e*, 45,515 in √2, 40,979 in ζ(3) … and 196 in the 1601st Fibonacci number.

Let’s say you make a plot that goes up whenever a digit of π is 5 or above, and down otherwise:
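Such a plot can be made in a few lines:

```wolfram
(* Step up for digits 5-9, down for digits 0-4, then accumulate the walk *)
digits = First[RealDigits[Pi, 10, 10000]];
walk = Accumulate[If[# >= 5, 1, -1] & /@ digits];

ListLinePlot[walk]
```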

It looks just like a random walk. And in fact, all statistical and cryptographic tests of randomness that have been tried on the digits (except tests that effectively just ask “are these the digits of pi?”) say that they look random too.

Why does that happen? There are fairly simple procedures that generate digits of pi. But the remarkable thing is that even though these procedures are simple, the output they produce is complicated enough to seem completely random. In the past, there wasn’t really a context for thinking about this kind of behavior. But it’s exactly what I’ve spent many years studying in all kinds of systems—and wrote about in *A New Kind of Science*. And in a sense the fact that one can “find any birthday in pi” is directly connected to concepts like my general Principle of Computational Equivalence.

Of course, just because we’ve never seen any regularity in the digits of pi, it doesn’t mean that no such regularity exists. And in fact it could still be that if we did a big search, we might find somewhere far out in the digits of pi some strange regularity lurking.

What would it mean? There’s a science fiction answer at the end of Carl Sagan’s book version of *Contact*. In the book, the search for extraterrestrial intelligence succeeds in making contact with an interstellar civilization that has created some amazing artifacts—and that then explains that what they in turn find remarkable is that encoded in the distant digits of pi, they’ve found intelligent messages, like an encoded picture of a circle.

At first one might think that finding “intelligence” in the digits of pi is absurd. After all, there’s just a definite simple algorithm that generates these digits. But at least if my suspicions are correct, exactly the same is actually true of our whole universe, so that every detail of its history is in principle computable much like the digits of pi.

Now we know that within our universe we have ourselves as an example of intelligence. SETI is about trying to find other examples. The goal is fairly well defined when the search is for “human-like intelligence”. But—as my Principle of Computational Equivalence suggests—I think that beyond that it’s essentially impossible to make a sharp distinction between what should be considered “intelligent” and what is “merely computational”.

If the century-old mathematical suspicion is correct that the digits of pi are “normal”, it means that every possible sequence eventually occurs among the digits, including all the works of Shakespeare, or any other artifact of any possible civilization. But could there be some other structure—perhaps even superimposed on normality—that for example shows evidence of the generation of intelligence-like complexity?
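For reference, “normal” has a precise meaning here. A real number is normal in base 10 if, in its digit expansion, every string of a given length occurs with exactly the limiting frequency a random sequence would give it:

```latex
\lim_{N \to \infty} \frac{c_s(N)}{N} = 10^{-k}
```

where \(c_s(N)\) counts the occurrences of a particular k-digit string s among the first N digits. It is this property that would guarantee every finite sequence eventually appears.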

While it may be conceptually simple, it’s certainly more bizarre to contemplate the possibility of a human-like intelligent civilization lurking in the digits of pi, than in the physical universe as explored by SETI. But if one generalizes what one counts as intelligence, the situation is a lot less clear.

Of course, if we see a complex signal from a pulsar magnetosphere we say it’s “just physics”, not the result of the evolution of a “magnetohydrodynamic civilization”. And similarly if we see some complex structure in the digits of pi, we’re likely to say it’s “just mathematics”, not the result of some “number theoretic civilization”.

One can generalize from the digit sequence of pi to representations of any mathematical constant that is easy to specify with traditional mathematical operations. Sometimes there are simple regularities in those representations. But often there is apparent randomness. And the project of searching for structure is quite analogous to SETI in the physical universe. (One difference, however, is that pi as a number to study is selected as a result of the structure of our physical universe, our brains, and our mathematical development. The universe presumably has no such selection, save implicitly from the fact that we exist in it.)

I’ve done a certain amount of searching for regularities in representations of numbers like pi. I’ve never found anything significant. But there’s nothing to say that any regularities have to be at all easy to find. And there’s certainly a possibility that it could take a SETI-like effort to reveal them.

But for now, let’s celebrate the Pi Day of our century, and have fun doing things like finding birthday strings in the digits of pi. Of course, someone like me can’t help but wonder what success there will have been by the next Pi Day of the Century, in 2115, in either SETI or “SETI among the digits”…

Pictures from the Pi Day event:

Fast-forward a bit, and last year we added `NumberLinePlot` to the Wolfram Language to visualize points, intervals, and inequalities. Once people started seeing the number lines, we began getting requests for similar plots, but with dates and times, so we decided it was time to tackle `TimelinePlot`.

One difference between timelines and number lines, though, is the importance of labels and how commonly they’re used. We had to make it easy to include labels, and we had to build a good system for placing them automatically in a timeline. You can use rules to label dates, and the labels will be positioned to avoid overlapping as much as possible. The goal was to automatically generate timelines of similar quality to those our graphic designers had previously created by hand. Here’s an example of the vacations I took last year, with labels of where my family went, who we visited, or what we did:
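A minimal sketch of such a labeled timeline, with hypothetical dates and labels standing in for the vacation data above (an association’s keys become the labels, and `Interval` of `DateObject`s marks a span):

```wolfram
(* hypothetical stand-in data for the vacation timeline *)
TimelinePlot[<|
  "Ski trip" -> Interval[{DateObject[{2014, 2, 14}], DateObject[{2014, 2, 21}]}],
  "Beach week" -> Interval[{DateObject[{2014, 7, 4}], DateObject[{2014, 7, 11}]}],
  "Family visit" -> DateObject[{2014, 11, 27}]
|>]
```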

It turns out that a lot of the entities that the Wolfram Language knows about have at least one date associated with them, so it’s easy to construct timelines from them. One of my favorite examples, which takes a little bit to set up, is a timeline showing when all the *Star Trek* movies were released:

In this case, labels are stacked very high so that the timeline fits in the width we have available. For such a narrow page, a vertical layout probably works better:

Or a horizontal layout that doesn’t stack the labels in neat columns:

In addition to creating cool posters and movie release timelines, `TimelinePlot` is also useful for tracking flight schedules, logging calendar entries, charting historical people and events, planning conferences and other events, and more.

`TimelinePlot` will be introduced in the next release of the Wolfram Language… stay tuned!

Since we began publishing Demonstrations, nearly 28 million different people have visited the site. We thank our 2,039 contributors (including Stephen Wolfram himself)—and especially the 1,961 independent authors ranging from high school participants in our *Mathematica* Summer Camp to Ivy League professors to real estate aficionados—for their creativity, ingenuity, and dedication. These contributors have made interactive models including rotary engines, Enigma machines, optical illusions, photographic filters, and planetary systems that have been used to create or enhance public information resources, interactive textbooks, journal articles, course websites, and even legislative testimony.

Not surprisingly, Demonstrations have had the greatest impact in education. With the surge of 1:1 initiatives and flipped classrooms, the need for engaging, readily available curriculum resources is greater than ever. We often hear from instructors about how the dynamic visualization features of Demonstrations help students explore and understand hard concepts that are nearly impossible to grasp when viewed only as static pictures.

Surface Morphing from the Wolfram Demonstrations Project by Yu-Sung Chang

It is also rewarding to see users become contributors, using the open code of existing examples as a jumping-off point for their own submissions, or starting from scratch to create a new Demonstration with just a few short lines of code.

The Wolfram Language function `Manipulate` and the Computable Document Format (CDF) form the backbone of the Demonstrations Project. `Manipulate` automates the process of making interactive interfaces with controls such as sliders and radio buttons, while the functions it can call to compute data and create graphics and text span the whole range of the Wolfram Language. Take, for example, this `Plot` expression:

Replace constants with variables, specify the ranges of those variables, and the result is an instant interface:
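As a sketch of that recipe: start from a static `Plot`, turn a constant into a variable, and give it a range inside `Manipulate`:

```wolfram
(* a constant a in Plot[Sin[a x], ...] becomes a slider from 1 to 5 *)
Manipulate[
 Plot[Sin[a x], {x, 0, 2 Pi}],
 {a, 1, 5}]
```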

Once the content is saved as CDF, the free *CDF Player* makes it readily accessible to anyone, increasing the reach of the Demonstrations Project to literally global proportions and spurring the growth of active communities around it.

With the release of *Mathematica* 10, users are able to expand the Demonstrations repertoire even further with new and powerful functions. We added a host of new areas, including machine learning, computational geometry, geographic computation, and more. With last month’s release of a *CDF Player* that also supports these features, the entire Demonstrations community can now benefit, and we are now officially accepting Version 10–based submissions (like the example below that uses new mesh regions functionality).

Triangulating Random and Regular Polygons from the Wolfram Demonstrations Project by George Beck

Because browser plugins won’t be around forever, we’re also working on transitioning Demonstrations into CDFs that run in the cloud with HTML5, no plugin required. Last year we released two extraordinary cloud environments for programming that are making this possible: *Mathematica* Online and Wolfram Programming Cloud.

Are you ready to submit your own Demonstration? Check out these resources:

• Find general tips and hints for authors on the Author Guidelines page

• Explore the basics of the `Manipulate` function in this training video

• Learn Wolfram Language basics for Interactive Manipulation

• Collaborate with others on Wolfram Community

We’re excited to see what new Demonstrations are yet to come. Author a new Demonstration today and help make knowledge accessible for everyone!

When I first started thinking about the Data Drop, I viewed it mainly as a convenience—a means to get data from here to there. But now that we’ve built the Data Drop, I’ve realized it’s much more than that. And in fact, it’s a major step in our continuing efforts to integrate computation and the real world.

So what is the Wolfram Data Drop? At a functional level, it’s a universal accumulator of data, set up to get—and organize—data coming from sensors, devices, programs, or for that matter, humans or anything else. And to store this data in the cloud in a way that makes it completely seamless to compute with.

Our goal is to make it incredibly straightforward to get data into the Wolfram Data Drop from anywhere. You can use things like a web API, email, Twitter, web form, Arduino, Raspberry Pi, etc. And we’re going to be progressively adding more and more ways to connect to other hardware and software data collection systems. But wherever the data comes from, the idea is that the Wolfram Data Drop stores it in a standardized way, in a “databin”, with a definite ID.
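From within the Wolfram Language itself, the route in is just a couple of functions. A minimal sketch (the data values here are hypothetical; `CreateDatabin` and `DatabinAdd` are the relevant functions):

```wolfram
(* make a new databin with its own unique ID, then drop in an entry *)
bin = CreateDatabin[];
DatabinAdd[bin, <|"temperature" -> Quantity[22.5, "DegreesCelsius"],
                  "humidity" -> Quantity[41, "Percent"]|>]
```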

Here’s an example of how this works. On my desk right now I have this little device:

Every 30 seconds it gets data from the tiny sensors on the far right, and sends the data via wifi and a web API to a Wolfram Data Drop databin, whose unique ID happens to be “3pw3N73Q”. Like all databins, this databin has a homepage on the web: wolfr.am/3pw3N73Q.

The homepage is an administrative point of presence that lets you do things like download raw data. But what’s much more interesting is that the databin is fundamentally integrated right into the Wolfram Language. A core concept of the Wolfram Language is that it’s knowledge based—and has lots of knowledge about computation and about the world built in.

For example, the Wolfram Language knows in real time about stock prices and earthquakes and lots more. But now it can also know about things like environmental conditions on my desk—courtesy of the Wolfram Data Drop, and in this case, of the little device shown above.

Here’s how this works. There’s a symbolic object in the Wolfram Language that represents the databin:

And one can do operations on it. For instance, here are plots of the time series of data in the databin:

And here are histograms of the values:

And here’s the raw data presented as a dataset:
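The sequence above can be sketched in a few lines; the `"TimeSeries"` and `"Data"` property names follow the `Databin` interface and are assumptions about how this particular bin is read:

```wolfram
bin = Databin["3pw3N73Q"];        (* the databin ID from this post *)
DateListPlot[bin["TimeSeries"]]   (* time series of each recorded quantity *)
Dataset[bin["Data"]]              (* the raw entries as a structured dataset *)
```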

What’s really nice is that the databin—which could contain data from anywhere—is just part of the language. And we can compute with it just like we would compute with anything else.

So here for example are the minimum and maximum temperatures recorded at my desk:

(for aficionados: `MinMax` is a new Wolfram Language function)

We can convert those to other units (% stands for the previous result):
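A sketch of those two computations; the `"temperature"` key and the `bin["temperature"]` accessor are assumptions about this databin’s semantics signature:

```wolfram
bin = Databin["3pw3N73Q"];
MinMax[bin["temperature"]]            (* smallest and largest recorded values *)
UnitConvert[%, "DegreesFahrenheit"]   (* % is the previous result *)
```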

Let’s pull out the pressure as a function of time. Here it is:

Of course, the Wolfram Knowledgebase has historical weather data. So in the Wolfram Language we can just ask it the pressure at my current location for the time period covered by the databin—and the result is encouragingly similar:

If we wanted, we could do all sorts of fancy time series analysis, machine learning, modeling, or whatever, with the data. Or we could do elaborate visualizations of it. Or we could set up structured or natural language queries on it.

Here’s an important thing: notice that when we got data from the databin, it came with units attached. That’s an example of a crucial feature of the Wolfram Data Drop: it doesn’t just store raw data, it stores data that has real meaning attached to it, so it can be unambiguously understood wherever it’s going to be used.

We’re using a big piece of technology to do this: our Wolfram Data Framework (WDF). Developed originally in connection with Wolfram|Alpha, it’s our standardized symbolic representation of real-world data. And every databin in the Wolfram Data Drop can use WDF to define a “data semantics signature” that specifies how its data should be interpreted—and also how our automatic importing and natural language understanding system should process new raw data that comes in.

The beauty of all this is that once data is in the Wolfram Data Drop, it becomes both universally interpretable and universally accessible, to the Wolfram Language and to any system that uses the language. So, for example, any public databin in the Wolfram Data Drop can immediately be accessed by Wolfram|Alpha, as well as by the various intelligent assistants that use Wolfram|Alpha. Tell Wolfram|Alpha the name of a databin, and it’ll automatically generate an analysis and a report about the data that’s in it:

Through WDF, the Wolfram Data Drop immediately handles more than 10,000 kinds of units and physical quantities. But the Data Drop isn’t limited to numbers or numerical quantities. You can put anything you want in it. And because the Wolfram Language is symbolic, it can handle it all in a unified way.

The Wolfram Data Drop automatically includes timestamps, and, when it can, geolocations. Both of these have precise canonical representations in WDF. As do chemicals, cities, species, networks, or thousands of other kinds of things. But you can also drop things like images into the Wolfram Data Drop.

Somewhere in our Quality Assurance department there’s a camera on a Raspberry Pi watching two recently acquired corporate fish—and dumping an image every 10 minutes into a databin in the Wolfram Data Drop:

In the Wolfram Language, it’s easy to stack all the images up in a manipulable 3D “fish cube” image:

Or to process the images to get a heat map of where the fish spend time:
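One rough way to sketch such a dwell-time map is to average frame-to-frame differences, so pixels where the fish moved show up brightly; the databin ID and the `"image"` key here are hypothetical:

```wolfram
frames = Lookup[Databin["fishbin-id"]["Data"], "image"];
diffs = MapThread[ImageDifference, {Most[frames], Rest[frames]}];
ImageAdjust[Image[Mean[ImageData /@ diffs]]]   (* bright where the fish spend time moving *)
```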

We can do all kinds of analysis in the Wolfram Language. But to me the most exciting thing here is how easy it is to get new real-world data into the language, through the Wolfram Data Drop.

Around our company, databins are rapidly proliferating. It’s so easy to create them, and to hook up existing monitoring systems to them. We’ve got databins now for server room HVAC, for weather sensors on the roof of our headquarters building, for breakroom refrigerators, for network ping data, and for the performance of the Data Drop itself. And there are new ones every day.

Lots of personal databins are being created, too. I myself have long been a personal data enthusiast. And in fact, I’ve been collecting personal analytics on myself for more than a quarter of a century. But I can already tell that March 2015 is going to show a historic shift. Because with the Data Drop, it’s become vastly easier to collect data, with the result that the number of streams I’m collecting is jumping up. I’ll be at least a 25-databin human soon… with more to come.

A really important thing is that because everything in the Wolfram Data Drop is stored in WDF, it’s all semantic and canonicalized, with the result that it’s immediately possible to compare or combine data from completely different databins—and do meaningful computations with it.

So long as you’re dealing with fairly modest amounts of data, the basic Wolfram Data Drop is set up to be completely free and open, so that anyone or any device can immediately drop data into it. Official users can enter much larger amounts of data—at a rate that we expect to be able to progressively increase.

Wolfram Data Drop databins can be either public or private. And they can either be open to add to, or require authentication. Anyone can get access to the Wolfram Data Drop in our main Wolfram Cloud. But organizations that get their own Wolfram Private Clouds will also soon be able to have their own private Data Drops, running inside their own infrastructure.

So what’s a typical workflow for using the Wolfram Data Drop? It depends on what you’re doing. And even with a single databin, it’s common in my experience to want more than one workflow.

It’s very convenient to be able to take any databin and immediately compute with it interactively in a Wolfram Language session, exploring the data in it, and building up a notebook about it.

But in many cases one also wants something to be done automatically with a databin. For example, one can set up a scheduled task to create a report from the databin, say to email out. One can also have the report live on the web, hosted in the Wolfram Cloud, perhaps using CloudCDF to let anyone interactively explore the data. One can make it so that a new report is automatically generated whenever someone visits a page, or one can create a dashboard where the report is continuously regenerated.
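The visit-triggered report, for instance, can be sketched with `Delayed`, which makes deployed content re-evaluate each time the page is loaded; the deployment name here is a hypothetical choice:

```wolfram
CloudDeploy[
 Delayed[DateListPlot[Databin["3pw3N73Q"]["TimeSeries"]]],
 "databin-report", Permissions -> "Public"]
```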

It’s not limited to the web. Once a report is in the Wolfram Cloud, it immediately becomes accessible on standard mobile or wearable devices. And it’s also accessible on desktop systems.

You don’t have to make a report. Instead, you can just have a Wolfram Language program that watches a databin and then, for example, sends out alerts—or takes some other action—whenever whatever combination of conditions you specify occurs.

You can make a databin public, so you’re effectively publishing data through it. Or you can make it private, and available only to the originator of the data—or to some third party that you designate. You can make an API that accesses data from a databin in raw or processed form, and you can call it not only from the web, but also from any programming language or system.

A single databin can have data coming only from one source—or one device—or it can have data from many sources, and act as an aggregation point. There’s always detailed metadata included with each piece of data, so one can tell where it comes from.

For several years, we’ve been quite involved with companies that make connected devices, particularly through our Connected Devices Project. And many times I’ve had a similar conversation: The company will tell me about some wonderful new device they’re making, that measures something very interesting. Then I’ll ask them what’s going to happen with data from the device. And more often than not, they’ll say they’re quite concerned about this, and that they don’t really want to have to hire a team to build out cloud infrastructure and dashboards and apps and so on for them.

Well, part of the reason we created the Wolfram Data Drop is to give such companies a better solution. They deal with getting the data—then they just drop it into the Data Drop, and it goes into our cloud (or their own private version of it), where it’s easy to analyze, visualize, query, and distribute through web pages, apps, APIs, or whatever.

It looks as if a lot of device companies are going to make use of the Wolfram Data Drop. They’ll get their data to it in different ways. Sometimes through web APIs. Sometimes by direct connection to a Wolfram Language system, say on a Raspberry Pi. Sometimes through Arduino or Electric Imp or other hardware platforms compatible with the Data Drop. Sometimes gatewayed through phones or other mobile devices. And sometimes from other clouds where they’re already aggregating data.

We’re not at this point working specifically on the “first yard” problem of getting data out of the device through wires or wifi or Bluetooth or whatever. But we’re setting things up so that with any reasonable solution to that, it’s easy to get the data into the Wolfram Data Drop.

There are different models for people to access data from connected devices. Developers or researchers can come directly to the Wolfram Cloud, through either cloud or desktop versions of the Wolfram Language. Consumer-oriented device companies can choose to set up their own private portals, powered by the Wolfram Cloud, or perhaps by their own Wolfram Private Cloud. Or they can access the Data Drop from a Wolfram mobile app, or their own mobile app. Or from a wearable app.

Sometimes a company may want to aggregate data from many devices—say for a monitoring net, or for a research study. And again their users may want to work directly with the Wolfram Language, or through a portal or app.

When I first thought about the Wolfram Data Drop, I assumed that most of the data dropped into it would come from automated devices. But now that we have the Data Drop, I’ve realized that it’s very useful for dealing with data of human origin too. It’s a great way to aggregate answers—say in a class or a crowdsourcing project—collect feedback, keep diary-type information, do lifelogging, and so on. Once one’s defined a data semantics signature for a databin, the Wolfram Data Drop can automatically generate a form to supply data, which can be deployed on the web or on mobile.

The form can ask for text, or for images, or whatever. And when it’s text, our natural language understanding system can take the input and automatically interpret it as WDF, so it’s immediately standardized.
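Such a form can be sketched with `FormFunction`, whose field types act as interpreters that turn raw input into WDF before it is stored; the field name and databin ID here are assumptions:

```wolfram
CloudDeploy[
 FormFunction[{"temperature" -> "Quantity"},   (* free text interpreted as a quantity *)
  (DatabinAdd[Databin["3pw3N73Q"], #] &)],
 Permissions -> "Public"]
```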

Now that we’ve got the Wolfram Data Drop, I keep on finding more uses for it—and I can’t believe I lived so long without it. As throughout the Wolfram Language, it’s really a story of automation: the Wolfram Data Drop automates away lots of messiness that’s been associated with collecting and processing actual data from real-world sources.

And the result for me is that it’s suddenly realistic for anyone to collect and analyze all sorts of data themselves, without getting any special systems built. For example, last weekend, I ended up using the Wolfram Data Drop to aggregate performance data on our cloud. Normally this would be a complex and messy task that I wouldn’t even consider doing myself. But with the Data Drop, it took me only minutes to set up—and, as it happens, gave me some really interesting results.

I’m excited about all the things I’m going to be able to do with the Wolfram Data Drop, and I’m looking forward to seeing what other people do with it. Do try out the beta that we launched today, and give us feedback (going into a Data Drop databin of course). I’m hoping it won’t be long before lots of databins are woven into the infrastructure of the world: another step forward in our long-term mission of making the world computable…