Some say that Tau Day is really the day to celebrate, and that τ(=2π) should be the most prominent constant, not π. It all started in 2001 with the famous opening line of a watershed essay by Bob Palais, a mathematician at the University of Utah:

“I know it will be called blasphemy by some, but I believe that π is wrong.”

Which has given rise in some circles to the celebration of Tau Day—or, as many people say, the one day on which you are allowed to eat two pies.

But is it true that τ is the better constant? In today’s world, it’s quite easy to test, and the Wolfram Language makes this task much simpler. (Indeed, Michael Trott’s recent blog post on dates in pi—itself inspired by Stephen Wolfram’s Pi Day of the Century post—made much use of the Wolfram Language.) I started by looking at 320,000 preprints from arXiv.org to see in practice how many formulas involve 2π rather than π alone, or other multiples of π.

Here is a `WordCloud` of some formulas containing 2π:

I found that only 18% of formulas considered involve 2π, suggesting that τ, after all, would not be a better choice.

But then why do τ supporters believe that we should switch to this new symbol? One reason is that using τ would make geometry and trigonometry easier to understand and learn. After all, when we learn trigonometry, we don’t measure angles in degrees, but in radians, and there are 2π radians in a circle. This means that 1/4 of a circle corresponds to 1/2 π radians, or π/2, and not a quarter of something! This counterintuitive madness would be resolved by the symbol τ, because every ratio of the circle would have a matching ratio of τ. For example, 1/4 would have an angle of τ/4.
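To make the arithmetic concrete, here is a small Python sketch (purely illustrative, not part of the original analysis) that expresses fractions of a full circle in radians using both constants:

```python
import math

tau = 2 * math.pi  # the proposed constant: tau = 2*pi

# With tau, the radian measure of an angle is simply (fraction of circle) * tau,
# so the fraction and the coefficient of tau always match.
for fraction in (1/8, 1/4, 1/2, 3/4, 1):
    angle = fraction * tau
    print(f"{fraction} of a circle = {angle / tau:.3f} tau = {angle / math.pi:.3f} pi radians")

# A quarter circle is tau/4 radians, which is the familiar pi/2.
assert math.isclose(tau / 4, math.pi / 2)
```

With π the quarter circle comes out as π/2, while with τ the coefficient (1/4) matches the fraction of the circle directly.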

I personally do not have strong feelings against π, and to be honest, I don’t think students would learn trigonometry faster if they were to use τ. Think about the two most important trigonometric functions, sine and cosine. What’s most helpful to remember about them is that sin(π/2) = cos(2π) = 1, and sin(3π/2) = cos(π) = –1. I have not only always preferred cosine simply because it’s easier to remember (there are no fractions in π and 2π), I’ve also always recognized that sine and cosine are different because one is nonzero on integer multiples of π and the other is nonzero on some fractions of it. By using τ instead, this symmetry would be lost, and we would be left with the equalities sin(τ/4) = cos(τ) = 1 and sin(3τ/4) = cos(τ/2) = –1.

Given these observations, it seems like choosing τ or π is a personal choice. That’s fair, but it’s not a rigorous approach for determining which constant is more useful.

Even the approach I had at the beginning could lead to the wrong conclusion. *The Tau Manifesto*, by Michael Hartl, gives some examples of places where 2π is most commonly used:

And indeed, all these formulas would be easier if we used τ. However, those are just six of the vast number of formulas that scientists use regularly, and as I mentioned before, not many mathematical expressions involve 2π. Nevertheless, it could happen that formulas not involving 2π would still be simpler if written in τ. For example, the expression 4π² would simply become τ².

For this reason I looked back at the scientific articles to see whether using τ instead of 2π (and τ/2 instead of π) would make their formulas simpler. For instance, these are some that would be simpler in τ:

And these are some that would not:

Let me now try to explain what I mean by simpler by looking at an example: if I take the term containing π in the bottom-left formula of the *Tau Manifesto* equation table:

I can replace π with τ/2 using `ReplaceAll`, and I get:

Just by looking at these two expressions, you can see that the second one is simpler. It’s not just your intuition that tells you that; it’s clear that there are fewer symbols and constants in the replaced expression. We can look at their corresponding `TreeForm`s to demonstrate it explicitly:

To get a numeric difference, we can look at the leaf counts (number of leaves on the trees), which correspond to the number of symbols and constants in the original formulas:
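The `TreeForm` and `LeafCount` outputs above are notebook graphics; as a rough Python analog (a simplified sketch that counts only symbols and constants, not operator heads, matching the description of leaf counts above), one can represent expressions as nested tuples and count their leaves:

```python
# Expressions as nested tuples: (operator, arg1, arg2, ...); anything else is a leaf.
def leaf_count(expr):
    """Count the symbols and constants (leaves) in an expression tree."""
    if isinstance(expr, tuple):
        return sum(leaf_count(arg) for arg in expr[1:])
    return 1

four_pi_squared = ("*", 4, ("^", "pi", 2))   # 4 * pi^2: leaves 4, pi, 2
tau_squared = ("^", "tau", 2)                # tau^2 = (2*pi)^2: leaves tau, 2

assert leaf_count(four_pi_squared) == 3
assert leaf_count(tau_squared) == 2
```

The τ form of 4π² really does have fewer leaves, which is the sense of “simpler” used throughout this post.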

To see whether τ had an overall simplifying impact, I computed the complexity of each π-containing formula that appeared in the articles (defined as its leaf count, as computed above), both as written with π and as rewritten with τ. To be more precise, I first deleted all the formulas that were equal to just π or 2π; it would have been unfair to include those because, when they appear by themselves, they very often do not stand for formulas. I then compared the number of times the τ formulas were better with the number of times they were not: only 43% of the formulas whose complexity changed at all were actually better, meaning that using τ would make more than half of them look more complex. In other words, based on this comparison, we should keep using π. However, this is not the end of the story.

One observation I made is that if an expression gets either more or less complex, it’s likely to have a leaf count that is less than 40. In fact, if you look at the percentage of formulas that are better when using π or τ and that have a number of leaves that is less than a fixed number, you get this picture:

where the *x* axis represents the upper bound on the number of leaves. This suggests that almost all formulas that become simpler have complexities less than 50, regardless of the symbol we choose.

A more relevant observation is that the situation changes drastically as the complexity of the formulas increases. Already by considering only formulas that have complexities greater than 3, only 48% are simpler in π against 52% that are simpler in τ. The graph below shows how the percentage of formulas that are better in either π or τ changes as a function of the complexity:

As you can see, as the number of leaves exceeds 48, the situation becomes chaotic. This is because only 0.4% of formulas have complexities greater than 50. There are not enough of these for us to deduce anything stable and reasonable about them, and the previous observation tells us that we should not really worry much about them anyway.

What this graph tells me is that in everyday life, and for anything more complex than fairly easy expressions, we should indeed use τ for simplicity. But there is still something else I have not considered. What about different subjects?

It might be that formulas in physics look simpler in τ, but formulas in other subjects do not. The initial search I made included articles from different subjects; however, I didn’t initially check whether the majority of π-containing formulas were from a limited subset of those subjects, or whether the ones that became simpler with τ were mostly from a limited subset. In fact, if I just restrict analysis to articles in mathematics, the situation becomes the following:

Basically, only 23% of formulas benefit from using τ, and those benefits come only when the complexity is fairly high. For instance, something of this sort:

would be an expression that would be simpler in τ, and you probably have not seen many of this type of expression. This suggests that either scientists in different subjects should use different conventions depending on their field-specific formulas, or that all scientific disciplines should switch to τ even though it does not really make sense for some of them to do so. After all, in a democracy, the majority wins, and it is impossible to accommodate everyone.

However, the above formula shows something else that I want to point out. With τ, it becomes this:

And that is not much of an improvement: even though an expression could be easier in τ, the improvement might be so small that it is irrelevant. Consider for instance these two expressions together with their leaf counts:

And the corresponding expressions in τ:

The first formula is simpler in τ, but its leaf count decreases by only 1/13 of the original complexity, whereas the second expression is simpler in π, and its τ version is 1/6 higher than the original complexity. In other words, the first case’s improvement was 1/13 and the second’s was –1/6 (the minus sign indicates negative improvement, as the expression in τ was worse). The mean of this improvement vector is about –0.045, a negative number, which means that using τ makes these two expressions worse on average, even though π and τ each improved one formula.
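The arithmetic behind this improvement vector is simple enough to sketch in Python (the two fractions are the relative changes just described):

```python
# Relative improvement of each formula when rewritten in tau:
# positive = the tau version is shorter, negative = the tau version is longer.
improvements = [1 / 13, -1 / 6]

mean_improvement = sum(improvements) / len(improvements)
print(round(mean_improvement, 3))  # about -0.045

# The mean is negative: on average, rewriting this pair in tau makes it worse,
# even though one of the two formulas individually improved.
assert mean_improvement < 0
```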

This vector approach is different from the one-count-per-equation one that I used earlier. It considers quantity of improvement instead of just an either/or binary, and it completely reverses the previous conclusions. I have computed these vectors for formulas having complexities bounded from below in the same way I did in the previous example. What I’ve seen is that the overall improvement in going from π to τ, computed as the mean of these vectors, looks like this as the complexity increases:

where the *least* worsening, -0.04, is achieved at a complexity of 5. As you can see, the improvement stays below 0 the whole time, meaning that while more formulas may be shorter with τ (depending on the field), on average those length decreases are outweighed by the length increases in the formulas that are getting longer.

So, to conclude this scientific investigation: I think we should be happy with our old friend π and not switch to τ.

I have two final observations. The first is that if we had already lived in a τ world, the conclusion would have been different, and we would have chosen to stick with τ. If our expressions were already in τ and we were investigating whether switching to π would make them simpler, our vector-based graph would look like this:

That difference in behavior is because the vectors used to construct the graphs depend on the original complexities, and so change when the original changes.

This shows that for formulas that have a complexity greater than 2 (most of them do) and for which the complexity is not always greater than 18, the improvement in switching from τ to π would be negative again, suggesting that we should not accept the switch. Unfortunately for supporters of τ, we do not live in a τ world.

The second observation, which was brought to my attention by Michael Trott, is that 2/3 of the formulas shown in *The Tau Manifesto* (the green table at the beginning) don’t just have 2π in them, but the complex number 2π*i*. This suggests that maybe the question I was trying to answer is not the correct one. A better one could be this: would it make sense to have a new symbol τ for the complex number 2π*i*?

This new convention would require changing from π*i* to τ/2 as well, but that doesn’t affect the complexity of π*i*. In general, formulas having a π*i* term inside would either become simpler or preserve their complexity. To give you an idea, here’s a word cloud of formulas that would become simpler:

Which, after substituting τ= 2π*i*, become these:

You could argue that the percentage of improved formulas may not be high enough, and changing from 2π*i* to τ is not worth the effort. What evidence shows, however, is the opposite: of all formulas having a π*i* term, 75% would be simpler, and the remaining 25% would keep their original complexity—none would get worse. This is a strong point to make, and I am not in the position to do it, but I think the equality τ = 2π*i* looks more promising (and less historically disruptive) than τ = 2π.

Whatever your opinion on τ, I hope you have a lovely Tau Day. Please enjoy two pi(e)s today—imaginary or otherwise.


The new Wolfram Language Image Identification Project was the focus of much interest at the conference. Talks on the subject were particularly well received, and there were laughs aplenty when European CEO Conrad Wolfram demonstrated its ability to recognize the aforementioned tropical fruit… which happened to be the nearest object at hand during his opening keynote!

Tom Wickham-Jones’ keynote on the Wolfram Cloud Platform and Oliver Rübenkönig’s engaging “Writing Your Own PDE Solvers” talk were also voted among the most memorable and enjoyable sessions by the delegates in Frankfurt.

The conference covered a wide range of topics for those interested in Wolfram technologies, including *SystemModeler*, the Wolfram Language, and the Wolfram Data Drop. It also explored the technology and vision behind the Computer-Based Math™ education system.

Apart from learning and entertainment, there was also plenty of celebration at the Frankfurt event. Among those with something extra to cheer at the conference dinner was the UnRisk development team, which took one of this year’s Wolfram Innovator Awards. UnRisk was recognized for its highly sophisticated family of financial derivatives and risk analytics products, which are built around the Wolfram Language.

The other Innovator Award winner, also from the finance industry, was André Koppel, of André Koppel Software GmbH. A creator and seller of financial insolvency products that use Wolfram Language and CDF, Koppel has long been a terrific supporter of Wolfram and an advocate of our technology.

Another cause for celebration was the 25th anniversary of the partnership between Wolfram and ADDITIVE GmbH. Our German sales partner’s CEO Andreas Heilemann was on hand to wield the knife for cake-slicing duty on this auspicious occasion, of which we hope to share many more with ADDITIVE.

Stay tuned for news of the date and venue for EWTC 2016!

Here I will concentrate on dates that can be described with a maximum of six digits. This means I’ll be able to uniquely encode all dates between Saturday, March 14, 2015, and Monday, March 15, 1915—a time range of 36,525 days.

We start with a graphical visualization of the topic at hand to set the mood.

This year’s Pi Day was, like every year, on March 14.

Since the centurial Pi Day of the twentieth century, 36,525 days had passed.

We generate a list of all the 36,525 dates under consideration.

For later use, I define a function `dateNumber` that for a given date returns the sequential number of the date, with the first date, Mar 15 1915, numbered 1.
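The Wolfram Language definition of `dateNumber` is not reproduced here, but a hypothetical Python counterpart (using the same epoch, Mar 15 1915 numbered 1) would be:

```python
from datetime import date

EPOCH = date(1915, 3, 15)   # the first date in the range, numbered 1

def date_number(d):
    """Sequential number of a date, with Mar 15 1915 as number 1."""
    return (d - EPOCH).days + 1

assert date_number(date(1915, 3, 15)) == 1
assert date_number(date(2015, 3, 14)) == 36525   # the last of the 36,525 dates
```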

I allow the months January to September to be written without a leading zero—9 instead of 09 for September—and similarly for days. So, for some dates, multiple digit sequences represent them. The function `makeDateTuples` generates all tuples of single-digit integers that represent a date. With slightly different conventions and minimal changes to the code, one could always enforce short dates or always enforce leading zeros. With the optional inclusion of zeros for days and months, I get more possible matches and a richer result, so I will use these in the following. (And if you prefer a European day-month-year date format, then some larger adjustments have to be made to the definition of `makeDateTuples`.)
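The actual `makeDateTuples` is Wolfram Language code not shown here; a Python sketch of the same logic (assuming the month-day-two-digit-year format described above, with optional leading zeros) might look like this:

```python
def make_date_tuples(month, day, year2):
    """All digit tuples representing a date; year2 is the two-digit year."""
    month_forms = [str(month)]
    if month < 10:
        month_forms.append(f"0{month}")   # optional leading zero, Jan-Sep
    day_forms = [str(day)]
    if day < 10:
        day_forms.append(f"0{day}")       # optional leading zero, days 1-9
    year = f"{year2:02d}"
    return [tuple(int(c) for c in m + d + year)
            for m in month_forms for d in day_forms]

# Jan 21 1954 and Dec 1 1954 share the five-digit tuple (1,2,1,5,4),
# which is exactly the kind of ambiguity discussed below.
assert (1, 2, 1, 5, 4) in make_date_tuples(1, 21, 54)
assert (1, 2, 1, 5, 4) in make_date_tuples(12, 1, 54)
```

A date with both a one-digit month and a one-digit day gets four representations; one-digit month or day alone gives two; otherwise there is just one, matching the examples below.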

Some examples with four, two, and one representation:

The next plot shows which days from the last year are representable with four, five, and six digits. The first nine days of the months January to September just need four or five digits to be represented, and the last days of October, November, and December need six.

For a fast (constant time), repeated recognition of a tuple as a date, I define two functions `dateQ` and `dateOf`. `dateOf` gives a normalized form of a date digit sequence. We start with generating pairs of tuples and their date interpretations.

Here are some examples.

Most (77,350) tuples can be uniquely interpreted as dates; some (2,700) have two possible date interpretations.

Here are some of the digit sequences with two date interpretations.

Here are the two date interpretations of the sequence {1,2,1,5,4} as Jan 21 1954 or as Dec 1 1954 recovered by using the function `datesOf`.

These are the counts for the four-, five-, and six-digit representations of dates.

And these are the counts for the number of definitions set up for the function `datesOf`.

For all further calculations, I will use the first ten million decimal digits of pi (later I will see that ten million digits are enough to find any date). We allow for an easy substitution of pi by another constant.

Instead of using the full digit sequence as a string, I will use the digit sequence split into (overlapping) tuples. Then I can operate on each tuple independently and quickly. And I index each tuple with the position of its first digit. For example:

Using the above-defined `dateQ` and `dateOf` functions, I can now quickly find all digit sequences that have a date interpretation.

Here are some of the date interpretations found. Each sublist is of the form {*date*, *startingDigit*, *digitSequenceRepresentingTheDate*}.
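To illustrate the scanning idea in a self-contained way, here is a Python sketch (a simplified stand-in for the Wolfram Language code, hardcoding the first 40 digits of pi and, for simplicity, mapping every two-digit year to 19yy):

```python
from datetime import date

# First 40 decimal digits of pi (3.14159...), as a digit list.
pi_digits = [int(c) for c in "3141592653589793238462643383279502884197"]

def dates_at(digits, start):
    """Date interpretations (month, day, two-digit year) starting at a 1-based position."""
    found = []
    for length in (4, 5, 6):                     # dates use four to six digits
        s = "".join(map(str, digits[start - 1:start - 1 + length]))
        if len(s) < length:
            continue
        # Split s into month / day / two-digit year in every possible way.
        for m_len in (1, 2):
            d_len = length - m_len - 2
            if d_len not in (1, 2):
                continue
            month, day, yy = int(s[:m_len]), int(s[m_len:m_len + d_len]), int(s[-2:])
            try:
                found.append((date(1900 + yy, month, day), start, s))
            except ValueError:                   # not a valid calendar date
                pass
    return found

# The very first digits 3,1,4,1 already encode a date: March 1, 1941.
assert (date(1941, 3, 1), 1, "3141") in dates_at(pi_digits, 1)
```

Each hit has the triple form {date, startingDigit, digitSequence} described above.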

We have found about 8.1 million dates represented as four digits, about 3.8 million dates as five digits, and about 365,000 dates represented as six digits, totaling more than 12 million dates altogether.

Note that I could have used string-processing functions (especially `StringPosition`) to find the positions of the date sequences. And, of course, I would have obtained the same result.

While the use of `StringPosition` is a good approach to deal with a single date, dealing with all 36,525 date sequences this way would have taken much longer.

We pause a moment and have a look at the counts found for the 4-tuples. Out of the 10,000 possible 4-tuples, the 8,100 that represent dates should each appear on average (1/10)⁴·10⁷ = 10³ = 1,000 times, based on the randomness of the digits of pi. And I expect a standard deviation of approximately √1000 ≈ 31.6. Some quick calculations and a plot confirm these numbers.
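These back-of-the-envelope numbers follow from the binomial distribution; here is a quick Python check:

```python
import math

num_digits = 10**7        # digits of pi examined
tuples = 10**4            # number of possible 4-tuples

# Each particular 4-tuple is expected (1/10)^4 * 10^7 = 10^3 = 1,000 times,
mean = num_digits // tuples
# with a binomial standard deviation of about sqrt(1000) ~ 31.6.
std = math.sqrt(mean * (1 - 1 / tuples))

print(mean, round(std, 1))  # 1000 31.6
```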

The histogram of the counts shows the expected bell curve.

And the following graphic shows how often each of the 4-tuples that represent dates were found in the ten million decimal digits. We enumerate the 4-tuples by concatenating the digits; as a result, I see “empty” vertical stripes in the region where no 4-tuples are represented by dates.

Now I continue to process the found date positions. We group the results into sublists of identical dates.

Every date **does** indeed occur in the first 10 million digits, meaning I have 36,525 different dates found. (We will see later that I did not calculate many more digits than needed.)

Here is what a typical member of `dateGroups` looks like.

Now let’s do some statistics on the found dates. Here is the number of occurrences of each date in the first ten million digits of pi. Interestingly, and in the first moment maybe unexpectedly, many dates appear hundreds of times. The periodically occurring vertical stripes result from the October-November-December month quarters.

The mean spacing between the occurrences also clearly shows the early occurrence of the four-digit dates with average spacings below 10,000, the five-digit dates with spacings around 100,000, and the six-digit dates with spacings around one million.

For easier readability, I format the triples {*date*, *startingPosition*, *dateDigitSequence*} in a customized manner.

The most frequent date in the first ten million digits of pi is Aug 6 1939—it occurs 1,362 times.

Now let’s find the least occurring dates in the first ten million digits of pi. These three dates occur only once in the first ten million digits.

And all of these dates occur only twice in the first ten million digits.

Here is the distribution of the number of the date occurrences. The three peaks corresponding to the six-, five-, and four-digit date representations (from left to right) are clearly distinct. The dates that are represented only by 6-tuples each occur just a few times, while the dates with four-digit representations, as we have already seen above, appear on average about 1,200 times.

We can also accumulate by year and display the date interpretations per year (the smaller values at the beginning and end come from the truncation of the dates to ensure uniqueness.) The distribution is nearly uniform.

Let’s have a look at the dates with some “neat” date sequences and how often they occur. As the results in `dateGroups` are sorted by date, I can easily access a given date. When does the date 11-11-11 occur?

And where does the date 1-23-45 occur?

No date starts on its “own” position (meaning there is no example such as January 1, 1945 [1-1-4-5] in position 1145).

But one palindromic case exists: March 3, 1985 (3.3.8.5), which occurs at palindromic position 5833.

A very special date is January 9, 1936: 1.9.3.6 appears at the position of the 1,936^{th} prime, 16,747.

Let’s see what anniversaries happened on this day in history.

While no date appeared at its “own” position, if I slightly relax this condition and search for all dates that overlap with their own digit positions, I do find some dates.

And at more than 100 positions within the first ten million digits of pi, I find the famous pi starting sequence 3,1,4,1,5 again.

Within the digits of pi I do not just find birthday dates, but also physical constant days, such as the ħ-day (the reduced Planck constant day), which was celebrated as the centurial instance on October 5, 1945.

Here are the positions of the matching date sequences.

And here is an attempt to visualize the appearance of all dates. In the date-digit plane, I place a point at the beginning of each date interpretation. We use a logarithmic scale for the digit position, and as a result, the number of points is much larger in the upper part of the graphic.

For the dates that appear early on in the digit sequence, the finite extension of the date over the digits can be visualized too. A date extends over four to six digits in the digit sequence. The next graphic shows all digits of all dates that start within the first 10,000 digits.

After coarse-graining, the distribution is quite uniform.

So far I have taken a date and looked at where this date starts in the digit sequence of pi. Now let’s look from the reverse direction: how many dates intersect at a given digit of pi? To find the total counts of dates for each digit, I loop over the dates and accumulate the counts for each digit.

A maximum of 20 dates occur at a given digit.

Here are two intervals of 200 digits each. We see that most digits are used in a date interpretation.

Above, I noted that I have about 12 million dates in the digit sequence under consideration. The digit sequence that I used was only ten million digits long, and each date needs four to six digits. This means the dates need more than 50 million digits in total. It follows that many of the ten million digits must be shared, each used on average about five times. Only 2,005 out of the first ten million digits are not used in any of the date interpretations, meaning that 99.98% of all digits are used for date interpretations (not all as starting positions).
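This digit-sharing estimate can be checked quickly in Python, using the approximate four-, five-, and six-digit date counts found above:

```python
# Approximate date counts found above, keyed by representation length:
date_counts = {4: 8_100_000, 5: 3_800_000, 6: 365_000}
num_digits = 10**7

# Total digit slots occupied by all date interpretations, divided by the
# number of available digits, gives the average sharing per digit.
digit_slots = sum(length * count for length, count in date_counts.items())
uses_per_digit = digit_slots / num_digits

print(round(uses_per_digit, 1))  # 5.4
```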

And here is the histogram of the distribution of the number of dates present at a certain digit. The back-of-the-envelope average of about five dates per digit is clearly visible.

The 2,005 positions that are not used are approximately uniformly distributed among the first ten million digits.

If I display the concrete positions of the non-used digits versus their expected average position, I obtain a random walk–like graph.

So, what are the neighboring digits around the unused digits? One hundred sixty-two different five-neighborhoods exist. Looking at them immediately shows why the center digits cannot be part of a date: too many sequences of zeros before, at, or after them.

And the largest unused block of digits that appears are the six digits between position 8,127,088 and 8,127,093.

At a given digit, dates from various years overlap. The next graphic shows the range from the earliest to the latest year as a function of the digit position.

These are the unused digits together with three left- and three right-neighboring digits.

Because the high coverage seems, in the first moment, maybe unexpected, I select a random digit position and select all dates that use this digit.

And here is a visualization of the overlap of the dates.

The most-used digit is the 1 at position 2,645,274: 20 possible date interpretations meet at it.

Here are the digits in its neighborhood and the possible date interpretations.

If I plot the years starting at a given digit for a larger number of digits (say the first 10,000), then I see the relatively dense coverage of date interpretations in the digits-date plane.

Let’s now build a graph of dates that are “connected”. We’ll consider two dates connected if the two dates share a certain digit of the digit sequence (not necessarily the starting digit of a date).

Here is the same graph for the first 600 digits, with communities emphasized.

We continue with calculating the mean distance between two occurrences of the same date.

The first occurrences of dates are the most interesting, so let’s extract these. We will work with two versions, one sorted by the date (the list `firstOccurrences`) they represent, and one sorted by the starting position (the list `firstOccurrencesSortedByOccurrence`) in the digits of pi.

Here are the possible date interpretations that start within the first 10 digits of pi.

And here are the other extremes: the dates that appear deepest into the digit expansion.

We see that Wed Nov 23 1960 starts only at position 9,982,546 (= 2·7·713039)—so by starting with the first ten million digits, I was a bit lucky to catch it. Here is a quick direct check of this record-setting date.

So, who are the lucky (well-known) people associated with this number through their birthday?

And what were the Moon phases on the top dozen out-pi-landish dates?

And while Wed Nov 23 1960 is furthest out in the decimal digit sequence, the last prime date in the list is Oct 22 1995.

In general, less than 10% of all first date appearances are prime.

Often one maps the digits of pi to a direction in the plane and forms a random walk. We do the same based on the date differences between consecutive first appearances of dates. We obtain typical-looking 2D random walk images.

Here are the first-occurring date positions for the last few years. The bursts in October, November, and December of each year are caused by the need for five or six consecutive digits, while January to September can be encoded with fewer digits if I skip the optional zeros.

If I include all dates, I get, of course, a much more densely filled graphic.

A logarithmic vertical axis shows that most dates occur between the thousandth and millionth digits.

To get a more intuitive understanding of overall uniformity and local randomness in the digit sequence (and as a result in the dates), I make a Voronoi tessellation of the day-digit plane based on points at the first occurrence of a date. The decreasing density for increasing digits results from the fact that I only take first-date occurrences into account.

Easter Sunday is a good date to visualize, as its date varies over the years.

The mean first occurrence of a date depends, of course, on the number of digits needed to encode it.

The mean occurrence is at 239,083, but due to the outliers at a few million digits, the standard deviation is much larger.

Here are the first occurrences of the “nice” dates that are formed by repetition of a single digit.

The detailed distribution of the first-date occurrences has the largest density within the first few tens of thousands of digits.

A logarithmic axis shows the distribution much better, but because of the increasing bin sizes, the maximum has to be interpreted with care.

The last distribution is mostly a weighted superposition of the first occurrences of four-, five-, and six-digit sequences.

And here is the cumulative distribution of the dates as a function of the digits’ positions. We see that the first 1% of the ten million digits already covers 60% of all dates.

Slightly more dates start at even positions than at odd positions.

We could do the same with mod 3, mod 4, … . The left image shows the deviation of each congruence class from its average value, and the right image shows the higher congruences, all considered again mod 2.

The actual number of first occurrences per year fluctuates around the mean value.

The mean number of first-date interpretations sorted by month clearly shows the difference between the one-digit months and the two-digit months.

The mean number by day of the month (ranging from 1 to 31) is, on average, a slowly increasing function.

Finally, here are the mean occurrences by weekday. Most first date occurrences happen for dates that are Wednesdays.

Above I observed that most digits participate in a possible date interpretation. Only relatively few of them, 121,470, participate in a first-occurring date interpretation.

Some of the position sequences overlap anyway, and I can form network chains of the dates with overlapping digit sequences.

The next graphic shows the increasing gap sizes between consecutive dates.

Distribution of the gap sizes:

Here are pairs of consecutively occurring date-interpretations that have the largest gap between them. The larger gaps were clearly visible in the penultimate graphic.

Now, the very special dates are the ones where the concatenated continued fraction (cf) expansion position agrees with the decimal expansion position. By concatenated continued fraction expansion, I mean the digits on the left at each level of the following continued fraction.

This gives the following cf-pi string:

And, interestingly, there is just one such date.

None of the calculations carried out so far were specific to the digits of pi. The digits of any other irrational number (or even of sufficiently long rational numbers) contain date interpretations. Running some overnight searches, it is straightforward to find many numeric expressions that contain the dates of this year (2015). Here they are put together in an interactive demonstration.

We now come to the end of our musings. As a last example, let’s interpret digit positions as seconds after this year’s pi-time at March 14 9:26:53. How long would I have to wait until seeing the digit sequence 3·1·4·1·5 in the decimal expansion of other constants? Can one find a (small) expression such that 3·1·4·1·5 does not occur in the first million digits? (The majority of the elements of the following list ξs are just directly written down random expressions; the last elements were found in a search for expressions that have the digit sequence 3·1·4·1·5 as far out as possible.)

Here are two rational numbers whose decimal expansions contain the digit sequence:

And here are two integers with the starting digit sequence of pi.

Using the neat new function `TimelinePlot` that Brett Champion described in his last blog post, I can easily show how long I would have to wait.

We encourage readers to explore the dates in the digits of pi more, or to replace pi with another constant (for instance, Euler’s number E, to justify the title of this post), and maybe even replace 10 by another base. The overall, qualitative structures will be the same for almost all irrational numbers. (For a change, try `ChampernowneNumber[10]`.) Will ten million digits be enough to find every date in, say, E (where is October 21, 2014?) Which special dates are hidden in other constants? These and many more things are left to explore.

Download this post as a Computable Document Format (CDF) file.

—Dale Dougherty

I joined the maker movement last year, first by making simple things like a home alarm system, then by becoming a mentor in local hackathons and founding a Wolfram Meetup group in Barcelona. There is likely an open community of makers that you can join close to where you live; if not, the virtual community is open to everyone. So what are you waiting for? With the Raspberry Pi 2 combined with the Wolfram Language, you really have an amazing tool set you can use to make, tinker, and explore.

If there was one general complaint about the Raspberry Pi, it was about its overall performance when running desktop applications like *Mathematica*. The Raspberry Pi Foundation addressed this performance issue early this year by releasing the Raspberry Pi 2 with a quad-core processor and 1 GB of RAM, which has greatly improved the experience of interacting with the device via the Wolfram Language user interface.

Here are 10 different ways to write a “Hello, World!” program for your Pi.

1) Enter a string:

2) Create a panel:

3) Post “Hello, World!” in its own window:

4) Create a button that prints “Hello, World!”:

Hello, World!

5) Make your Raspberry Pi speak “Hello, World!”:

6) Deploy “Hello, World!” to the Wolfram Cloud:

7) Send a “Hello, World!” tweet:

8) Display “Hello!” over the world map and submit it to Wolfram Tweet-a-Program:

9) Program your Pi to say “Hello, World” in Morse code by blinking an LED:

Note that the GPIO interface requires root privileges to control the LED, so you must start *Mathematica* as root by typing `sudo mathematica` at the Raspberry Pi command line.

10) Apply sound to the “Hello, World” Morse code:
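A few of the variants above could be sketched as follows. This is a sketch only: the GPIO pin number (14) is an assumption, and `blink` is a hypothetical helper name, so adapt both to your own wiring and code.

```
"Hello, World!"                 (* 1) a plain string *)
Panel["Hello, World!"]          (* 2) a panel *)
Speak["Hello, World!"]          (* 5) text-to-speech *)
CloudDeploy["Hello, World!"]    (* 6) deploy to the Wolfram Cloud *)

(* 9) blink an LED for one Morse element (requires root for GPIO) *)
gpio = DeviceOpen["GPIO"];
blink[t_] := (DeviceWrite[gpio, 14 -> 1]; Pause[t];
  DeviceWrite[gpio, 14 -> 0]; Pause[0.2])
```

For Morse code, call `blink` with a short pause for a dot and a longer one for a dash.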

This list could go on and on—it’s limited only by your imagination. If you want to send more “Hello, World” Morse code, you can make an optical telegraph. The Community post Raspberry Pi goes to school, by Adriana O’Brien, shows you how.

*This image was created with Fritzing.*

One of the most useful things about using the Wolfram Language on the Pi is that it works seamlessly with the new Wolfram Data Drop open service. This allows you to make an activity tracker in just a few minutes. For example, using Data Drop and a PIR (Passive InfraRed) motion sensor, I kept track of all human movements in my home hallway for several months.

*This image was created with Fritzing.*

Every 20 minutes, the total number of counts was added to a databin, so I could monitor my hallway in real time from anywhere with Wolfram|Alpha. And if I wanted to, I could also analyze the data and create advanced visualizations like in this `DateListPlot` that distinguishes business days from weekends:
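Reading the counts back out of Data Drop for plotting might look like this sketch, where `"hypotheticalBinID"` is a placeholder for a real databin ID:

```
(* sketch: fetch the motion counts from Data Drop and plot them *)
bin = Databin["hypotheticalBinID"];
DateListPlot[bin["TimeSeries"], Joined -> True]
```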

The Wolfram Data Drop also accepts images from the Raspberry Pi camera module, so you can easily make a remote motion trigger with a PIR sensor.

Or you can take several snapshots and make a time lapse, like in this tutorial that turns my animated plant into a moving animal:

The Wolfram Language has all sorts of image processing algorithms built in. But for some applications, the image that comes out of `DeviceRead["RaspiCam"]` is just too small. To get the most out of your 5 MP camera module, use `Import` with the following specifications:
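One way to do this (a sketch; the `raspistill` flags shown here are assumptions that may vary with your camera firmware) is to shell out to `raspistill` and import the full-resolution JPEG directly:

```
(* sketch: capture a full 2592x1944 frame from the camera module *)
img = Import["!raspistill -n -t 1 -w 2592 -h 1944 -o -", "JPG"]
```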

Yes, this is the view from my office window. There is a lot of detail that can be processed in many different ways:

The Wolfram Language on Raspberry Pi 2 is also great for rapid prototyping and 3D printing. It knows how to import and export hundreds of data formats and subformats. For example, here’s how to turn the skeletal polyhedron (specifically, a rhombicuboctahedron) drawn by Leonardo da Vinci into an object file that can be 3D printed:
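As a minimal sketch of the export step (this produces the solid polyhedron; turning it into da Vinci’s open, strut-style skeleton takes additional geometry work, and the file name is arbitrary):

```
(* sketch: export a rhombicuboctahedron as an STL file for 3D printing *)
Export["rhombicuboctahedron.stl",
 PolyhedronData["SmallRhombicuboctahedron", "BoundaryMeshRegion"]]
```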

Finally, let me invite you to join Wolfram Community and show off your own Raspberry Pi projects, discover new ideas to use as starting points in your future creations, or take advantage of the many helpful tutorials that have been posted by fellow users.

Download this post as a Computable Document Format (CDF) file.


**Thomas, could you please explain what SmartCooling is and what it can be used for?**

The SmartCooling library was developed for the simulation of cooling circuits with mechanically and electrically operated auxiliaries. Like in a real workshop—but in a virtual environment—the library offers all the components you need to build your own cooling applications for testing, studying, or performing comprehensive experiments.

It contains a variety of components, such as cooling fans, heat exchangers, and valves, that can be used to create cooling circuits of basically any degree of complexity. But like in real life, selecting which components to use is crucial, and you have to think about how they interact with each other, and what their physical properties are. Of course, the physical data that you put into your models is also very important. Usually, data for SmartCooling models can be taken directly from ready-to-use specification sheets—for example, performance, temperature ranges, operating conditions, speed, etc.—or it is gained from measurements of a real cooling system. The models of the SmartCooling library can be parametrized by entering this data in input fields.

With SmartCooling, it becomes a lot faster and a lot more efficient to design, simulate, and dimension cooling concepts for, for example, automotive applications.

An example of a typical automotive application is to use SmartCooling to investigate new cooling concepts in hybrid electric vehicles (HEV) and their impact on energy consumption. By using modeling and simulation, it can be shown that energy can be saved by replacing the mechanically operated water pump of an internal combustion engine (ICE) with an electrically operated one. A mechanically operated water pump (powered by the ICE) is tied to the speed of the ICE, and so does not always work in its optimal operating range. With an electrically operated water pump, it is possible to control the speed of the pump independently of the speed of the ICE. This results in better cooling of the ICE and an increase in efficiency, since the electric water pump can be operated in its optimal operating range. The energy savings achievable with this approach were the subject of the scientific paper “Optimization of a Cooling Circuit with a Parameterized Water Pump Model” (5th International Modelica Conference, Vienna, Austria).

Also, the SmartCooling library lets you choose between different levels of abstraction in its applications. This means that you can choose to use simplified models or more detailed models where the physical description is more extensive. These usually contain additional parameters and boundary conditions. Being able to choose a level of abstraction that fits your purpose, I think, is a great advantage. Simplified models help to save computing time, for example by using scalability (being able to transform small structures to large structures by using scaling factors), whereas detailed models allow you to more deeply investigate system behavior and phenomena. In contrast to scaling, each model is then considered individually.

**What was your motivation for developing this library?**

The motivation for developing the SmartCooling library came from practice. At our business unit at AIT, the unit for Electric Drive Technologies (EDT), the focus of our work is on automotive applications. We especially target alternative vehicle concepts, like hybrid vehicles, pure electric vehicles, etc.

When you model an entire vehicle, you also have to take thermal aspects, such as thermal management and cooling, into account. The Modelica Standard Library (MSL) offers a lot of basic models and tools to build up applications for the automotive section, but sometimes it is rather unclear which components from which sublibrary of the MSL to use, and if they are appropriate for your particular application. This makes working in this domain difficult. As a result, we developed the SmartCooling library to allow us to model thermodynamics in an easy-to-use manner that is focused on automotive applications.

**Could you give our readers an example of a great SmartCooling application?**

Yes, of course!

A very good and realistic use case is that you need to evaluate a model of an electrically operated water pump. Let’s say it’s for the automotive application mentioned earlier: the mechanically operated water pump of the cooling circuit in a conventional ICE-driven car is to be replaced by an electrically operated one.

The evaluation of models is necessary in order to get realistic simulation results. That also means that the models need to be well parametrized with realistic data. Much of the physical data can be taken from specification sheets, or gained from easily accessible measurement data, but you often find yourself in the situation where the determination of some parameter is difficult, and more detailed measurements have to be done. In this example, a real test bench for the electrically operated water pump is set up.

To measure the pressure difference and flow, the power consumption, and the hydraulic efficiency, sensors have been set up in the test bench. It wouldn’t be possible to do that in the real ICE system in the car because the space there is restricted. Another important reason for using a test bench is that the measurements are not restricted to just one characteristic curve, which would be the case in the real system due to the cooling circuit of the car.

Below is a model of the test bench circuit in *SystemModeler*, modeled with components from the SmartCooling library. The valve component creates a pressure drop in the circuit. While the water pump is driven at a certain constant speed, equal to that of the test bench, the friction losses can be adjusted by the valve. The circuit is investigated for different applied rotational speeds, ranging from 2,000 to 7,000 rpm.

The model is easily built with drag and drop, using the components from the SmartCooling library. The library itself functions much like a virtual lab area providing the right tools and equipment. The figure below shows exactly which components were used in the test bench model:

The remaining components, such as the electric machine and the sensors, can be found in the Modelica Standard Library:

When the model has been fully assembled, the water pump, the pipeline systems, and the electric machine are all parametrized with data obtained from the laboratory test bench.

A very good way of evaluating the test bench model is by using the Wolfram Language. The Wolfram Language makes it easy to run parameter sweeps of the geometrical and mechanical parameters in order to visualize the simulation results as a series of curves. These curves can then be compared with the measured data from the real test bench. Being able to run parameter sweeps in this manner makes the evaluation and validation process a lot simpler, and it leads to a better understanding of how certain values affect the behavior of the model.

Here is a parametric plot of the pressure increase (dp) over the volume flow (Vflow) of the water pump. The plot was generated with *WSMLink* by varying the water pump speed from 2,000 to 7,000 rpm.

So in this test bench example, we used the SmartCooling library to investigate state-of-the-art approaches and innovative system design for cooling architectures. The findings from the SmartCooling model supported the theory that the cooling circuit for a conventional ICE-driven car can be improved if the mechanically operated water pump is replaced by an electrically operated one. This is a great advantage, since a more efficient cooling of the ICE helps to save both energy (less fuel consumption) and money.

The validity of the simulation results, and further details on the evaluation process, are covered in the paper “Optimization of a Cooling Circuit with a Parameterized Water Pump Model.”

**Learn more**

I’d like to thank Thomas for providing us with this fascinating demonstration of the SmartCooling library and its applications. If you, or perhaps your company, would like to try modeling cooling circuits with SmartCooling, it is available for purchase in the *SystemModeler* Library Store. From the website you can also download a free trial of *SystemModeler*. For more SmartCooling examples, check out Battery Stack: Modeling a Cooling Circuit and Cylinder Cooling in our collection of *SystemModeler* industry examples, or visit our online Documentation Center, which hosts the full SmartCooling library documentation.

The rules were simple. Each person had to work on a project outside of their role at Wolfram, the project had to be a not-for-profit hack that fit the theme “Greater Good,” and it could only be completed using technologies available to the public.

The hackathon started at noon and ended at 11pm with a science fair-style showcase of the completed hacks. Some of the completed submissions included:

**Poisonous or Not?**

This hack created both an iPhone Cloud app and a website using Wolfram Programming Cloud, PHP, JavaScript, CSS, and HTML, specifically highlighting our new `ImageIdentify` functionality. The project allows a user to upload a picture of a plant and get a result (with probabilities) on whether that plant is poisonous.

**Data Drop and Node-RED**

During a very recent trip to RoboUniverse, one developer was made aware of an open-source visual tool for the Internet of Things called Node-RED. He wanted to explore this tool and offer users a way to connect Wolfram technologies to their projects. So with this hack, he created a drag-and-drop component to allow users of Node-RED to connect to the Wolfram Data Drop.

**3D Dexterity Assist**

The Dexterity Assist is a 3D-printed object created to aid children using horseback riding as therapy. These children often need assistance with dexterity, and gripping the thin reins of a horse can be difficult for them. The team modeled, tweaked, and 3D-printed a cylindrical object inside which a horse’s reins could be inserted. *Mathematica* was used to create the 3D model, specifically `ParametricPlot3D` and `DiscretizeRegion`, and a MakerBot printed the objects.

Additional participants worked on tools and packages to increase productivity or make an impact for our internal development teams, several of which will remain ongoing explorations. Overall, the hackathon was a huge success for team building and produced some truly awesome results in a short period of time. We can’t wait to plan another!

Know of an upcoming hackathon we should participate in or that you want to use our technology for? Check out our hackathon page and let us know!

Nash’s long career as a mathematician was marked by both brilliant achievements and terrible struggles with mental illness. Despite his battle with schizophrenia, Nash inspired generations of mathematicians and garnered a stunning array of awards, including the 1994 Nobel Prize in economic sciences, the American Mathematical Society’s 1999 Leroy P. Steele Prize for Seminal Contribution to Research, and the 1978 John von Neumann Theory Prize. We were personally honored in 2003 when Nash presented his work with *Mathematica* at the International *Mathematica* Symposium in London.

We are saddened to see the loss of a valuable member of the mathematics community, but will always remember John F. Nash, Jr. for his remarkable life and career. His legacy will remain to encourage us to continue to break new ground, push boundaries, and explore mathematics boldly and fearlessly.

For example, when drawing a graphic, we usually specify the coordinates of its points or elements. But sometimes it’s simpler to express the graphic as a collection of relative displacements: move a distance *r* in a direction forming an angle *θ* with respect to the direction of the segment constructed in the previous step. This is known as turtle graphics in computer graphics, and is basically what the new function `AnglePath` does. If all steps have the same length, use `AnglePath`[{*θ*1,*θ*2,...}] to specify the angles. If each step has a different length, use `AnglePath`[{{r1,*θ*1},{r2,*θ*2}, ...}] to give the pairs {length, angle}. That’s it. Let’s see some results.

Turn 60 degrees to the left with respect to the previous direction six times. You get a hexagon:
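A minimal sketch of that call: six equal turns of 60 degrees, each followed by a unit step, close up into a hexagon.

```
(* six 60-degree turns with unit steps trace a regular hexagon *)
Graphics[Line[AnglePath[ConstantArray[60 Degree, 6]]]]
```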

If the angle is 135 degrees and you repeat eight times, then this is the result. Note that 8 * 135° = 1080°, so we go around the center three times:

Suppose that we again keep turning the same positive angle *θ*, but we increase the lengths of the steps linearly, in increments *dr*, from 0 to 1:
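A sketch of this construction, using the {length, angle} form of `AnglePath` (the particular turning angle and increment chosen here are assumptions for illustration):

```
(* step lengths grow linearly from 0 to 1 while the turn stays fixed *)
With[{θ = 2.02, dr = 0.005},
 Graphics[Line[AnglePath[Table[{r, θ}, {r, 0, 1, dr}]]]]]
```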

Then we get these curves that spiral outward, producing nice outputs:

If we choose the angles randomly, we get random walks on the plane:
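For instance, a random walk with uniformly random turning angles can be sketched as:

```
(* 10,000 unit steps, each turning by a random angle in (-Pi, Pi) *)
Graphics[Line[AnglePath[RandomReal[{-Pi, Pi}, 10000]]]]
```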

Now let’s try to combine multiple `AnglePath` lines. Suppose that at each step we choose randomly between two possible angles and that the lengths obey a power law, so they get smaller in each iteration:

The result does not look very interesting yet:

But if we repeat the experiment 10 times, then we start seeing some structure:

We can construct all different lists of 10 choices of the two angles, using `Tuples` (there are 2^10=1024 possible lists). Replacing `Line` with `BSplineCurve` produces curved lines instead of straight segments. The result is a nice self-similar structure:
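A sketch of this construction follows; the specific angle pair and the 0.8 decay rate are assumptions for illustration, not the values used in the figure:

```
(* one BSplineCurve per 10-tuple of the two angles; step lengths
   shrink by a power law, 0.8^k at step k *)
With[{a = Pi/4, b = -Pi/3},
 Graphics[Table[
   BSplineCurve[AnglePath[Transpose[{0.8^Range[10], t}]]],
   {t, Tuples[{a, b}, 10]}]]]
```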

`AnglePath` allows us to construct fractal structures easily, with very compact code. In fact, the code fits in a tweet! Here are two examples derived from Wolfram Tweet-a-Program:

These curious spirals are approximate Cornu spirals. With larger steps, they develop interesting substructure:

This was a quick introduction to how useful and fun the function `AnglePath` can be. `AnglePath` is supported in Version 10.1 of the Wolfram Language and *Mathematica*, and is rolling out soon in all other Wolfram products. Start using it now, and tweet your results through Wolfram TaP!

Download this post as a Computable Document Format (CDF) file.


I’ve often wondered what the biggest little polyhedra would look like. *Mathematica* 10 introduced `Volume[ConvexHullMesh[points]]`, so I thought I could solve the problem by picking points at random. Below is some code for picking, calculating, and showing a random little polyhedron. If the code is run a thousand times, one of the solutions will be better than the others. Here, I ran it three times. One of these three solutions is (probably) better than the other two.
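A sketch of that random search step might look like this (the point count and sampling region are assumptions; the blog’s actual code may differ):

```
(* sketch: pick n random points, rescale so the maximum pairwise
   distance is 1, then measure the volume of their convex hull *)
n = 8;
pts = RandomReal[{-1, 1}, {n, 3}];
d = Max[EuclideanDistance @@@ Subsets[pts, {2}]];
hull = ConvexHullMesh[pts/d];
{Volume[hull], hull}
```

Running this many times and keeping the largest volume gives a crude baseline solution to improve on.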

With randomly selected points, images like the following emerge from the better solutions. I posted these on Wolfram Community under the discussion Biggest Little Polyhedra, and got some useful comments from Robin Houston and Todd Rowland. I thought of using results from “Visualizing the Thomson Problem” for solutions. In the Thomson problem, electrons repel each other on a sphere. Twelve repelling points move to the vertices of an icosahedron, which is inefficient for BLP, since all the longest distances pass through the center of the bounding sphere, just like the regular hexagon in the polygon case. I modified the Thomson code so that points repelled each other and all polar opposites, and that gave good starting values.

Four points need a regular tetrahedron, with volume √2/12 ≈ 0.117851.

Five points need a unit equilateral triangle with a perpendicular unit line, with volume √3/12 ≈ 0.1443375; solved in 1976 [1].

I’ll use the name 6-BLP for the biggest little polyhedron on 6 points. In 2003, the volume for 6-BLP was solved to four decimals of accuracy [2, 3]. Graphics for 6-BLP and 7-BLP are below, with red lines for the unit diagonals.

To find these on my own, I first picked the best solutions out of a thousand tries, then used simulated annealing to improve the solutions. For each of the points in the good solution, I introduced a tiny bit of randomness to try to find a better solution, thousands of times. Then I introduced a tinier bit of randomness, over and over again. Some of these solutions seemed to go to a symmetrical solution. For example, with seven points, the best solution seemed to be gradually drifting toward this polyhedron, with a value of *r* of about a half, which represents the relative size of the upper triangle △456.

The exact volume can be determined by the tetrahedra defined by the points {{2,3,4,7}, {2,4,6,7}, {5,4,7,6}}, with the volumes of the first two being tripled for symmetry. Look at the volumes of the tetrahedra, and switch any two numbers in a tetrahedron with negative volume.
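The signed-volume check described above can be sketched as a one-liner (`tetVolume` is a hypothetical helper name):

```
(* signed volume of a tetrahedron from its four vertices; a negative
   result means the vertex ordering has the wrong orientation, so
   swap any two of that tetrahedron's vertex numbers *)
tetVolume[{p1_, p2_, p3_, p4_}] := Det[{p2 - p1, p3 - p1, p4 - p1}]/6
```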

After changing the parity of the last tetrahedron, we can calculate the exact *r* that gives the exact optimal volume. In the same way, we’ll also solve a few others.

The solution for 16-BLP takes more than a minute, so I’ve separated it out.

The first value in the solutions is the optimal volume as a `Root` object, and the second is the optimal value of *r*. Here’s a cleaner table of values.

That is far beyond anything I could have solved by hand. With random selection, annealing, symmetry spotting, `Solve[]`, and `Maximize[]`, I was also able to find the exact *n*-BLP (biggest little polyhedron) for *n* = 6, 7, 8, 9, and 16.

Here are a few views of the 8-BLP, with the red tubes showing unit-length diagonals.

Some views of 9-BLP:

Some views of 16-BLP:

The labeled 8-BLP below features perpendicular unit lines 1-2 and 3-4 above and below the origin. The labeled 9-BLP below features stacked triangles △123, △456, and △789.

The labeled 16-BLP below features a truncated tetrahedron on points 1-12 and added points 13-16.

Fairly complicated, right? With sphere point picking, pairs of random numbers, one from –Pi to Pi (longitude) and one from –1 to 1 (height), produce randomly distributed points on a unit sphere. Points on a unit sphere can be mapped back to points in the (–Pi to Pi, –1 to 1) rectangle. With the solutions for 8, 9, and 16, here’s what happens.
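The mapping from the rectangle to the sphere can be sketched as follows (`toSphere` is a hypothetical helper name):

```
(* sphere point picking: longitude φ uniform in (-Pi, Pi) and height z
   uniform in (-1, 1) map to uniformly distributed unit-sphere points *)
toSphere[{φ_, z_}] := {Sqrt[1 - z^2] Cos[φ], Sqrt[1 - z^2] Sin[φ], z};
pts = toSphere /@
  Transpose[{RandomReal[{-Pi, Pi}, 100], RandomReal[{-1, 1}, 100]}];
```

Inverting `toSphere` (recovering φ and z from a sphere point) maps the polyhedron vertices back onto the rectangle.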

For 10-BLP, I haven’t been able to find an exact solution, but I did find a numerical solution to any desired level of accuracy. If anyone can find a `Root` object for this, let me know. Open up the notebook version to see a rather difficult equation in the Initialization section.

Here’s a labeled view of 10-BLP from two different perspectives.

In a similar fashion, a numerical solution for 11-BLP can be found.

Here are two views of 11-BLP.

Have I really solved these? Maybe not. For these particular symmetries, I’m sure I’ve found the local maximum. For example, here’s a function with a local maximum of 5 at the value 1.

Plot a bit more, and the global maximum of 32 can be found at value -2.

In the related Thomson problem, there’s a proof that the 12 vertices of an icosahedron give a minimum energy configuration for 12 electrons. But 7, 8, 9, 10, 11, and 13+ electrons are all considered unsolved. The Kepler conjecture suggested that hexagonal close packing was the densest arrangement of spheres, but a complete formal proof by Thomas Hales wasn’t completed until August 10, 2014. The densest packing of regular tetrahedra, the fraction 4000/4671 = 0.856347…, wasn’t found until July 27, 2010, and still isn’t proven maximal. Take any solution claims with a grain of salt; geometric maximization problems are notoriously tricky.

For months, my best solution for 11 points was in an asymmetric local maximum. Some (or most) of the following solutions are likely local instead of global, but which ones? With that caveat, we can look at best known solutions for 12 points and above.

12-BLP seems to be the point 12, the slightly messy heptagon 11-6-7-10-8-5-9, and the quadrilateral 1-4-3-2.

13-BLP seems to be the point 13, the slightly messy heptagon 12-8-10-6-7-9-11, and the messy pentagon 1-2-3-4-5.

My attempts to add symmetry have resulted in figures with a lower volume.

So far, my best solution for 14-BLP seems to have a lot of symmetry, but I haven’t solved it. I spent some time optimizing a point-heptagon-heptagon solution for 15-BLP, only to watch my randomizer “improve” it relentlessly by increasing volume while sacrificing symmetry.

17-BLP, 18-BLP—I believe 17-BLP has nice symmetry.

19-BLP, 20-BLP—20 is not the dodecahedron, due to inefficient unit lines through the center.

21-BLP, 22-BLP—Lots of 7- and 9-pointed stars.

23-BLP, 24-BLP—My best 24-BLP has tetrahedral symmetry. The snub cube and half the vertices of the great rhombicuboctahedron both have lower volume than 24-BLP.

Here’s some of the symmetry in the current best 24-BLP. Points 1-12 and 13-24 have respective norms of 0.512593 and 0.515168.

16-BLP, 17-BLP—Letting the unit lines define polygons. 16-BLP contains many 7-pointed stars.

The same polyhedra shown as solid objects, using `ConvexHullMesh`[]. That’s BLP 9-10-11-12, 13-14-15-16, 17-18-19-20, 21-22-23-24.

Here’s the current table of the best known values.

Here are the best solutions I’ve found so far for 4 to 24 points.

Let the points be centered so that the maximal distance from the origin is as small as possible. The below scatterplot shows the distance from the origin for vertices of each polyhedron, from 8 to 24 vertices.

*Mathematica* 10.1 managed to exactly solve 6-BLP, 7-BLP, 8-BLP, 9-BLP, and 16-BLP. It found high-precision numerical solutions for 10-BLP and 11-BLP, and made good progress for up to 24 points. That gives solutions for seven previously unsolved problems in combinatorial geometry, all by repeating `Volume[ConvexHullMesh[points]]`. What new features have you had success with?

[1] B. Kind and P. Kleinschmidt, “On the Maximal Volume of Convex Bodies with Few Vertices,” *Journal of Combinatorial Theory*, Series A, 21(1), 1976, pp. 124–128. doi:10.1016/0097-3165(76)90056-X

[2] A. Klein and M. Wessler, “The Largest Small n-dimensional Polytope with n+3 Vertices,” *Journal of Combinatorial Theory*, Series A, 102(2), 2003, pp. 401–409. doi:10.1016/S0097-3165(03)00054-2

[3] A. Klein and M. Wessler, “A Correction to ‘The Largest Small n-dimensional Polytope with n+3 Vertices,’” *Journal of Combinatorial Theory*, Series A, 112(1), 2005, pp. 173–174. doi:10.1016/j.jcta.2005.06.001

Download this post as a Computable Document Format (CDF) file.

If you’re looking for inspiration or just want a taste of what’s to come, videos from last year’s conference are available on our website. We saw an impressive array of presentations from both guests and our very own developers; below is a sampling of some of the most engaging innovations and projects that were shown.

Integrating *Mathematica* and the Unity Game Engine: Not Just for Games Anymore

George Danner

Wolfram Data Science Platform: Data Science in the Cloud

Dillon Tracy

Machine Learning

Etienne Bernard

Stitchcoding and Movie Color Maps

Theo Gray and Nina Paley

Rhino, Meet *Mathematica*

Chris Carlson

This year we have already introduced cutting-edge technologies to the Wolfram Language lineup, including the Wolfram Cloud, *SystemModeler* 4.1, Data Drop, and new *Mathematica* functionalities such as `ImageIdentify` and `GrammarRules`. We’ll see you in October to learn about all this and more!