2 Pi or Not 2 Pi?
http://blog.wolfram.com/2015/06/28/2-pi-or-not-2-pi/
Sun, 28 Jun 2015, by Giorgia Fortuna

Three months ago the world (or at least the geek world) celebrated Pi Day of the Century (3/14/15…). Today (6/28) is another math day: 2π-day, or Tau Day (2π = 6.28319…).

Some say that Tau Day is really the day to celebrate, and that τ (= 2π) should be the most prominent constant, not π. It all started in 2001 with the famous opening line of a watershed essay by Bob Palais, a mathematician at the University of Utah:

“I know it will be called blasphemy by some, but I believe that π is wrong.”

This has given rise in some circles to the celebration of Tau Day—or, as many people say, the one day on which you are allowed to eat two pies.

But is it true that τ is the better constant? In today’s world, it’s quite easy to test, and the Wolfram Language makes this task much simpler. (Indeed, Michael Trott’s recent blog post on dates in pi—itself inspired by Stephen Wolfram’s Pi Day of the Century post—made much use of the Wolfram Language.) I started by looking at 320,000 preprints from arXiv.org to see in practice how many formulas involve 2π rather than π alone, or other multiples of π.

Here is a WordCloud of some formulas containing 2π:


I found that only 18% of formulas considered involve 2π, suggesting that τ, after all, would not be a better choice.

But then why do τ supporters believe that we should switch to this new symbol? One reason is that using τ would make geometry and trigonometry easier to understand and learn. After all, when we learn trigonometry, we don’t measure angles in degrees, but in radians, and there are 2π radians in a circle. This means that 1/4 of a circle corresponds to 1/2 π radians, or π/2, and not a quarter of something! This counterintuitive madness would be resolved by the symbol τ, because every ratio of the circle would have a matching ratio of τ. For example, 1/4 would have an angle of τ/4.
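The fraction-to-angle correspondence the τ camp likes is easy to see in code. The post works in the Wolfram Language (shown here only as images); as a neutral stand-in, here is a small Python sketch using the standard library's math.tau:

```python
import math

def angle_via_pi(fraction):
    """Radians for a given fraction of a full circle, written via pi."""
    return 2 * math.pi * fraction

def angle_via_tau(fraction):
    """Same angle via tau: a fraction of the circle maps directly to the
    same fraction of tau."""
    return math.tau * fraction

# A quarter circle: tau/4 on one side, the less mnemonic pi/2 on the other.
assert angle_via_tau(1 / 4) == math.tau / 4 == math.pi / 2
print(angle_via_tau(1 / 4))
```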

Pi vs. Tau

I personally do not have strong feelings against π, and to be honest, I don’t think students would learn trigonometry faster if they were to use τ. Think about the two most important trigonometric functions, sine and cosine. What’s most helpful to remember about them is that sin(π/2) = cos(2π) = 1, and sin(3π/2) = cos(π) = –1. I have not only always preferred cosine simply because it’s easier to remember (there are no fractions in π and 2π), I’ve also always recognized that sine and cosine are different because one is nonzero on integer multiples of π and the other is nonzero on some fractions of it. By using τ instead, this symmetry would be lost, and we would be left with the equalities sin(τ/4) = cos(τ) = 1 and sin(3τ/4) = cos(τ/2) = –1.
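Numerically, of course, the two notations describe the same values; a quick Python check (a stand-in for the Wolfram Language used in the post) confirms the equalities:

```python
import math

tau = math.tau  # tau = 2*pi

# In pi notation: sin(pi/2) = cos(2*pi) = 1 and sin(3*pi/2) = cos(pi) = -1.
assert math.isclose(math.sin(math.pi / 2), 1.0)
assert math.isclose(math.cos(2 * math.pi), 1.0)
assert math.isclose(math.sin(3 * math.pi / 2), -1.0)
assert math.isclose(math.cos(math.pi), -1.0)

# In tau notation: sin(tau/4) = cos(tau) = 1 and sin(3*tau/4) = cos(tau/2) = -1.
assert math.isclose(math.sin(tau / 4), 1.0)
assert math.isclose(math.cos(tau), 1.0)
assert math.isclose(math.sin(3 * tau / 4), -1.0)
assert math.isclose(math.cos(tau / 2), -1.0)
print("identities hold in both notations")
```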

Given these observations, it seems like choosing τ or π is a personal choice. That’s fair, but it’s not a rigorous approach for determining which constant is more useful.

Even the approach I had at the beginning could lead to the wrong conclusion. The Tau Manifesto, by Michael Hartl, gives some examples of places where 2π is most commonly used:

 Examples of places where 2 Pi is most commonly used

And indeed, all these formulas would be easier if we used τ. However, those are just six of the vast number of formulas that scientists use regularly, and as I mentioned before, not many mathematical expressions involve 2π. Nevertheless, it could happen that formulas not involving 2π would still be simpler if written in τ. For example, the expression 4π² would simply become τ².

For this reason I looked back at the scientific articles to see whether using τ instead of 2π (and τ/2 instead of π) would make their formulas simpler. For instance, these are some that would be simpler in τ:
Formulas simpler in Tau

And these are some that would not:

Formulas not simpler in Tau

Let me now try to explain what I mean by simpler by looking at an example: if I take the term containing π in the bottom-left formula of the Tau Manifesto equation table:

formula = 1/(Sqrt[2 π] σ)

I can replace π with τ/2 using ReplaceAll, and I get:

Using ReplaceAll in formulas

Just by looking at these two expressions, you can see that the second one is simpler. It’s not just your intuition that tells you that; it’s clear that there are fewer symbols and constants in the replaced expression. We can look at their corresponding TreeForms to demonstrate it explicitly:

Using a TreeForm to look at two different expressions

To get a numeric difference, we can look at the leaf counts (number of leaves on the trees), which correspond to the number of symbols and constants in the original formulas:

Looking at leaf counts to get a numeric difference
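The leaf-count idea can be imitated outside the Wolfram Language too. As a rough Python stand-in for LeafCount (my approximation, not the post's actual code), the sketch below parses an expression and counts its leaf nodes, i.e. numbers and symbols, for the example formula and its τ form:

```python
import ast

def leaf_count(expr: str) -> int:
    """Count the leaves (numbers and symbols) of a parsed expression tree."""
    tree = ast.parse(expr, mode="eval")
    return sum(isinstance(node, (ast.Constant, ast.Name))
               for node in ast.walk(tree))

pi_form  = "1 / (sqrt(2 * pi) * sigma)"  # the example formula, in pi
tau_form = "1 / (sqrt(tau) * sigma)"     # after replacing 2*pi by tau

print(leaf_count(pi_form), leaf_count(tau_form))  # 5 4: the tau form is simpler
```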

To see whether τ had an overall simplifying impact, I computed the complexity of each π-containing formula in the articles (defined as its leaf count, as computed above), both in its π form and in its τ form. To be precise, I first deleted all the formulas that were simply equal to π or 2π; it would have been unfair to count those, because when they appear by themselves they usually do not stand for formulas. I then compared the number of times the τ formulas were better with the number of times they were not. Only 43% of the formulas whose complexity changed at all were actually better, meaning that using τ would make more than half of them look more complex. In other words, based on this comparison, we should keep using π. However, this is not the end of the story.

One observation I made is that if an expression gets either more or less complex, it’s likely to have a leaf count that is less than 40. In fact, if you look at the percentage of formulas that are better when using π or τ and that have a number of leaves that is less than a fixed number, you get this picture:

Percentage of formulas that are better when using Pi or Tau and that have a number of leaves that is less than a fixed number

where the x axis represents the upper bound on the number of leaves. This suggests that almost all formulas that become simpler have complexities less than 50, regardless of the symbol we choose.

A more relevant observation is that the situation changes drastically as the complexity of the formulas increases. Considering only formulas with complexity greater than 3, like 1/(Sqrt[2 π] σ) from earlier, only 48% are simpler in π, against 52% that are simpler in τ. The graph below shows how the percentage of formulas that are better in either π or τ changes as a function of the complexity:

Percentage of formulas that are better in either Pi or Tau changes as a function of the complexity

As you can see, as the number of leaves exceeds 48, the situation becomes chaotic. This is because only 0.4% of formulas have complexities greater than 50. There are not enough of these for us to deduce anything stable and reasonable about them, and the previous observation tells us that we should not really worry much about them anyway.

What this graph tells me is that in everyday life, and for anything more complex than fairly easy expressions like 1/(Sqrt[2 π] σ), we should indeed use τ for simplicity. But there is still something else I have not considered. What about different subjects?

It might be that formulas in physics look simpler in τ, but formulas in other subjects do not. The initial search I made included articles from different subjects; however, I didn’t initially check whether the majority of π-containing formulas were from a limited subset of those subjects, or whether the ones that became simpler with τ were mostly from a limited subset. In fact, if I just restrict analysis to articles in mathematics, the situation becomes the following:

Restricting analysis to articles in mathematics

Basically, only 23% of formulas benefit from using τ, and those benefits come only when the complexity is fairly high. For instance, something of this sort:

An expression that would be simpler in Tau

would be an expression that would be simpler in τ, and you probably have not seen many of this type of expression. This suggests that either scientists in different subjects should use different conventions depending on their field-specific formulas, or that all scientific disciplines should switch to τ even though it does not really make sense for some of them to do so. After all, in a democracy, the majority wins, and it is impossible to accommodate everyone.

However, the above formula shows something else that I want to point out. With τ, it becomes this:

Formula with Tau instead

And that is not much of an improvement: even though an expression could be easier in τ, the improvement might be so small that it is irrelevant. Consider for instance these two expressions together with their leaf counts:

Comparing these two expressions together with their leaf counts

And the corresponding expressions in τ:

Corresponding expressions in Tau

The first formula is simpler in τ, but its leaf count is only 1/13 smaller than the original complexity, whereas the second expression is simpler in π, its τ version being 1/6 higher than the original complexity. In other words, the first case’s improvement was 1/13 and the second’s was –1/6 (the minus sign indicates negative improvement, as the expression in τ was worse). The mean of the vector (1/13, –1/6) is about –0.045, a negative number, which means that using τ in these two expressions makes them about 4.5% worse on average, even though π and τ each improved one formula.
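In the post's terms, the improvement of a single formula is the relative change in its leaf count. A small Python sketch (with hypothetical leaf counts chosen to reproduce the 1/13 and –1/6 of the two examples) gives the mean:

```python
def improvement(pi_leaves, tau_leaves):
    """Relative improvement when switching a formula from pi to tau;
    negative values mean the tau form is more complex."""
    return (pi_leaves - tau_leaves) / pi_leaves

# Hypothetical leaf counts reproducing the two examples: +1/13 and -1/6.
pairs = [(13, 12), (12, 14)]
vec = [improvement(p, t) for p, t in pairs]
mean = sum(vec) / len(vec)
print(round(mean, 3))  # -0.045: on average this pair gets slightly worse in tau
```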

This vector approach is different from the one-count-per-equation one that I used earlier. It considers quantity of improvement instead of just an either/or binary, and it completely reverses the previous conclusions. I have computed these vectors for formulas having complexities bounded from below in the same way I did in the previous example. What I’ve seen is that the overall improvement in going from π to τ, computed as the mean of these vectors, looks like this as the complexity increases:

Overall improvement in going from Pi to Tau computed as the mean of these vectors

where the least worsening, -0.04, is achieved at a complexity of 5. As you can see, the improvement stays below 0 the whole time, meaning that while more formulas may be shorter with τ (depending on the field), on average those length decreases are outweighed by the length increases in the formulas that are getting longer.

So, to state my point at the end of this scientific investigation: I think we should be happy with our old friend π and not switch to τ.

I have two final observations. The first is that if we had already lived in a τ world, the conclusion would have been different, and we would have chosen to stick with τ. If our expressions were already in τ and we were investigating whether switching to π would make them simpler, our vector-based graph would look like this:

Expressions were already in Tau and we were investigating whether switching to Pi would make them simpler

That difference in behavior is because the vectors used to construct the graphs depend on the original complexities, and so change when the original changes.

This shows that for formulas that have a complexity greater than 2 (most of them do) and for which the complexity is not always greater than 18, the improvement in switching from τ to π would be negative again, suggesting that we should not accept the switch. Unfortunately for supporters of τ, we do not live in a τ world.

The second observation, which was brought to my attention by Michael Trott, is that 2/3 of the formulas shown in The Tau Manifesto (the green table at the beginning) don’t just have 2π in them, but the complex number 2πi. This suggests that maybe the question I was trying to answer is not the correct one. A better one could be this: would it make sense to have a new symbol τ for the complex number 2πi?

This new convention would require changing from πi to τ/2 as well, but that doesn’t affect the complexity of πi. In general, formulas having a πi term inside would either become simpler or preserve their complexity. To give you an idea, here’s a word cloud of formulas that would become simpler:

WordCloud of formulas that would become simpler

Which, after substituting τ= 2πi, become these:

Substituting Tau equals 2 Pi i in a WordCloud

You could argue that the percentage of improved formulas may not be high enough, and changing from 2πi to τ is not worth the effort. What evidence shows, however, is the opposite: of all formulas having a πi term, 75% would be simpler, and the remaining 25% would keep their original complexity—none would get worse. This is a strong point to make, and I am not in the position to do it, but I think the equality τ = 2πi looks more promising (and less historically disruptive) than τ = 2π.
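The prevalence of 2πi traces back to Euler’s formula, e^(2πi) = 1. A quick numerical check of what the hypothetical constant τ = 2πi would give (a Python stand-in, not from the post):

```python
import cmath

tau = 2j * cmath.pi  # the proposed complex constant: tau = 2*pi*i

# Euler's formula: e**(2*pi*i) = 1, so e**tau = 1 and e**(tau/2) = -1.
print(cmath.exp(tau))      # approximately (1+0j)
print(cmath.exp(tau / 2))  # approximately (-1+0j)
```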

Whatever your opinion on τ, I hope you have a lovely Tau Day. Please enjoy two pi(e)s today—imaginary or otherwise.

EWTC 2015: Celebrating Wolfram in Europe
http://blog.wolfram.com/2015/06/26/ewtc-2015-celebrating-wolfram-in-europe/
Fri, 26 Jun 2015, by Richard Asher

Thirty talks, one Wolfram Language code tutorial, one image processing workshop, and 130 delegates—plus a rogue appearance from a strategically placed pineapple—all added up to another successful and entertaining European Wolfram Technology Conference in Germany earlier this month.

European Technology Conference with Image Identify

The new Wolfram Language Image Identification Project was the focus of much interest at the conference. Talks on the subject were particularly well received, and there were laughs aplenty when European CEO Conrad Wolfram demonstrated its ability to recognize the aforementioned tropical fruit… which happened to be the nearest object at hand during his opening keynote!

Tom Wickham-Jones’ keynote on the Wolfram Cloud Platform and Oliver Rübenkönig’s engaging “Writing Your Own PDE Solvers” talk were also voted among the most memorable and enjoyable sessions by the delegates in Frankfurt.

The conference covered a wide range of topics for those interested in Wolfram technologies, including SystemModeler, the Wolfram Language, and the Wolfram Data Drop. It also explored the technology and vision behind the Computer-Based Math™ education system.

Development team at European Wolfram Technology Conference

Apart from learning and entertainment, there was also plenty of celebration at the Frankfurt event. Among those with something extra to cheer at the conference dinner was the UnRisk development team, which took one of this year’s Wolfram Innovator Awards. UnRisk was recognized for its highly sophisticated family of financial derivatives and risk analytics products, which are built around the Wolfram Language.

The other Innovator Award winner, also from the finance industry, was André Koppel, of André Koppel Software GmbH. A creator and seller of financial insolvency products that use Wolfram Language and CDF, Koppel has long been a terrific supporter of Wolfram and an advocate of our technology.

Another cause for celebration was the 25th anniversary of the partnership between Wolfram and ADDITIVE GmbH. Our German sales partner’s CEO Andreas Heilemann was on hand to wield the knife for cake-slicing duty on this auspicious occasion, of which we hope to share many more with ADDITIVE.

Stay tuned for news of the date and venue for EWTC 2016!

Dates Everywhere in Pi(e)! Some Statistical and Numerological Musings about the Occurrences of Dates in the Digits of Pi
http://blog.wolfram.com/2015/06/23/dates-everywhere-in-pie-some-statistical-and-numerological-musings-about-the-occurrences-of-dates-in-the-digits-of-pi/
Tue, 23 Jun 2015, by Michael Trott

In a recent blog post, Stephen Wolfram discussed the unique position of this year’s Pi Day of the Century and gave various examples of the occurrences of dates in the (decimal) digits of pi. In this post, I’ll look at the statistics of the distribution of all possible dates/birthdays from the last 100 years within the first ten million decimal digits of pi. We will find that 99.98% of all digits occur in a date, and that one finds millions of dates within the first ten million digits of pi.

Here I will concentrate on dates that can be described with a maximum of six digits. This means I’ll be able to uniquely encode all dates between Saturday, March 14, 2015, and Monday, March 15, 1915—a time range of 36,525 days.
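That count is easy to confirm with Python’s standard datetime module (a stand-in for the Wolfram Language date functions used in the post):

```python
from datetime import date

first = date(1915, 3, 15)  # earliest encodable date
last  = date(2015, 3, 14)  # Pi Day of the Century

# Inclusive number of calendar dates in the range.
print((last - first).days + 1)  # 36525
```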

We start with a graphical visualization of the topic at hand to set the mood.

Graphic visualization of pi

Get all dates for the last 100 years

This year’s Pi Day was, like every year, on March 14.

This year's pi day

Since the centurial Pi Day of the twentieth century, 36,525 days had passed.

Number of days between centurial pi days

We generate a list of all the 36,525 dates under consideration.

List of dates under consideration

For later use, I define a function dateNumber that for a given date returns the sequential number of the date, with the first date, Mar 15 1915, numbered 1.

Defining function dateNumber
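The same bookkeeping can be sketched in Python (the function name follows the post; the implementation is mine):

```python
from datetime import date

FIRST_DATE = date(1915, 3, 15)  # numbered 1

def date_number(d: date) -> int:
    """Sequential number of a date, with Mar 15 1915 numbered 1."""
    return (d - FIRST_DATE).days + 1

print(date_number(FIRST_DATE), date_number(date(2015, 3, 14)))  # 1 36525
```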

I allow the months January to September to be written without a leading zero—9 instead of 09 for September—and similarly for days. So, for some dates, multiple digit sequences represent them. The function makeDateTuples generates all tuples of single-digit integers that represent a date. One could use slightly different conventions and minimal changes of the code and always enforce short dates or always enforce zeros. With the optional inclusion of zeros for days and months, I get more possible matches and a richer result, so I will use these in the following. (And, if you prefer a European date format of day-month-year, then some larger adjustments have to be made to the definition of makeDateTuples.)

Using makeDateTuples to generate tuples

Some examples with four, two, and one representation:

Examples of tuples with four, two, and one representation
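In Python, the representation logic can be sketched like this (my reimplementation of the idea behind makeDateTuples, using the month-day-two-digit-year order described above):

```python
from datetime import date

def make_date_tuples(d: date):
    """All digit tuples representing a date, with the leading zero on
    single-digit months and days optional."""
    months = [str(d.month)] + (["%02d" % d.month] if d.month < 10 else [])
    days   = [str(d.day)]   + (["%02d" % d.day]   if d.day   < 10 else [])
    year   = "%02d" % (d.year % 100)
    return [tuple(int(c) for c in m + dd + year)
            for m in months for dd in days]

print(make_date_tuples(date(1936, 1, 9)))   # four representations
print(make_date_tuples(date(1950, 10, 9)))  # two
print(make_date_tuples(date(1995, 12, 25))) # one
```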

The next plot shows which days from the last year are representable with four, five, and six digits. The first nine days of the months January to September just need four or five digits to be represented, and the last days of October, November, and December need six.

Which days from last year are representable with four, five, and six digits

For a fast (constant time), repeated recognition of a tuple as a date, I define two functions dateQ and dateOf. dateOf gives a normalized form of a date digit sequence. We start with generating pairs of tuples and their date interpretations.

Generating pairs of tuples and their data interpretations

Here are some examples.

RandomSample of tuplesAndDates

Most (77,350) tuples can be uniquely interpreted as dates; some (2,700) have two possible date interpretations.

Tuples interpreted as dates

Here are some of the digit sequences with two date interpretations.

Digit sequences with two date interpretations

Here are the two date interpretations of the sequence {1,2,1,5,4} as Jan 21 1954 or as Dec 1 1954 recovered by using the function datesOf.

Two date interpretations of the sequence 1,2,1,5,4

These are the counts for the four-, five-, and six-digit representations of dates.

Counts for the four-, five-, and six-digit representations of dates

And these are the counts for the number of definitions set up for the function datesOf.

Counts for the number of definitions set up for the function datesOf

Find all dates in the digits of pi

For all further calculations, I will use the first ten million decimal digits of pi (later I will see that ten million digits are enough to find any date). We allow for an easy substitution of pi by another constant.

Allowing for an easy substitution of pi by another constant

Instead of using the full digit sequence as a string, I will use the digit sequence split into (overlapping) tuples. Then I can independently and quickly operate onto each tuple. And I index each tuple with the index representing the digit number. For example:

Using the digit sequence split into overlapping tuples

Using the above-defined dateQ and dateOf functions, I can now quickly find all digit sequences that have a date interpretation.

Finding all digit sequences that have a date interpretation
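Here is a compact Python sketch of the whole search (my stand-in for the post’s Wolfram Language code): build a lookup table from digit strings to dates, then slide windows of length four to six over the digits of pi. Only the first 100 digits are hard-coded here, so it finds just the earliest matches:

```python
from datetime import date, timedelta

first, last = date(1915, 3, 15), date(2015, 3, 14)

def representations(d):
    """Digit strings for a date, leading zeros on month/day optional."""
    months = [str(d.month)] + (["%02d" % d.month] if d.month < 10 else [])
    days   = [str(d.day)]   + (["%02d" % d.day]   if d.day   < 10 else [])
    year   = "%02d" % (d.year % 100)
    return [m + dd + year for m in months for dd in days]

# Constant-time recognition: digit string -> list of date interpretations.
dates_of = {}
d = first
while d <= last:
    for rep in representations(d):
        dates_of.setdefault(rep, []).append(d)
    d += timedelta(days=1)

# First 100 decimal digits of pi.
digits = ("1415926535897932384626433832795028841971"
          "6939937510582097494459230781640628620899"
          "86280348253421170679")

# All windows of length 4-6 with a date interpretation (1-based positions).
hits = [(pos + 1, digits[pos:pos + k])
        for k in (4, 5, 6)
        for pos in range(len(digits) - k + 1)
        if digits[pos:pos + k] in dates_of]
print(len(hits), hits[0])  # the very first window, 1415, reads as Jan 4 2015
```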

Here are some of the date interpretations found. Each sublist is of the form {date, startingDigit, digitSequenceRepresentingTheDate}.

Sublist with the form date, startingDigit, digitSequenceRepresentingTheDate

We have found about 8.1 million dates represented as four digits, about 3.8 million dates as five digits, and about 365,000 dates represented as six digits, totaling more than 12 million dates altogether.

Dates represented at four, five, and six digits

Note that I could have used string-processing functions (especially StringPosition) to find the positions of the date sequences. And, of course, I would have obtained the same result.

Using string-processing functions to find the positions of the date sequences

While the use of StringPosition is a good approach to deal with a single date, dealing with all 35,000 sequences would have taken much longer.

Time to deal with 35,000 sequences

We pause a moment and have a look at the counts found for the 4-tuples. Out of the 10,000 possible 4-tuples, the 8,100 that represent dates each appear on average (1/10)⁴ × 10⁷ = 10³ times, based on the randomness of the digits of pi. And I expect a standard deviation of about √1000 ≈ 31.6. Some quick calculations and a plot confirm these numbers.

Counts for the 4-tuples
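The back-of-the-envelope statistics can be checked directly (assuming each of the roughly 10⁷ digit windows is an independent Bernoulli trial, which is a good approximation here):

```python
import math

n = 10**7    # number of (overlapping) 4-digit windows, approximately
p = 10**-4   # probability of hitting one specific 4-tuple
mean = n * p                       # expected count of one 4-tuple
std  = math.sqrt(n * p * (1 - p))  # binomial standard deviation
print(round(mean), round(std, 1))  # 1000 31.6
```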

The histogram of the counts shows the expected bell curve.

Histogram showing the expected bell curve

And the following graphic shows how often each of the 4-tuples that represent dates were found in the ten million decimal digits. We enumerate the 4-tuples by concatenating the digits; as a result, I see “empty” vertical stripes in the region where no 4-tuples are represented by dates.

4-tuples that represent dates were found in the ten million decimal digits

Now I continue to process the found date positions. We group the results into sublists of identical dates.

Grouping the results into sublists of identical dates

Every date does indeed occur in the first 10 million digits, meaning I have 36,525 different dates found. (We will see later that I did not calculate many more digits than needed.)

36,525 different dates found in the first 10 million digits

Here is what a typical member of dateGroups looks like.

What a typical member of a dateGroups look like

Statistics of all dates

Now let’s do some statistics on the found dates. Here is the number of occurrences of each date in the first ten million digits of pi. Interestingly, and perhaps unexpectedly at first, many dates appear hundreds of times. The periodically occurring vertical stripes result from the October-November-December month quarters.

Number of occurrences of each date in the first ten million digits of pi

The mean spacing between the occurrences also clearly shows the early occurrence of the four-digit dates with average spacings below 10,000, the five-digit dates with spacings around 100,000, and the six-digit dates with spacings around one million.

Mean spacing between the occurrences

For easier readability, I format the triples {date, startingPosition, dateDigitSequence} in a customized manner.

Formating triples for easier readability

The most frequent date in the first ten million digits of pi is Aug 6 1939—it occurs 1,362 times.

Most frequent date in the first ten million digits

Now let’s find the least occurring dates in the first ten million digits of pi. These three dates occur only once in the first ten million digits.

Least occurring dates in the first ten million digits

And all of these dates occur only twice in the first ten million digits.

Dates that occur only twice in the first ten million digits

Here is the distribution of the number of the date occurrences. The three peaks corresponding to the six-, five-, and four-digit date representations (from left to right) are clearly distinct. The dates that are represented by 6-tuples each occur only a very few times, while the dates representable by 4-tuples, as already seen above, appear on average about 1,200 times.

Distribution of the number of the date occurrences

We can also accumulate by year and display the date interpretations per year (the smaller values at the beginning and end come from the truncation of the dates to ensure uniqueness). The distribution is nearly uniform.

Display the date interpretations per year

Let’s have a look at the dates with some “neat” date sequences and how often they occur. As the results in dateGroups are sorted by date, I can easily access a given date. When does the date 11-11-11 occur?

Dates with date sequences and how often they occur

And where does the date 1-23-45 occur?

Where does the date 1-23-45 occur

No date starts on its “own” position (meaning there is no example such as January 1, 1945 [1-1-4-5] in position 1145).

No date starts on its "own" position

But one palindromic case exists: March 3, 1985 (3-3-8-5), which occurs at position 5,833, the reversal of its own digit sequence.

One palindromic case exists

A very special date is January 9, 1936: it appears at the position of the 1,936th prime, 16,747.

Let’s see what anniversaries happened on this day in history.

Anniversaries on January 9, 1936

While no date appeared at its “own” position, if I slightly relax this condition and search for all dates whose digit sequences overlap their own positions, I do find some dates.

All dates that overlap with its digits' positions

And at more than 100 positions within the first ten million digits of pi, I find the famous pi starting sequence 3,1,4,5,9 again.

Finding pi again within the first ten million digits

Within the digits of pi I do not just find birthday dates, but also physical constant days, such as the ħ-day (the reduced Planck constant day), which was celebrated as the centurial instance on October 5, 1945.

Finding physical constant days within pi

Here are the positions of the matching date sequences.

Positions of the matching date sequences using ListLogLinearPlot

And here is an attempt to visualize the appearance of all dates. In the date-digit plane, I place a point at the beginning of each date interpretation. We use a logarithmic scale for the digit position, and as a result, the number of points is much larger in the upper part of the graphic.

 Visualizing the appearance of all dates

For the dates that appear early on in the digit sequence, the finite extension of the date over the digits can be visualized too. A date extends over four to six digits in the digit sequence. The next graphic shows all digits of all dates that start within the first 10,000 digits.

All digits of all dates that start within the first 10,000 digits

After coarse-graining, the distribution is quite uniform.

Distribution is uniform using coarse-graining

So far I have taken a date and looked at where this date starts in the digit sequence of pi. Now let’s look from the reverse direction: how many dates intersect at a given digit of pi? To find the total counts of dates for each digit, I loop over the dates and accumulate the counts for each digit.

Finding the total counts of dates for each digit

A maximum of 20 dates occur at a given digit.

A maximum of 20 dates occur at a given digit.

Here are two intervals of 200 digits each. We see that most digits are used in a date interpretation.

Two intervals of 200 digits each

Above, I noted that I found about 12 million dates in the digit sequence under consideration. The digit sequence I used was only ten million digits long, and each date occupies four to six digits, so the dates together use more than 50 million digit positions. It follows that many of the ten million digits must be shared, each used on average five to six times. Only 2,005 of the first ten million digits are not used in any of the date interpretations, meaning that 99.98% of all digits are used for date interpretations (not all as starting positions).

2,005 out of the first ten million digits are not used in any of the date interpretations
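The sharing arithmetic follows from the approximate counts found above (8.1 million four-digit, 3.8 million five-digit, and 365,000 six-digit dates):

```python
# Approximate numbers of date occurrences by representation length (from above).
counts = {4: 8_100_000, 5: 3_800_000, 6: 365_000}

total_dates   = sum(counts.values())                  # about 12.3 million dates
digit_slots   = sum(k * c for k, c in counts.items()) # digit positions used, with multiplicity
avg_per_digit = digit_slots / 10**7                   # average uses of each pi digit
print(total_dates, digit_slots, round(avg_per_digit, 1))  # 12265000 53590000 5.4
```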

And here is the histogram of the distribution of the number of dates present at a certain digit. The back-of-the-envelope estimate of five to six dates per digit is clearly visible.

Histogram of the distribution of the number of dates present at a certain digit

The 2,005 positions that are not used are approximately uniformly distributed among the first ten million digits.

The 2,005 positions that are not used are approximately uniformly distributed

If I display the concrete positions of the non-used digits versus their expected average position, I obtain a random walk–like graph.

Random walk-like graph

So, what are the neighboring digits around the unused digits? There are 162 different five-digit neighborhoods. Looking at them immediately shows why the center digits cannot be part of a date: there are too many zeros before, at, or after the center digit.

Neighboring digits around the unused digits

And the largest unused block of digits is the six-digit run between positions 8,127,088 and 8,127,093.

Largest unused block of digits are the six digits between position 8,127,088 and 8,127,093

At a given digit, dates from various years overlap. The next graphic shows the range from the earliest to the latest year as a function of the digit position.

These are the unused digits together with three left- and three right-neighboring digits.

Unused digits together with three left- and three right-neighboring digits

Because the high coverage may seem unexpected at first, I select a random digit position and show all dates that use this digit.

Random digit position and select all dates that use this digit

And here is a visualization of the overlap of the dates.

Code for visualization of the overlap of the dates
Visualization of the overlap of the dates

The most-used digit is the 1 at position 2,645,274: 20 possible date interpretations meet at it.

Most-used digit is the 1 at position 2,645,274: 20 possible date interpretations meet at it

Here are the digits in its neighborhood and the possible date interpretations.

Digits in its neighborhood and the possible date interpretations

If I plot the years starting at a given digit for a larger number of digits (say the first 10,000), then I see the relatively dense coverage of date interpretations in the digits-date plane.

Plot of years starting at a given digit for a larger amount of digits

Let’s now build a graph of dates that are “connected”. We’ll consider two dates connected if the two dates share a certain digit of the digit sequence (not necessarily the starting digit of a date).

Graph of dates that are connected

Here is the same graph for the first 600 digits, with communities emphasized.

Graph for the first 600 digits with communities emphasized

We continue with calculating the mean distance between two occurrences of the same date.

Calculating the mean distance between two occurrences of the same date

The first occurrences of dates

The first occurrences of dates are the most interesting, so let’s extract these. We will work with two versions: one sorted by the date they represent (the list firstOccurrences), and one sorted by the starting position in the digits of pi (the list firstOccurrencesSortedByOccurrence).

Using firstOccurrences and firstOccurrencesSortedByOccurrence

Here are the possible date interpretations that start within the first 10 digits of pi.

Possible date interpretations that start within the first 10 digits of pi

And here are the other extremes: the dates that appear deepest into the digit expansion.

Dates that appear deepest into the digit expansion

We see that Wed Nov 23 1960 starts only at position 9,982,546 (= 2 × 7 × 713,039)—so by using the first ten million digits, I was a bit lucky to catch it. Here is a quick direct check of this record-setting date.

Direct check of this record-setting date
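A check along these lines might look like the following sketch; the encoding "11231960" (month, day, four-digit year, no separators) is an assumption about how the post encodes dates.

```wolfram
(* a sketch of a direct check; digit encoding of the date is an assumption *)
digits = StringDrop[ToString[N[Pi, 10^7]], 2];  (* drop the leading "3." *)
StringPosition[digits, "11231960", 1]           (* position of the first match *)
```

If the encoding matches the post's, the first match should begin near position 9,982,546.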

So, who are the lucky (well-known) people associated with this number through their birthday?

People associated with November 23 1960 as their birthday

And what were the Moon phases on the top dozen out-pi-landish dates?

Moon phases on the top dozen out-pi-landish dates

And while Wed Nov 23 1960 is furthest out in the decimal digit sequence, the last prime date in the list is Oct 22 1995.

The last prime date

In general, less than 10% of all first date appearances are prime.

Percentage of first date appearances being prime
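It is not fully clear from the caption alone what "prime" refers to; a plausible reading is that the starting position of a date's first occurrence is tested for primality. Here is a sketch under that assumption, using a hypothetical list of positions drawn from numbers mentioned in the text.

```wolfram
(* a sketch; assumes "prime" refers to first-occurrence positions (an assumption),
   and the position list here is purely illustrative *)
positions = {2645274, 9982546, 8127088, 239083};
N[Count[positions, _?PrimeQ]/Length[positions]]
```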

Often one maps the digits of pi to directions in the plane and forms a random walk. We do the same based on the date differences between consecutive first appearances of dates, and obtain typical-looking 2D random walk images.

Date differences between consecutive first appearances of dates

Here are the first-occurring date positions for the last few years. The bursts in October, November, and December of each year are caused by the need for five or six consecutive digits, while January to September can be encoded with fewer digits if I skip the optional zeros.

First-occurring date positions for the last few years

If I include all dates, I get, of course, a much denser filled graphic.

All date positions for the last few years

A logarithmic vertical axis shows that most dates occur between the thousandth and millionth digits.

Logarithmic vertical axis shows that most dates occur between the thousandth and millionth digits

To get a more intuitive understanding of overall uniformity and local randomness in the digit sequence (and as a result in the dates), I make a Voronoi tessellation of the day-digit plane based on points at the first occurrence of a date. The decreasing density for increasing digits results from the fact that I only take first-date occurrences into account.

Voronoi tessellation of the day-digit plane based on points at the first occurrence of a date

Easter Sunday is a good date to visualize, as it falls on a different date each year.

Visualizing Easter Sunday dates

The mean first-occurrence position depends, of course, on the number of digits needed to encode a date.

Finding mean first occurrence

The mean occurrence is at 239,083, but due to the outliers at a few million digits, the standard deviation is much larger.

The mean occurrence is at 239,083

Here are the first occurrences of the “nice” dates that are formed by repetition of a single digit.

First occurrences of the nice dates that are formed by repetition of a single digit

The detailed distribution of the first-occurrence positions has its largest density within the first few tens of thousands of digits.

Detailed distribution of the number of occurrences of first dates

A logarithmic axis shows the distribution much better, but because of the increasing bin sizes, the maximum has to be interpreted with care.

Logarithmic axis showing the distribution

The last distribution is mostly a weighted superposition of the first occurrences of four-, five-, and six-digit sequences.

The last distribution is mostly a weighted superposition of the first occurrences of four-, five-, and six-digit sequences

And here is the cumulative distribution of the dates as a function of the digits’ positions. We see that the first 1% of the ten million digits already covers 60% of all dates.

Cumulative distribution of the dates as a function of the digits' positions

Slightly more dates start at even positions than at odd positions.

More dates start at even positions than at odd positions

We could do the same with mod 3, mod 4, … . The left image shows the deviation of each congruence class from its average value, and the right image shows the higher congruences, all considered again mod 2.

Deviation of each congruence class from its average value, and higher congruences

The actual number of first occurrences per year fluctuates around the mean value.

The number of first occurrences per year fluctuates around the mean value

The mean number of first-date interpretations sorted by month clearly shows the difference between the one-digit months and the two-digit months.

The mean number of first-date interpretations sorted by month

The mean number by day of the month (ranging from 1 to 31) is, on average, a slowly increasing function.

The mean number by day of the month

Finally, here are the mean occurrences by weekday. Most first date occurrences happen for dates that are Wednesdays.

The mean occurrences by weekday

Above I observed that most numbers participate in a possible date interpretation. Only relatively few numbers participate in a first-occurring date interpretation: 121,470.

Few numbers participate in a first-occurring date interpretation

Some of the position sequences overlap anyway, and I can form network chains of the dates with overlapping digit sequences.

Network chains of the dates with overlapping digit sequences

The next graphic shows the increasing gap sizes between consecutive dates.

Increasing gap sizes between consecutive dates

Distribution of the gap sizes:

Distribution of the gap sizes

Here are pairs of consecutively occurring date-interpretations that have the largest gap between them. The larger gaps were clearly visible in the penultimate graphic.

Pairs of consecutively occurring date-interpretations that have the largest gap between them

Dates in other expansions and in other constants

Now, the very special dates are the ones where the concatenated continued fraction (cf) expansion position agrees with the decimal expansion position. By concatenated continued fraction expansion, I mean the digits on the left at each level of the following continued fraction.

Concatenated continued fraction expansion

This gives the following cf-pi string:

Cf-pi string
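The concatenated continued-fraction string can be sketched by joining the terms of pi's continued fraction (the exact number of terms used in the post is unknown):

```wolfram
(* join the left-hand terms of pi's continued fraction into one digit string *)
cfString = StringJoin[ToString /@ ContinuedFraction[Pi, 1000]];
StringTake[cfString, 25]
```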

And, interestingly, there is just one such date.

One date in cf-pi string

None of the calculations carried out so far were special to the digits in pi. The digits of any other irrational numbers (or even sufficiently long rational numbers) contain date interpretations. Running some overnight searches, it is straightforward to find many numeric expressions that contain the dates of this year (2015). Here they are put together in an interactive demonstration.

We now come to the end of our musings. As a last example, let’s interpret digit positions as seconds after this year’s pi-time at March 14 9:26:53. How long would I have to wait before seeing the digit sequence 3·1·4·1·5 in the decimal expansion of other constants? Can one find a (small) expression such that 3·1·4·1·5 does not occur in the first million digits? (Most elements of the following list ξs are random expressions written down directly; the last elements were found in a search for expressions that have the digit sequence 3·1·4·1·5 as far out as possible.)

Digit positions as seconds after this year's pi-time

Here are two rational numbers whose decimal expansions contain the digit sequence:

Two rational numbers whose decimal expansions contain the digit sequence

And here are two integers with the starting digit sequence of pi.

Two integers with the starting digit sequence of pi

Using the neat new function TimelinePlot that Brett Champion described in his last blog post, I can easily show how long I would have to wait.

Using TimelinePlot with pi

We encourage readers to explore the dates in the digits of pi further, or to replace pi with another constant (for instance, Euler’s number E, to justify the title of this post), and maybe even replace 10 by another base. The overall, qualitative structures will be the same for almost all irrational numbers. (For a change, try ChampernowneNumber[10].) Will ten million digits be enough to find every date in, say, E (where is October 21, 2014?)? Which special dates are hidden in other constants? These and many more things are left to explore.

Download this post as a Computable Document Format (CDF) file.

http://blog.wolfram.com/2015/06/23/dates-everywhere-in-pie-some-statistical-and-numerological-musings-about-the-occurrences-of-dates-in-the-digits-of-pi/feed/ 1
Embrace the Maker Movement with the Raspberry Pi 2 http://blog.wolfram.com/2015/06/18/embrace-the-maker-movement-with-the-raspberry-pi-2/ http://blog.wolfram.com/2015/06/18/embrace-the-maker-movement-with-the-raspberry-pi-2/#comments Thu, 18 Jun 2015 19:07:23 +0000 Bernat Espigulé-Pons http://blog.internal.wolfram.com/?p=26461 “All of us are makers. We’re born makers. We have this ability to make things, to grasp things with our hands. We use words like ‘grasp’ metaphorically to also think about understanding things. We don’t just live, but we make. We create things.”
—Dale Dougherty

I joined the maker movement last year, first by making simple things like a home alarm system, then by becoming a mentor in local hackathons and founding a Wolfram Meetup group in Barcelona. There is likely an open community of makers that you can join close to where you live; if not, the virtual community is open to everyone. So what are you waiting for? With the Raspberry Pi 2 combined with the Wolfram Language, you really have an amazing tool set you can use to make, tinker, and explore.

Raspberry Pi 2 and Wolfram Technologies

If there was one general complaint about the Raspberry Pi, it was about its overall performance when running desktop applications like Mathematica. The Raspberry Pi Foundation addressed this performance issue early this year by releasing the Raspberry Pi 2 with a quad-core processor and 1 GB of RAM, which has greatly improved the experience of interacting with the device via the Wolfram Language user interface.

Here are 10 different ways to write a “Hello, World!” program for your Pi.

1) Enter a string:

Hello, World! string

2) Create a panel:

Hello, World! panel

3) Post “Hello, World!” in its own window:

Hello, World! in its own window

4) Create a button that prints “Hello, World!”:

Button that prints Hello, World!
Hello, World!

5) Make your Raspberry Pi speak “Hello, World!”:

Speak Hello, World!
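As hedged sketches (items 4 and 5 appear above only as screenshots, so these are reconstructions rather than the post's exact code):

```wolfram
(* item 4: a button that prints the greeting *)
Button["Say hello", Print["Hello, World!"]]

(* item 5: speak the greeting through the Pi's audio output *)
Speak["Hello, World!"]
```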

6) Deploy “Hello, World!” to the Wolfram Cloud:

Deploy Hello, World! in the Wolfram Cloud

7) Send a “Hello, World!” tweet:

Send Hello, World Tweet

8) Display “Hello!” over the world map and submit it to Wolfram Tweet-a-Program:

Hello, World! Tweet-a-Program

9) Program your Pi to say “Hello, World” in Morse code by blinking an LED:

Hello, World! in Morse code
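A sketch of how the Morse blinking might be done (the GPIO pin number and timings are assumptions, and this is not the post's original code):

```wolfram
(* blink "HELLO WORLD" in Morse on GPIO pin 17 (pin choice is an assumption) *)
morse = <|"H" -> "....", "E" -> ".", "L" -> ".-..", "O" -> "---",
          "W" -> ".--", "R" -> ".-.", "D" -> "-.."|>;
blink[c_] := (DeviceWrite["GPIO", 17 -> 1];
   Pause[If[c === ".", 0.2, 0.6]];          (* dot vs dash duration *)
   DeviceWrite["GPIO", 17 -> 0]; Pause[0.2])
Scan[blink, Characters[StringJoin[morse /@ Characters["HELLOWORLD"]]]]
```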

Notice that the GPIO interface requires root privilege to control the LED, so you must start Mathematica as root from the Raspberry Pi terminal by typing sudo mathematica in the command line.

Morse code video input for Hello, World

10) Apply sound to the “Hello, World” Morse code:

Applying sound to Hello World morse code
Wolfram Language morse player

This list could go on and on—it’s limited only by your imagination. If you want to send more “Hello, World” Morse code, you can make an optical telegraph. The Community post Raspberry Pi goes to school, by Adriana O’Brien, shows you how.

Adriana's Raspberry Pi setup
This image was created with Fritzing.

One of the most useful things about using the Wolfram Language on the Pi is that it works seamlessly with the new Wolfram Data Drop open service. This allows you to make an activity tracker in just a few minutes. For example, using Data Drop and a PIR (Passive InfraRed) motion sensor, I kept track of all human movements in my home hallway for several months.

Raspberry Pi connected to sensor
This image was created with Fritzing.

Every 20 minutes, the total number of counts was added to a databin, so I could monitor my hallway in real time from anywhere with Wolfram|Alpha. And if I wanted to, I could also analyze the data and create advanced visualizations like in this DateListPlot that distinguishes business days from weekends:

Using databin
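The Data Drop workflow might be sketched as follows; the databin ID is hypothetical, and the field name "motionCount" is an assumption.

```wolfram
(* a sketch: add one 20-minute motion count to a databin and plot the history *)
bin = Databin["hypothetical-bin-id"];
DatabinAdd[bin, <|"motionCount" -> 12|>];
DateListPlot[bin]
```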

The Wolfram Data Drop also accepts images from the Raspberry Pi camera module, so you can easily make a remote motion trigger with a PIR sensor.

Raspberry Pi with PIR sensor

Or you can take several snapshots and make a time lapse, like in this tutorial on turning my animated plant into a moving animal:

Plant animation

The Wolfram Language has all sorts of image processing algorithms built in. But for some applications, the image that comes out of DeviceRead["RaspiCam"] is just too small. To get the most out of your 5 MP camera module, use Import with the following specifications:

Using Import to enlarge a RaspiCam photo
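The original code is shown only as an image; one plausible reconstruction (the raspistill flags here are assumptions) is:

```wolfram
(* grab a full-resolution 2592x1944 frame directly from the camera utility *)
img = Import["!raspistill -w 2592 -h 1944 -t 1 -o -", "JPG"];
ImageDimensions[img]
```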

Yes, this is the view from my office window. There is a lot of detail that can be processed in many different ways:

Processing details in different ways

The Wolfram Language on Raspberry Pi 2 is also great for rapid prototyping and 3D printing. It knows how to import and export hundreds of data formats and subformats. For example, here’s how to turn the skeletal polyhedron (specifically, a rhombicuboctahedron) drawn by Leonardo da Vinci into an object file that can be 3D printed:

Leonardo Da Vinci rhombicuboctahedron sketch into 3D printed
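A minimal sketch of the export step (using the solid rhombicuboctahedron from PolyhedronData; turning it into the skeletal, open-frame version Leonardo drew would need additional geometry work):

```wolfram
(* export the polyhedron to STL for 3D printing *)
Export["rhombicuboctahedron.stl", PolyhedronData["SmallRhombicuboctahedron"]]
```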

Finally, let me invite you to join Wolfram Community and show off your own Raspberry Pi projects, discover new ideas to use as starting points in your future creations, or take advantage of the many helpful tutorials that have been posted by fellow users.

Download this post as a Computable Document Format (CDF) file.

Presenting the SmartCooling Library for SystemModeler http://blog.wolfram.com/2015/06/09/presenting-the-smartcooling-library-for-systemmodeler/ http://blog.wolfram.com/2015/06/09/presenting-the-smartcooling-library-for-systemmodeler/#comments Tue, 09 Jun 2015 14:51:13 +0000 Anneli Mossberg http://blog.internal.wolfram.com/?p=23772 The SystemModeler Library Store, launched with the release of Wolfram SystemModeler 4, is continually growing with free and purchasable libraries developed by both Wolfram and third parties. One of our commercial newcomers is SmartCooling, a Modelica library developed by the Austrian Institute of Technology (AIT) that is used for modeling and simulating cooling circuits. When I was asked to present this library on our blog, my first thought was, “Who better to demonstrate the ideas of SmartCooling than the people who actually developed it?” So I asked Thomas Bäuml, one of the creators of SmartCooling, to help answer some of my questions regarding the principles behind the library and its applications.

SystemModeler and SmartCooling array

Thomas, could you please explain what SmartCooling is and what it can be used for?

The SmartCooling library was developed for the simulation of cooling circuits with mechanically and electrically operated auxiliaries. Like in a real workshop—but in a virtual environment—the library offers all the components you need to build your own cooling applications for testing, studying, or performing comprehensive experiments.

It contains a variety of components, such as cooling fans, heat exchangers, and valves, that can be used to create cooling circuits of basically any degree of complexity. But like in real life, selecting which components to use is crucial, and you have to think about how they interact with each other, and what their physical properties are. Of course, the physical data that you put into your models is also very important. Usually, data for SmartCooling models can be taken directly from ready-to-use specification sheets—for example, performance, temperature ranges, operating conditions, speed, etc.—or it is gained from measurements of a real cooling system. The models of the SmartCooling library can be parametrized by entering this data in input fields.

SmartCooling library can be parametrized by entering this data in input fields

With SmartCooling, it becomes a lot faster and more efficient to design, simulate, and dimension cooling concepts, for example for automotive applications.

An example of a typical automotive application is to use SmartCooling to investigate new cooling concepts in hybrid electric vehicles (HEV) and their impact on energy consumption. By using modeling and simulation, it can be shown that energy can be saved when the water pump of an internal combustion engine (ICE) is replaced by an electrically operated one. This is because a mechanically operated water pump (mechanically powered by the ICE) is tied to the speed of the ICE, and does not work in its optimal operation area. With an electrically operated water pump, it is possible to control the speed of the water pump independently of the speed of the ICE. This results in better cooling of the ICE and an increase in efficiency, since the electric water pump can be operated in its optimum operating area. The impact of saving energy following this approach was the subject of the scientific paper “Optimization of a Cooling Circuit with a Parameterized Water Pump Model” (5th International Modelica Conference, Vienna, Austria).

Also, the SmartCooling library lets you choose between different levels of abstraction in its applications. This means that you can choose to use simplified models or more detailed models where the physical description is more extensive. These usually contain additional parameters and boundary conditions. Being able to choose a level of abstraction that fits your purpose, I think, is a great advantage. Simplified models help to save computing time, for example by using scalability (being able to transform small structures to large structures by using scaling factors), whereas detailed models allow you to more deeply investigate system behavior and phenomena. In contrast to scaling, each model is then considered individually.

What was your motivation for developing this library?

The reason why the SmartCooling library was developed came from practice. At our business unit at AIT, the unit for Electric Drive Technologies (EDT), the focus of our work is on automotive applications. We especially target alternative vehicle concepts, like hybrid vehicles, pure electric vehicles, etc.

When you model an entire vehicle, you also have to take thermal aspects, such as thermal management and cooling, into account. The Modelica Standard Library (MSL) offers a lot of basic models and tools to build up applications for the automotive section, but sometimes it is rather unclear which components from which sublibrary of the MSL to use, and if they are appropriate for your particular application. This makes working in this domain difficult. As a result, we developed the SmartCooling library to allow us to model thermodynamics in an easy-to-use manner that is focused on automotive applications.

Could you give our readers an example of a great SmartCooling application?

Yes, of course!

A very good and realistic use case is that you need to evaluate a model of an electrically operated water pump. Let’s say it’s for the automotive application mentioned earlier: the mechanically operated water pump of the cooling circuit in a conventional ICE-driven car is to be replaced by an electrically operated one.

The evaluation of models is necessary in order to get realistic simulation results. That also means that the models need to be well parametrized with realistic data. Much of the physical data can be taken from specification sheets, or gained from easily accessible measurement data, but you often find yourself in the situation where the determination of some parameter is difficult, and more detailed measurements have to be done. In this example, a real test bench for the electrically operated water pump is set up.

Water pump

To measure the pressure difference and flow, the power consumption, and the hydraulic efficiency, sensors have been set up in the test bench. It wouldn’t be possible to do that in the real ICE system in the car because the space there is restricted. Another important reason for using a test bench is that the measurements are not restricted to just one characteristic curve, which would be the case in the real system due to the cooling circuit of the car.

Below is a model of the test bench circuit in SystemModeler, modeled with components from the SmartCooling library. The valve component creates a pressure drop in the circuit. While the water pump is driven at a certain constant speed, equal to that of the test bench, the friction losses can be adjusted by the valve. The circuit is investigated for different applied rotational speeds, ranging from 2,000 to 7,000 rpm.

Model of test bench built with SmartCooling components

The model is easily built with drag and drop, using the components from the SmartCooling library. The library itself functions much like a virtual lab area providing the right tools and equipment. The figure below shows exactly which components were used in the test bench model:

Structure of SmartCooling library

The remaining components, such as the electric machine and the sensors, can be found in the Modelica Standard Library:

Other MSL components used for the example

When the model has been fully assembled, the water pump, the pipeline systems, and the electric machine are all parametrized with data obtained from the laboratory test bench.

A very good way of evaluating the test bench model is by using the Wolfram Language. The Wolfram Language makes it easy to run parameter sweeps of the geometrical and mechanical parameters in order to visualize the simulation results as a series of curves. These curves can then be compared with the measured data from the real test bench. Being able to run parameter sweeps in this manner makes the evaluation and validation process a lot simpler, and it leads to a better understanding of how certain values affect the behavior of the model.

Here is a parametric plot of the pressure increase (dp) over the volume flow (Vflow) of the water pump. The plot was generated with WSMLink by varying the water pump speed from 2,000 to 7,000 rpm.

Characteristic curves of the water pump

So in this test bench example, we used the SmartCooling library to investigate state-of-the-art approaches and innovative system design for cooling architectures. The findings from the SmartCooling model supported the theory that the cooling circuit for a conventional ICE-driven car can be improved if the mechanically operated water pump is replaced by an electrically operated one. This is a great advantage, since a more efficient cooling of the ICE helps to save both energy (less fuel consumption) and money.

The validity of the simulation results, and further details on the evaluation process, are covered in the paper “Optimization of a Cooling Circuit with a Parameterized Water Pump Model.”

Learn more

I’d like to thank Thomas for providing us with this fascinating demonstration of the SmartCooling library and its applications. If you, or perhaps your company, would like to try modeling cooling circuits with SmartCooling, it is available for purchase in the SystemModeler Library Store. From the website you can also download a free trial of SystemModeler. For more SmartCooling examples, check out Battery Stack: Modeling a Cooling Circuit and Cylinder Cooling in our collection of SystemModeler industry examples, or visit our online Documentation Center, which hosts the full SmartCooling library documentation.

Throwing the Hackathon Gauntlet with Some Friendly Team Coding http://blog.wolfram.com/2015/06/02/throwing-the-hackathon-gauntlet-with-some-friendly-team-coding/ http://blog.wolfram.com/2015/06/02/throwing-the-hackathon-gauntlet-with-some-friendly-team-coding/#comments Tue, 02 Jun 2015 18:11:23 +0000 Jenna Giuffrida http://blog.internal.wolfram.com/?p=26351 It’s no secret that Wolfram loves hackathons, or that our technology is ideally suited to the fast-paced, high-pressure environment of these events. We’ve supported and/or participated in HackIllinois, MHacks, LAHacks, and many other hackathons. Given how much fun those have been (and just because we can), we decided to host a hackathon for Wolfram staff, pitting our talented and driven developers against one another to see what kind of out-of-the-box projects they could create with our technologies. In truth, the spirit of camaraderie and collaboration that is central to Wolfram could not be set aside, and the final projects were the result of shared ideas and teamwork.


The rules were simple. Each person had to work on a project outside of their role at Wolfram, the project had to be a not-for-profit hack that fit the theme “Greater Good,” and it could only be completed using technologies available to the public.

The hackathon started at noon and ended at 11pm with a science fair-style showcase of the completed hacks. Some of the completed submissions included:

Poisonous or Not?
This hack created both an iPhone Cloud app and a website using Wolfram Programming Cloud, PHP, JavaScript, CSS, and HTML, specifically highlighting our new ImageIdentify functionality. This project allows a user to upload a picture of a plant and get a result (with probabilities) on whether that plant is poisonous.

Poisonous or not

Data Drop and Node-RED
During a very recent trip to RoboUniverse, one developer was made aware of an open-source visual tool for the Internet of Things called Node-RED. He wanted to explore this tool and offer users a way to connect Wolfram technologies to their projects. So with this hack, he created a drag-and-drop component to allow users of Node-RED to connect to the Wolfram Data Drop.

Data Drop and Node-RED

3D Dexterity Assist
The Dexterity Assist is a 3D-printed object created to aid children using horseback riding as therapy. These children often need assistance with dexterity, and gripping the thin reins of a horse can be difficult for them. The team modeled, tweaked, and 3D-printed a cylindrical object inside which a horse’s reins could be inserted. Mathematica was used to create the 3D model, specifically ParametricPlot3D and DiscretizeRegion, and a MakerBot printed the objects.

3D Printer

Additional participants worked on tools and packages to increase productivity or make an impact for our internal development teams, several of which will remain ongoing explorations. Overall, the hackathon was a huge success for team building and produced some truly awesome results in a short period of time. We can’t wait to plan another!

Know of an upcoming hackathon we should participate in or that you want to use our technology for? Check out our hackathon page and let us know!

John F. Nash, Jr., In Memoriam http://blog.wolfram.com/2015/05/29/john-f-nash-jr-in-memoriam/ http://blog.wolfram.com/2015/05/29/john-f-nash-jr-in-memoriam/#comments Fri, 29 May 2015 20:30:11 +0000 Wolfram Blog Team http://blog.internal.wolfram.com/?p=26358 This past week, on May 23, 2015, the much loved and respected John F. Nash, Jr., along with his wife, Alicia Nash, passed away in a tragic car accident while returning home from his receipt of the 2015 Abel Prize for his work in partial differential equations. The Nobel winner and his wife were the subject of the 2001 Academy Award-winning film A Beautiful Mind. Nash’s most famous contribution to mathematics and economics was in the field of game theory, which was the focus of the film and has enabled others to build on his work.

Nash’s long career as a mathematician was marked by both brilliant achievements and terrible struggles with mental illness. Despite his battle with schizophrenia, Nash inspired generations of mathematicians and garnered a stunning array of awards, including the 1994 Nobel Prize in economic sciences, the American Mathematical Society’s 1999 Leroy P. Steele Prize for Seminal Contribution to Research, and the 1978 John von Neumann Theory Prize. We were personally honored in 2003 when Nash presented his work with Mathematica at the International Mathematica Symposium in London.

Nash presenting on Mathematica

We are saddened to see the loss of a valuable member of the mathematics community, but will always remember John F. Nash, Jr. for his remarkable life and career. His legacy will remain to encourage us to continue to break new ground, push boundaries, and explore mathematics boldly and fearlessly.

New in the Wolfram Language: AnglePath http://blog.wolfram.com/2015/05/21/new-in-the-wolfram-language-anglepath/ http://blog.wolfram.com/2015/05/21/new-in-the-wolfram-language-anglepath/#comments Thu, 21 May 2015 19:40:47 +0000 José Martín-García http://blog.internal.wolfram.com/?p=26257 A brilliant aspect of the Wolfram Language is that not only you can do virtually anything with it, you can also do whatever you want in many different ways. You can choose the method you prefer, or even better, try several methods to understand your problem from different perspectives.

For example, when drawing a graphic, we usually specify the coordinates of its points or elements. But sometimes it’s simpler to express the graphic as a collection of relative displacements: move a distance r in a direction forming an angle θ with respect to the direction of the segment constructed in the previous step. This is known as turtle graphics in computer graphics, and is basically what the new function AnglePath does. If all steps have the same length, use AnglePath[{θ1,θ2,...}] to specify the angles. If each step has a different length, use AnglePath[{{r1,θ1},{r2,θ2}, ...}] to give the pairs {length, angle}. That’s it. Let’s see some results.

Turn 60 degrees to the left with respect to the previous direction six times. You get a hexagon:

Using AnglePath to create a hexagon
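The code is probably close to this one-liner (a sketch; the post's original appears only as an image):

```wolfram
(* six unit steps, each turning 60 degrees to the left *)
Graphics[Line[AnglePath[ConstantArray[60 Degree, 6]]]]
```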

If the angle is 135 degrees and you repeat eight times, then this is the result. Note that 8 * 135° = 1080°, so we go around the center three times:

Repeating 135 degree angle 8 times

Suppose that we again keep turning the same positive angle θ, but we increase the lengths of the steps linearly, in increments dr, from 0 to 1:

Code for creating spirals

Then we get these curves that spiral outward, producing nice outputs:

Spirals created using AnglePath

If we choose the angles randomly, we get random walks on the plane:

Choosing angles randomly to create random walks on a plane
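A plausible reconstruction of the random walk (step count and seed are assumptions):

```wolfram
SeedRandom[1];  (* for reproducibility; the post's seed, if any, is unknown *)
Graphics[Line[AnglePath[RandomReal[{-Pi, Pi}, 2000]]]]
```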

Now let’s try to combine multiple AnglePath lines. Suppose that at each step we choose randomly between two possible angles and that the lengths obey a power law, so they get smaller in each iteration:

Input code for combining multiple AnlgePath lines

The result does not look very interesting yet:

Graphic of combining multiple lines using AnglePath

But if we repeat the experiment 10 times, then we start seeing some structure:

Repeating combining lines ten times

We can construct all different lists of 10 choices of the two angles, using Tuples (there are 2^10=1024 possible lists). Replacing Line with BSplineCurve produces curved lines instead of straight segments. The result is a nice self-similar structure:

Using Tuples and BSplineCurve to create curved lines instead of straight
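A sketch of the same construction in Python, with `itertools.product` playing the role of Tuples (the angle pair and the power-law decay rate here are illustrative assumptions, not the values in the pictured code):

```python
import math
from itertools import product

def angle_path(steps):
    # Turtle walk: turn by each angle (radians), then advance by each length.
    x, y, heading = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for length, turn in steps:
        heading += turn
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        pts.append((x, y))
    return pts

# Two candidate turn angles (hypothetical values); step lengths shrink
# by a power law.  One curve per length-10 tuple of choices, so there
# are 2**10 = 1024 curves in all.
angles = (math.pi / 4, -math.pi / 3)
curves = [angle_path([(0.8 ** k, a) for k, a in enumerate(choice)])
          for choice in product(angles, repeat=10)]
```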

AnglePath allows us to construct fractal structures easily, with very compact code. In fact, the code fits in a tweet! Here are two examples derived from Wolfram Tweet-a-Program.

The Koch curve:

Koch Curve from Wolfram Tweet-a-Program
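The tweeted Wolfram code isn't reproduced in the text; as an illustration, the Koch curve's turn sequence can be generated from the classic L-system rule F → F+F--F+F and fed to the same hypothetical turtle-walk helper:

```python
import math

def angle_path(steps):
    # Turtle walk: turn by each angle (radians), then advance by each length.
    x, y, heading = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for length, turn in steps:
        heading += turn
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        pts.append((x, y))
    return pts

def koch_turns(n):
    """Turn sequence (degrees) between the 4**n unit segments of the
    order-n Koch curve, from the L-system rule F -> F+F--F+F."""
    if n == 0:
        return []
    t = koch_turns(n - 1)
    return t + [60] + t + [-120] + t + [60] + t

n = 3
turns = [0] + koch_turns(n)            # no turn before the first segment
curve = angle_path([(1.0, math.radians(t)) for t in turns])
# The order-n curve runs from (0, 0) to (3**n, 0).
```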


Curlicues:

Curlicues from Wolfram Tweet-a-Program

These curious spirals are approximate Cornu spirals. With larger steps, they develop interesting substructure:

Cornu spirals
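The pictured code isn't reproduced either; one standard way to approximate a Cornu spiral (again using the illustrative Python helper, with assumed step size and turn increment) is to let the turn angle grow linearly with the step index, so the heading grows quadratically, mimicking the Fresnel-integral parameterization:

```python
import math

def angle_path(steps):
    # Turtle walk: turn by each angle (radians), then advance by each length.
    x, y, heading = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for length, turn in steps:
        heading += turn
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        pts.append((x, y))
    return pts

# Turn angle grows linearly with the step index, so the cumulative
# heading grows quadratically: the discrete analogue of a Cornu (Euler)
# spiral, whose tangent angle is proportional to arc length squared.
pts = angle_path([(0.1, 0.01 * k) for k in range(2000)])
```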

This was a quick introduction to how useful and fun the function AnglePath can be. AnglePath is supported in Version 10.1 of the Wolfram Language and Mathematica, and is rolling out soon in all other Wolfram products. Start using it now, and tweet your results through Wolfram Tweet-a-Program!

Download this post as a Computable Document Format (CDF) file.

Biggest Little Polyhedron—New Solutions in Combinatorial Geometry
Wed, 20 May 2015 19:26:44 +0000 Ed Pegg Jr
http://blog.wolfram.com/2015/05/20/biggest-little-polyhedronnew-solutions-in-combinatorial-geometry/

In many areas of mathematics, 1 is the answer. Squaring a number greater than 1 gives a larger number; squaring a positive number less than 1 gives a smaller one. Sometimes whether something counts as "big" comes down to whether its largest dimension exceeds 1. For instance, with sides of length 13,800 km, Saturn's hexagon would be considered big. A "little polygon" is defined as a polygon whose maximum distance between vertices is 1. In 1975, Ron Graham found the biggest little hexagon, which has more area than the regular hexagon, as shown below. The red diagonals have length 1; all other diagonals (not drawn) are shorter than 1.

Regular hexagon, biggest little hexagon, biggest little octagon showing lengths of 1

I’ve often wondered what the biggest little polyhedra would look like. Mathematica 10 introduced Volume[ConvexHullMesh[points]], so I thought I could solve the problem by picking points at random. Below is some code for picking, calculating, and showing a random little polyhedron. If the code is run a thousand times, one of the solutions will be better than the others. Here, I ran it three times. One of these three solutions is (probably) better than the other two.

Random solutions for picking points on a polyhedron

With randomly selected points, images like the following emerge from the better solutions. I posted these on Wolfram Community under the discussion Biggest Little Polyhedra, and got some useful comments from Robin Houston and Todd Rowland. I thought of using results from “Visualizing the Thomson Problem” as starting solutions. In the Thomson problem, electrons repel each other on a sphere. Twelve repelling points move to the vertices of an icosahedron, which is inefficient for BLP, since all the longest distances pass through the center of the bounding sphere, just as in the regular hexagon in the polygon case. I modified the Thomson code so that points repelled both each other and their polar opposites, and that gave good starting values.

Starting values using modified Thomson code

Four points need a regular tetrahedron, with volume √2/12 ≈ 0.117851.
Five points need a unit equilateral triangle with a perpendicular unit line, with volume √3/12 ≈ 0.144338; solved in 1976 [1].

Regular tetrahedron and equilateral triangle points
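Both volumes are easy to verify numerically. The Python sketch below uses explicit coordinates; for the 5-point case it assumes one placement consistent with the description, a unit triangle plus a unit perpendicular segment through its centroid:

```python
import math

def tet_volume(a, b, c, d):
    """Unsigned volume of tetrahedron a-b-c-d via the scalar triple product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6

# Regular tetrahedron with unit edges.
tet = [(0, 0, 0), (1, 0, 0), (0.5, math.sqrt(3) / 2, 0),
       (0.5, math.sqrt(3) / 6, math.sqrt(2 / 3))]
v4 = tet_volume(*tet)

# Unit equilateral triangle plus a perpendicular unit segment through
# its centroid (assumed placement); the hull is two tetrahedra sharing
# the triangle.
tri = [(0, 0, 0), (1, 0, 0), (0.5, math.sqrt(3) / 2, 0)]
top = (0.5, math.sqrt(3) / 6, 0.5)
bot = (0.5, math.sqrt(3) / 6, -0.5)
v5 = tet_volume(*tri, top) + tet_volume(*tri, bot)
```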

I’ll use the name 6-BLP for the biggest little polyhedron on 6 points. In 2003, the volume for 6-BLP was solved to four decimals of accuracy [2, 3]. Graphics for 6-BLP and 7-BLP are below, with red lines for the unit diagonals.

6-BLP and 7-BLP

To find these on my own, I first picked the best solutions out of a thousand tries, then used simulated annealing to improve them. For each of the points in a good solution, I introduced a tiny bit of randomness to try to find a better solution, thousands of times. Then I introduced a tinier bit of randomness, over and over again. Some of these runs seemed to converge to a symmetric solution. For example, with seven points, the best solution seemed to drift gradually toward this polyhedron, with a value of r of about one half, where r is the relative size of the upper triangle △456.

Symmetrical solution for random polyhedron with seven points
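The annealing code itself isn't shown in text form. Here is a minimal greedy Python sketch of the same keep-the-best, shrink-the-randomness idea, applied to the 4-point case, where the answer (the regular tetrahedron, volume √2/12 ≈ 0.117851) is known. All step counts and decay factors are illustrative assumptions:

```python
import math, random

def tet_volume(p):
    # Unsigned volume of the tetrahedron on four points, via |det| / 6.
    a, b, c, d = p
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6

def rescale(p):
    # Scale so the maximum pairwise distance (the diameter) is exactly 1,
    # making the configuration "little".
    d = max(math.dist(q, r) for q in p for r in p)
    return [[x / d for x in pt] for pt in p]

random.seed(1)
best = rescale([[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)])
best_vol = tet_volume(best)
sigma = 0.1                       # perturbation size, shrunk in stages
for _ in range(30):
    for _ in range(2000):         # many random tweaks at this scale
        cand = rescale([[x + random.gauss(0, sigma) for x in pt]
                        for pt in best])
        vol = tet_volume(cand)
        if vol > best_vol:        # greedy: keep only improvements
            best, best_vol = cand, vol
    sigma *= 0.8
```

After the loop, `best_vol` should sit close to (and never above) √2/12, the known optimum for four points.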

The exact volume can be determined from the tetrahedra defined by the points {{2,3,4,7}, {2,4,6,7}, {5,4,7,6}}, with the volumes of the first two tripled for symmetry. Look at the signed volumes of the tetrahedra, and switch any two vertex indices in any tetrahedron whose volume comes out negative; that fixes its orientation.

Determining the exact volume of the tetrahedra by defined points

After changing the parity of the last tetrahedron, we can calculate the exact r that gives the exact optimal volume. In the same way, we’ll also solve a few others.

Calculating r for solution6, solution7, solution8, solution9

The solution for 16-BLP takes more than a minute, so I’ve separated it out.

Solution for 16-BLP

The first value in the solutions is the optimal volume as a Root object, and the second is the optimal value of r. Here’s a cleaner table of values.

Table of values for optimal value of r

That is far beyond anything I could have solved by hand. With random selection, annealing, symmetry spotting, Solve[], and Maximize[], I was also able to find the exact n-BLP (biggest little polyhedron) for n = 6, 7, 8, 9, and 16.

Here are a few views of the 8-BLP, with the red tubes showing unit-length diagonals.

Views of the 8-BLP with the red tubes showing unit-length diagonals

Some views of 9-BLP:

Views of 9-BLP with the red tubes showing unit-length diagonals

Some views of 16-BLP:

Views of 16-BLP with the red tubes showing unit-length diagonals

The labeled 8-BLP below features perpendicular unit lines 1-2 and 3-4 above and below the origin. The labeled 9-BLP below features stacked triangles △123, △456, and △789.

8-BLP featuring perpendicular line units and 9-BLP featuring stacked triangles

The labeled 16-BLP below features a truncated tetrahedron on points 1-12 and added points 13-16.

16-BLP featuring a truncated tetrahedron

Fairly complicated, right? With sphere point picking, uniform random numbers from –π to π (the longitude) and from –1 to 1 (the z coordinate) produce uniformly distributed points on a unit sphere. Conversely, points on a unit sphere can be mapped back to points in the (–π to π, –1 to 1) rectangle. Here's what happens with the solutions for 8, 9, and 16 points.

Sphere point picking for solutions with 8, 9, and 16 points
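This is Archimedes' hat-box map between the sphere and a cylinder; a small Python illustration of both directions (not the post's code):

```python
import math, random

def random_sphere_point():
    """Uniform point on the unit sphere: longitude uniform in (-pi, pi),
    z uniform in (-1, 1) (Archimedes' hat-box theorem)."""
    theta = random.uniform(-math.pi, math.pi)
    z = random.uniform(-1, 1)
    r = math.sqrt(1 - z * z)
    return (r * math.cos(theta), r * math.sin(theta), z)

def to_rectangle(p):
    # Inverse map: a sphere point back to the (-pi..pi, -1..1) rectangle.
    x, y, z = p
    return (math.atan2(y, x), z)

random.seed(0)
pts = [random_sphere_point() for _ in range(1000)]
```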

For 10-BLP, I haven't been able to find an exact solution, but I did find a numerical solution to any desired level of accuracy. If anyone can find a Root object for this, let me know. Open up the notebook version to see a rather difficult equation in the Initialization section.

10-BLP equation

Here’s a labeled view of 10-BLP from two different perspectives.

Two different perspectives of the labeled view of 10-BLP

In a similar fashion, a numerical solution for 11-BLP can be found.

11-BLP equation

Here are two views of 11-BLP.

Two views of 11-BLP

Have I really solved these? Maybe not. For these particular symmetries, I'm sure I've found the local maximum. For example, here's a function with a local maximum of 5 at x = 1.

Plot showing found local maximum of 5

Plot over a wider range, and the global maximum of 32 appears at x = –2.

Plot showing found global maximum of 32
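The plotted function isn't given in the text. One quartic consistent with the quoted values (an assumed reconstruction, not necessarily the author's) is f(x) = -3x^4 - 4x^3 + 12x^2, whose derivative -12x(x + 2)(x - 1) vanishes exactly at the local maximum (x = 1, f = 5), the minimum (x = 0), and the global maximum (x = -2, f = 32). A naive hill climb started near x = 1 gets stuck at the smaller peak:

```python
def f(x):
    # f'(x) = -12x(x + 2)(x - 1): maxima at x = -2 (f = 32) and x = 1 (f = 5).
    return -3 * x ** 4 - 4 * x ** 3 + 12 * x ** 2

def hill_climb(x, step=0.01, iters=10000):
    """Greedy 1-D search: move left or right while that improves f."""
    for _ in range(iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break
    return x

local = hill_climb(0.5)    # basin of the local maximum at x = 1
best = hill_climb(-1.0)    # basin of the global maximum at x = -2
```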

In the related Thomson problem, there’s a proof that the 12 vertices of an icosahedron give a minimum energy configuration for 12 electrons. But 7, 8, 9, 10, 11, and 13+ electrons are all considered unsolved. The Kepler conjecture suggested that hexagonal close packing was the densest arrangement of spheres, but a complete formal proof by Thomas Hales wasn’t completed until August 10, 2014. The densest packing of regular tetrahedra, the fraction 4000/4671 = .856347…, wasn’t found until July 27, 2010, and still isn’t proven maximal. Take any solution claims with a grain of salt; geometric maximization problems are notoriously tricky.

For months, my best solution for 11 points was in an asymmetric local maximum. Some (or most) of the following solutions are likely local instead of global, but which ones? With that caveat, we can look at best known solutions for 12 points and above.

12-BLP seems to be the point 12, the slightly messy heptagon 11-6-7-10-8-5-9, and the quadrilateral 1-4-3-2.

13-BLP seems to be the point 13, the slightly messy heptagon 12-8-10-6-7-9-11, and the messy pentagon 1-2-3-4-5.

My attempts to add symmetry have resulted in figures with a lower volume.

12-BLP and 13-BLP

So far, my best solution for 14-BLP seems to have a lot of symmetry, but I haven't solved it exactly. I spent some time optimizing a point-heptagon-heptagon solution for 15-BLP, only to watch my randomizer "improve" it relentlessly, increasing volume while sacrificing symmetry.

14-BLP and 15-BLP

17-BLP, 18-BLP—I believe 17-BLP has nice symmetry.

19-BLP, 20-BLP—20 is not the dodecahedron, due to inefficient unit lines through the center.

Symmetry for 17-BLP, 18-BLP, 19-BLP, and 20-BLP

The snub cube and half the vertices of the great rhombicuboctahedron both have lower volume than 24-BLP.

Snub cube and half the vertices of the great rhombicuboctahedron have lower volume than 24-BLP

21-BLP, 22-BLP—Lots of 7- and 9-pointed stars.

23-BLP, 24-BLP—My best 24-BLP has tetrahedral symmetry.

21-BLP, 22-BLP, 23-BLP, 24-BLP symmetry

Here’s some of the symmetry in the current best 24-BLP. Points 1-12 and 13-24 have respective norms of 0.512593 and 0.515168.

Symmetry in the current best 24-BLP

16-BLP, 17-BLP—Letting the unit lines define polygons. 16-BLP contains many 7-pointed stars.

16-BLP contains 7-pointed stars

The same polyhedra shown as solid objects, using ConvexHullMesh[]. That’s BLP 9-10-11-12, 13-14-15-16, 17-18-19-20, 21-22-23-24.

Polyhedra shown as solid objects using ConvexHullMesh

Here’s the current table of the best known values.

Current table of best known values

Here are the best solutions I’ve found so far for 4 to 24 points.

Best solutions for 4 to 24 points

Let the points be centered so that the maximal distance from the origin is as small as possible. The scatterplot below shows the distance from the origin for the vertices of each polyhedron, from 8 to 24 vertices.

Distance from origin for vertices scatterplot

Mathematica 10.1 managed to exactly solve 6-BLP, 7-BLP, 8-BLP, 9-BLP, and 16-BLP. It found arbitrary-precision numerical solutions for 10-BLP and 11-BLP, and made good progress on up to 24 points. That gives solutions for seven previously unsolved problems in combinatorial geometry, all by repeating Volume[ConvexHullMesh[points]]. What new features have you had success with?


[1] B. Kind and P. Kleinschmidt, “On the Maximal Volume of Convex Bodies with Few Vertices,” Journal of Combinatorial Theory, Series A, 21(1) 1976 pp. 124-128.

[2] A. Klein and M. Wessler, “The Largest Small n-dimensional Polytope with n+3 Vertices,” Journal of Combinatorial Theory, Series A, 102(2), 2003 pp. 401-409.

[3] A. Klein and M. Wessler, “A Correction to ‘The Largest Small n-dimensional Polytope with n+3 Vertices,’” Journal of Combinatorial Theory, Series A, 112(1), 2005 pp. 173-174.

Download this post as a Computable Document Format (CDF) file.

Registration for the 2015 Wolfram Technology Conference Now Open!
Mon, 18 May 2015 15:49:14 +0000 Jenna Giuffrida
http://blog.wolfram.com/2015/05/18/registration-for-the-2015-wolfram-technology-conference-now-open/

The 2015 Wolfram Technology Conference is officially on the horizon, and we are getting excited to show you what we've been doing with the Wolfram Language and our growing technology stack. While assembling your calendar for the rest of the year, be sure to save the dates for our conference, October 20–22, 2015. Registration is now open, so be sure to secure your spot and submit any talk proposals you may have.

If you’re looking for inspiration or just want a taste of what’s to come, videos from last year’s conference are available on our website. We saw an impressive array of presentations from both guests and our very own developers; below is a sampling of some of the most engaging innovations and projects that were shown.

Integrating Mathematica and the Unity Game Engine: Not Just for Games Anymore
George Danner

Wolfram Data Science Platform: Data Science in the Cloud
Dillon Tracy

Machine Learning
Etienne Bernard

Stitchcoding and Movie Color Maps
Theo Gray and Nina Paley

Rhino, Meet Mathematica
Chris Carlson

This year we have already introduced cutting-edge technologies to the Wolfram Language lineup, including the Wolfram Cloud, SystemModeler 4.1, Data Drop, and new Mathematica functionalities such as ImageIdentify and GrammarRules. We’ll see you in October to learn about all this and more!
