March 28, 2013 — Ed Pegg Jr, Editor, Wolfram Demonstrations Project

`RootApproximant` can turn an approximate solution into a perfect solution, such as for a square divided into fifty 45°-60°-75° triangles.
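For readers without *Mathematica* at hand, the idea behind `RootApproximant` can be sketched in a few lines of Python: search for a small-coefficient polynomial that a floating-point number nearly satisfies. (The function name, degree limit, and tolerances below are invented for illustration; the real `RootApproximant` uses lattice-reduction methods and handles much higher degrees.)

```python
# Hypothetical, simplified stand-in for Mathematica's RootApproximant:
# brute-force search for a small-coefficient quadratic whose root
# matches a floating-point approximation.
from itertools import product

def quadratic_root_approximant(x, max_coeff=10, tol=1e-9):
    """Return (c0, c1, c2) with c0 + c1*x + c2*x**2 ~= 0, or None."""
    best = None
    for c2, c1, c0 in product(range(-max_coeff, max_coeff + 1), repeat=3):
        if c2 == 0 and c1 == 0 and c0 == 0:
            continue
        if abs(c0 + c1 * x + c2 * x * x) < tol:
            # Prefer the polynomial with the smallest coefficients.
            size = abs(c0) + abs(c1) + abs(c2)
            if best is None or size < best[0]:
                best = (size, (c0, c1, c2))
    return best[1] if best else None

# Recovers a small quadratic with root sqrt(2), i.e. x**2 == 2.
print(quadratic_root_approximant(1.4142135623730951))
```

Applied to the approximate vertex coordinates of a near-solution, this kind of search is what turns "numerically flat" triangles into exact ones.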

A square can be divided into triangles, for example by connecting opposite corners. It’s possible to divide a square into seven similar but differently sized triangles or ten acute isosceles triangles. Classic puzzles involve cutting a square into eight acute triangles, or twenty 1 – 2 – √5 triangles. The last image uses 45°-60°-75° triangles, but one triangle has a flaw.

It’s easy to divide a square into similar right triangles. Can a square be divided into similar non-right triangles? In his paper “Tilings of Polygons with Similar Triangles” (*Combinatorica*, 10(3), 1990, pp. 281–306), Laczkovich proved that exactly three non-right triangles admit such tilings, with angles 22.5°-45°-112.5°, 15°-45°-120°, and 45°-60°-75°. I read his paper to try to make an image for the 45°-60°-75° case, but his construction was complex and seemed to require thousands of triangles, so I tried to find my own solutions. All my attempts had flaws, such as the last image above, so I made a contest out of it: $200, minus a dollar for every triangle in the solution.

March 21, 2013 — Devendra Kapadia, Mathematica Algorithm R&D

Waiting in line is a common, though not always pleasant, experience for us all. We wait patiently to be served by the next free teller at a bank, clear the security check at an airport, or be answered by technical support when we call a phone service provider. At a more abstract level, these waiting lines, or queues, are also encountered in computer and communication systems. For example, every email you send is broken up into a series of packets. Each packet is then sent off to its destination by the best available route to avoid the queues formed by other packets in the network. Hence, queues play an important role in our lives, and it seems worthwhile to spend some time understanding their dynamics, with a view to answering questions such as, “How many tellers does your bank need to provide good customer service?” or “How can you speed up the security check?” or “On average, how long will you have to wait for technical support?” My purpose in writing this post is to give a gentle introduction to queueing theory, which attempts to answer such questions, using new functions that are available in *Mathematica* 9.
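To make “how many tellers?” concrete, here is a Python sketch of the classic M/M/c model (Poisson arrivals, exponential service times, *c* identical servers). The Erlang C formula gives the probability that an arriving customer has to wait; the arrival and service rates below are made up for illustration.

```python
# Sketch of the standard M/M/c queueing formulas; parameter values
# are invented for illustration.
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability an arriving customer has to wait (Erlang C formula)."""
    a = arrival_rate / service_rate          # offered load
    assert a < servers, "queue is unstable"
    top = a**servers / factorial(servers) * servers / (servers - a)
    bottom = sum(a**k / factorial(k) for k in range(servers)) + top
    return top / bottom

def mean_wait(arrival_rate, service_rate, servers):
    """Average time spent in the queue before service begins."""
    p_wait = erlang_c(arrival_rate, service_rate, servers)
    return p_wait / (servers * service_rate - arrival_rate)

# 2 customers/minute, each teller serves 1 customer/minute:
# how does the average wait change as we add tellers?
for c in (3, 4, 5):
    print(c, "tellers:", round(mean_wait(2.0, 1.0, c), 3), "minutes")
```

Adding a teller shrinks the average wait sharply, which is exactly the kind of trade-off queueing theory lets you quantify.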

Queueing theory has its origins in the research of the Danish mathematician A. K. Erlang (1878–1929). While working for the Copenhagen Telephone Company, Erlang was interested in determining how many circuits and switchboard operators were needed to provide an acceptable telephone service. This investigation resulted in his seminal paper “The Theory of Probabilities and Telephone Conversations,” which was published in 1909. Erlang proved that the arrivals for such queues can be modeled as a Poisson process, which immediately made the problem mathematically tractable. Another major advance was made by the American engineer and computer scientist Leonard Kleinrock (1934–), who used queueing theory to develop the mathematical framework for packet switching networks, the basic technology behind the internet. Queueing theory has continued to be an active area of research and finds applications in diverse fields such as traffic engineering and hospital emergency room management.

February 4, 2013 — Oleksandr Pavlyk, Kernel Technology

On January 23, 1913 of the Julian calendar, Andrey A. Markov presented to the Royal Academy of Sciences in St. Petersburg his analysis of Pushkin’s *Eugene Onegin*. He found that the sequence of consonants and vowels in the text could be well described as a random sequence in which the likely category of a letter depends only on the categories of the one or two letters preceding it.

At the time, the Russian Empire was using the Julian calendar, so the 100th anniversary of the celebrated presentation actually falls on February 5, 2013, in the Gregorian calendar now in use.

To perform his analysis, Markov invented what are now known as “Markov chains,” which can be represented as probabilistic state diagrams where the transitions between states are labeled with the probabilities of their occurrences.
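A toy version of such a chain is easy to write down in Python. The two-state transition probabilities below are invented for illustration (Markov’s actual counts from *Eugene Onegin* differ); power iteration recovers the long-run fractions of vowels and consonants.

```python
# A toy two-state Markov chain in the spirit of Markov's vowel/consonant
# analysis.  The transition probabilities are invented for illustration.

# States: 0 = vowel, 1 = consonant.  P[i][j] = prob. that j follows i.
P = [[0.13, 0.87],
     [0.66, 0.34]]

def stationary(P, steps=200):
    """Approximate the stationary distribution by power iteration."""
    dist = [1.0, 0.0]
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]
    return dist

v, c = stationary(P)
print("long-run vowel fraction:", round(v, 3))
print("long-run consonant fraction:", round(c, 3))
```

The long-run vowel fraction depends only on the two “switching” probabilities, which is why Markov could estimate it from letter-pair counts alone.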

February 1, 2013

Oleg Marichev, Special Function Researcher

Michael Trott, Chief Scientist

In this blog post, we want to report some work in progress that might interest users of probability and statistics and also those who wonder how we add new knowledge every day to Wolfram|Alpha.

Since its beginnings in 1988, *Mathematica* has known not only elementary functions (sqrt, exp, log, etc.) but also many special functions of mathematical physics (such as the Bessel function K and the Riemann zeta function) and number-theoretic functions. Altogether, *Mathematica* now knows more than 300 such functions. The Wolfram Functions Site lists 300,000+ formulas and identities for these functions. And, based on *Mathematica*'s algorithmic computation capabilities and the Functions Site's identities, most of this knowledge is now easily accessible in Wolfram|Alpha. For example: the relation between sin(*x*) and cos(*x*), series representations of the Beta function, the relation between BesselJ(*n*, *x*) and AiryAi(*x*), the differential equation for ellipticF(phi, *m*), and examples of complicated indefinite integrals containing erf.

But Wolfram|Alpha also knows about many special functions that are not in *Mathematica* because they are less common or less general. For instance: haversine(*x*), the double factorial, binomial(2*n*, *n*), BesselPolynomialY[6, *x*], Conway’s base 13 function(4003/371293), and Goldbach function(1000).
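Several of these are simple enough to sketch directly in a few lines of Python, assuming the standard definitions haversine(*x*) = sin²(*x*/2) and *n*!! = *n*(*n* − 2)(*n* − 4)⋯:

```python
# Quick sketches of some of the "extra" functions mentioned above,
# using their standard textbook definitions.
from math import sin, comb

def haversine(x):
    """haversine(x) = sin(x/2)**2."""
    return sin(x / 2) ** 2

def double_factorial(n):
    """n!! = n * (n-2) * (n-4) * ... down to 1 or 2."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

print(haversine(3.141592653589793))   # equals 1 at x = pi
print(double_factorial(9))            # 9*7*5*3*1 = 945
print(comb(20, 10))                   # central binomial coefficient 184756
```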

*Mathematica* 7 knew 42 probability distributions; *Mathematica* 9 knows over 130 (parametric) probability distributions. Based on *Mathematica*, Wolfram|Alpha can answer a lot of queries about these distributions, such as characteristic function of the hyperbolic distribution or variance of the binomial distribution with *p* = 1/3, and give general overview pages for queries such as Student’s *t* distribution or Gumbel distribution.
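As a quick sanity check on one of those queries, the variance of a binomial distribution with *p* = 1/3 can be computed in Python by brute-force enumeration of the probability mass function and compared against the closed form *np*(1 − *p*) (the choice *n* = 12 below is arbitrary):

```python
# Verify Var[Binomial(n, p)] = n*p*(1-p) for p = 1/3 by direct
# enumeration of the probability mass function, in exact arithmetic.
from fractions import Fraction
from math import comb

def binomial_variance(n, p):
    pmf = [Fraction(comb(n, k)) * p**k * (1 - p)**(n - k)
           for k in range(n + 1)]
    mean = sum(k * w for k, w in enumerate(pmf))
    return sum((k - mean) ** 2 * w for k, w in enumerate(pmf))

p = Fraction(1, 3)
print(binomial_variance(12, p), "==", 12 * p * (1 - p))  # both 8/3
```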

December 20, 2012 — Paul-Jean Letourneau, Senior Data Scientist, Wolfram Research

This year marks the 100th anniversary of Alan Turing’s birth, so at the 2012 Wolfram Science Summer School we decided to turn a group of 40 unassuming nerds into ferocious hunters. No, we didn’t teach our geeks to take down big game. These are laptop warriors. And their prey? Turing machines!

In this blog post, I’m going to teach you to be a fellow hunter-gatherer in the computational universe. Your mission, should you choose to accept it, is to FIND YOUR FAVORITE TURING MACHINE.

First, I’ll show you how a Turing machine works, using pretty pictures that even my grandmother could understand. Then I’ll show you some of the awesome Turing machines that our summer school students found using *Mathematica*. And I’ll describe how I did an über-search through 373 million Turing machines using my Linux server back home, and had it send me email whenever it found an interesting one, for two weeks straight. I’ll keep the code to a minimum here, but you can find it all in the attached *Mathematica* notebook.

Excited? Primed for the hunt? Let me break it down for you.

The rules of Turing machines are actually super simple. There’s a row of cells called the *tape*:
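Before the pictures, here is a minimal tape-and-head simulator in Python, so the moving parts have something concrete behind them. The rule table below is an arbitrary 2-state, 2-color machine chosen for illustration, not one of the machines from the search.

```python
# A minimal Turing machine runner.  The rule format
# (state, symbol) -> (new state, new symbol, move) is the usual one.
from collections import defaultdict

def run_turing_machine(rules, steps, state=0):
    tape = defaultdict(int)      # blank tape of 0s, unbounded both ways
    head = 0
    for _ in range(steps):
        state, tape[head], move = rules[(state, tape[head])]
        head += move
    return tape, head, state

# An arbitrary 2-state, 2-color machine for illustration.
rules = {
    (0, 0): (1, 1, 1),    # state 0, reading 0: write 1, move right, go to 1
    (0, 1): (1, 1, -1),   # state 0, reading 1: write 1, move left, go to 1
    (1, 0): (0, 1, -1),   # state 1, reading 0: write 1, move left, go to 0
    (1, 1): (0, 0, 1),    # state 1, reading 1: write 0, move right, go to 0
}
tape, head, state = run_turing_machine(rules, 20)
print("ones on the tape after 20 steps:", sum(tape.values()))
```

Enumerating rule tables like this one, running each machine, and scoring the resulting tape patterns is the essence of the hunt described below.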

November 5, 2012 — Michael Belcher, computerbasedmath.org

The computerbasedmath.org community has been growing steadily since the project first started in 2010. Several thousand of you have signed up to show your support, share your ideas, and help spread the word. The Computer-Based Math™ Education Summit has been a great tool for bringing the community together, but we wanted a central hub where the community can gather more than just once a year. So we’ve launched The Computer-Based Math Education Forum.

Whatever your background, join the conversation and share your experiences.

October 24, 2012 — Jason Martinez, Research Programmer

Earlier this month, on a nice day, Felix Baumgartner jumped from 39,045 meters, or 24.26 miles, above the Earth from a capsule lifted by a 334-foot-tall helium-filled balloon (twice the height of Nelson’s Column and 2.5 times the diameter of the Hindenburg). Wolfram|Alpha tells us the jump was equivalent to a fall from 4.4 Mount Everests stacked on top of each other, or falling 93% of the length of a marathon.

At 24.26 miles above the Earth, the atmosphere is very thin and cold, only about -14 degrees Fahrenheit on average. The temperature, unlike air pressure, does not change linearly with altitude at such heights. As Wolfram|Alpha shows us, it rises and falls depending on factors such as the decreasing density of air with altitude and the absorption of UV light by the ozone layer.

At 39 kilometers, the horizon is roughly 439 miles away. At this layer of the atmosphere, called the stratosphere, the air pressure is only 3.3 millibars, equivalent to 0.33% of the air pressure at sea level. To put it another way, the mass of the air above 39 kilometers is only 0.32851% of the total air mass; in other words, 99.67% of the world’s atmosphere lay beneath him. This was important to Felix’s goal of breaking the sound barrier in free fall, because drag is directly related to air pressure. With less air around him, there would be less drag, and thus he could reach a higher maximum speed. Of course, this required him to wear a pressurized suit with an oxygen supply to allow him to breathe, in addition to keeping him warm.
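The pressure–drag connection can be made quantitative with the standard terminal-velocity formula *v* = √(2*mg* / (ρ *C*_d *A*)). All parameter values in the Python sketch below are rough assumptions (jumper plus suit around 120 kg, drag area around 0.9 m², drag coefficient around 1), not Felix’s actual numbers; the point is only the 1/√ρ scaling.

```python
# Terminal velocity scales like 1/sqrt(air density).  All parameters
# are ballpark assumptions, not Felix Baumgartner's actual numbers.
from math import sqrt

def terminal_velocity(mass, density, drag_coeff=1.0, area=0.9, g=9.81):
    """v = sqrt(2*m*g / (rho * Cd * A)), in m/s."""
    return sqrt(2 * mass * g / (density * drag_coeff * area))

rho_sea_level = 1.225    # kg/m^3
rho_39km = 0.0046        # kg/m^3, roughly 0.4% of sea level

print("near sea level:", terminal_velocity(120, rho_sea_level), "m/s")
print("at ~39 km:     ", terminal_velocity(120, rho_39km), "m/s")
```

With roughly 250 times less air density, the achievable speed is about 16 times higher, which is what put the sound barrier within reach.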

October 23, 2012 — Michael Trott, Chief Scientist

In my last blog post, we discussed 3D charge configurations that have sharp edges. Reader Rich Heart commented on it and asked whether *Mathematica* can calculate the force between two charged cubes, as done by Bengt Fornberg and Nick Hale and in the appendix of Lloyd N. Trefethen’s book chapter.

The answer to the question from the post is: Yes, we can; I mean, yes, *Mathematica* can.

Actually, it is quite straightforward to treat a more general problem than two just-touching cubes of equal size:

- We can deal with two cubes of different edge lengths *L*_{1} and *L*_{2}
- We can calculate the force for any separation *X*, where *X* is the distance between the two cube centers (including the case of penetrating cubes; think plasma)
- We will use a method that can be generalized to higher-dimensional cubes without having to do more nested integrals

Instead of calculating the force between the two cubes, we will calculate the total electrostatic energy of the system of the two cubes. The force is then simply the negative gradient of the total energy with respect to *X*. The electrostatic energy (in appropriate units) is given by:

(In the following calculations, we will skip the constant [with respect to *X*] prefactors *Q*_{1} *L*_{1}^{-3} *Q*_{2} *L*_{2}^{-3} or *Q*_{1} *Q*_{2} if not needed.)

Approaching this integral head-on, doing one integral after another, is possible, but a very tedious and time-consuming operation. Instead, to avoid having to carry out a nested six-dimensional integral, we recall the Laplace transform of 1 / √*s*.
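Before doing that symbolically, it is worth sanity-checking the energy integral numerically. The Monte Carlo sketch below (in Python, not the symbolic method of this post) estimates E(*X*) as the average of 1/|r₁ − r₂| over two unit cubes with unit total charges; the sample counts and seed are arbitrary.

```python
# Monte Carlo estimate of the electrostatic energy
# E(X) = <1/|r1 - r2|> for two unit cubes with centers separated
# by X along the x axis (unit charges, uniform charge density).
# A numerical sanity check, not the symbolic calculation.
import random

def energy_estimate(X, samples=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        # r1 in a unit cube at the origin, r2 in one centered at (X, 0, 0)
        dx = X + rng.uniform(-0.5, 0.5) - rng.uniform(-0.5, 0.5)
        dy = rng.uniform(-0.5, 0.5) - rng.uniform(-0.5, 0.5)
        dz = rng.uniform(-0.5, 0.5) - rng.uniform(-0.5, 0.5)
        total += 1.0 / (dx * dx + dy * dy + dz * dz) ** 0.5
    return total / samples

print("E(1):", energy_estimate(1.0))  # just-touching unit cubes
print("E(2):", energy_estimate(2.0))  # farther apart: smaller energy
```

At large separations the estimate approaches the point-charge value 1/*X*, and the force can be recovered as a finite-difference derivative, which gives a useful check on the closed forms derived below.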

September 27, 2012 — Michael Trott, Chief Scientist

In my last blog post, we looked at various examples of electrostatic potentials and magnetostatic fields. We ended with a rectangular current loop. Electrostatic and magnetostatic potentials for squares, cubes, and cuboids typically contain only elementary functions, but the expressions themselves are often quite large compared with simple systems with radial symmetry. In the following, we will discuss some 3D charge configurations that have sharp edges.

Let’s start with a charged 2D rectangle in 3D space. Again, the potential is an elementary function that contains a few logarithms.
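Such closed forms can always be cross-checked numerically. The Python sketch below integrates 1/*r* over a rectangle with a simple midpoint rule (unit surface charge density, units where 1/(4πε₀) = 1; the rectangle dimensions and grid size are arbitrary choices):

```python
# Midpoint-rule numerical potential of a uniformly charged
# 2a x 2b rectangle in the z = 0 plane, evaluated at (x, y, z).
# Unit surface charge density, units with 1/(4 pi eps0) = 1.
def rectangle_potential(x, y, z, a=1.0, b=0.5, n=200):
    da, db = 2 * a / n, 2 * b / n
    total = 0.0
    for i in range(n):
        u = -a + (i + 0.5) * da
        for j in range(n):
            v = -b + (j + 0.5) * db
            total += da * db / ((x - u) ** 2 + (y - v) ** 2 + z ** 2) ** 0.5
    return total

# Far away, the rectangle looks like a point charge q = 4ab = 2:
print(rectangle_potential(0, 0, 10))   # close to 2/10 = 0.2
```

Evaluating both the closed form and this numerical approximation at a few test points is a quick way to catch sign or branch-cut errors in the logarithms.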

July 20, 2012 — Michael Trott, Chief Scientist

*(This is the first post in a three-part series about electrostatic and magnetostatic problems involving sharp edges.)*

*Mathematica* can do a lot of different computations. Easy and complicated ones, numeric and symbolic ones, applied and theoretical ones, small and large ones. All by carrying out a *Mathematica* program.

Wolfram|Alpha too carries out a lot of computations (actually, tens of millions every day), all specified through free-form inputs, not *Mathematica* programs. Wolfram|Alpha is heavily based on *Mathematica*, and many of the mathematical calculations that Wolfram|Alpha carries out rely on the mathematical power of *Mathematica*. And while Wolfram|Alpha can carry out a vast amount of calculations, it cannot carry out all possible calculations, either because it does not (yet) know how to do a calculation or because the (underlying *Mathematica*) calculation would take a longer time than available through Wolfram|Alpha. So for a detailed investigation of a more complicated engineering, physics, or chemistry problem, having a copy of *Mathematica* handy is mandatory.

But there is also the reverse relation between *Mathematica* and Wolfram|Alpha: Wolfram|Alpha’s knowledge, especially its data knowledge, allows it to carry out investigations and calculations that can substantially increase the power of pure *Mathematica*. And all of this is because Wolfram|Alpha’s knowledge is accessible through the `WolframAlpha[]` function within *Mathematica*.