Wolfram Blog: News, views, and ideas from the front lines at Wolfram Research.

Explore Yoga with Wolfram|Alpha
Kristin McCoy | June 20, 2016 | http://blog.wolfram.com/2016/06/20/explore-yoga-with-wolframalpha/

Each person enters a yoga class with their own unique goals. Some hope to stretch their legs, while others might want to strengthen their core, improve their balance, perform an advanced pose, or simply destress. As a yoga teacher, my goal is to balance my classes to accommodate everyone’s needs and deliver information that will be potent and relevant for as many students as possible. However, there is so much information to explore in the field of yoga that it would be impossible to deliver it all in an hour-long class. Now it is possible for yoga enthusiasts and budding students alike to explore yoga using Wolfram|Alpha.

Camel Pose Extended Leg Stretch

You can now use Wolfram|Alpha to discover information about 216 yoga poses. If you want to learn about a pose, you can search by either its English or Sanskrit name and find basic instructions, along with an illustration. You can also look at the muscles that the pose stretches and strengthens, get ideas for ways to vary the pose, or learn about preparatory poses that you can use to build up toward more difficult poses. If you are recovering from an injury or ailment, you can check a list of precautions and contraindications to discover if the pose might be aggravating for your condition. You can also learn about commonly practiced sequences of yoga poses, such as the Sun Salutation.

Suppose you are a new yoga student and recently took a class where the instructor taught the Downward-Facing Dog Pose. You can take another look at the pose with Wolfram|Alpha and learn more about it. As is true with any physical activity, practicing yoga poses can be strenuous and carries certain risks, so it is important to pay attention to your body’s signals and seek the guidance of a qualified teacher:

Information on the Downward Facing Dog Pose in Wolfram|Alpha
Continuing with the new-yoga-student scenario, you may have learned the Downward-Facing Dog Pose as part of the Sun Salutation sequence. You can use Wolfram|Alpha to jog your memory on the order of the poses in the sequence or how to coordinate the poses with your breathing:

The Sun Salutation Pose in Wolfram|Alpha
Wolfram|Alpha can help you find yoga poses to meet your particular goals. Many beginners start yoga hoping to gain more flexibility in their legs and gain strength in their core. Wolfram|Alpha can help you identify the poses that will help you accomplish those goals, and lists them according to experience:
Wolfram|Alpha can tell you which poses stretch legs
You can also ask Wolfram|Alpha about strengthening, such as what yoga poses strengthen your core
A common question I get as a yoga teacher is about how to work up to more advanced postures. Maybe you’ve been sitting on your mat before class and stared in amazement as the person across from you floated up into a handstand. I’ve been there too! Wolfram|Alpha can give you ideas about simpler poses that you could do to work toward a bigger goal by searching for preparatory poses:

Poses that help you work up to doing a handstand
Certain styles of yoga use codified sequences, which you can explore using Wolfram|Alpha. You will find the Primary Series and Second Series of Ashtanga Yoga, Bikram Yoga’s Twenty-Six Postures, Sivananda Yoga’s Twelve Basic Postures, and the Spiritual Warrior sequence of Jivamukti Yoga:
Poses in the Ashtanga Second Series (as seen when you click "Show details")
Pose sequence for the Ashtanga Second Series, Part 1 (as seen when you click "Show details")
You can explore even more complex queries with the Wolfram Language, which can be accessed easily in the Wolfram Open Cloud. For example, you might wonder about the number of muscles that are listed for the “stretched muscles” property and ask, “What pose stretches the most muscles?” This is not necessarily a question I would have been able to answer from experiential knowledge, so it is interesting to me to see what the data has to say:

What pose stretches the most muscles?
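As a rough sketch of how one might pose this question in the Wolfram Language, the following assumes a "YogaPose" entity type in the Wolfram Knowledgebase whose entities carry a property listing the stretched muscles; the property name "MusclesStretched" is an illustrative assumption, not necessarily the actual name:

(* rank yoga poses by the number of muscles listed as stretched; the property name is assumed *)
poses = EntityList[EntityClass["YogaPose", All]];
stretched = EntityValue[poses, EntityProperty["YogaPose", "MusclesStretched"]];
TakeLargestBy[Transpose[{poses, stretched}], Length[Last[#]] &, 3]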

Seeing the answer, it makes sense that it would be Revolved Side Angle Pose. Revolved Side Angle Pose is an asymmetrical posture that incorporates opposites in the lower limbs, trunk, and upper limbs. As someone who practices this pose daily and watches several students grapple with it each day, I can attest to its complexity:

Revolved Side Angle Pose

For me, the beauty of yoga is that no one ever completes it. When I started, there were poses that seemed miraculous and that I couldn’t hope to accomplish. But with practice, some of those poses slowly became part of my daily practice, and inevitably, I became aware of a new set of seemingly miraculous poses. With each accomplishment, a new possibility always arises; therefore, yoga is a discipline for the endlessly curious. One of the important lessons of yoga is that with consistent practice and gradual progression, one can transcend perceived limitations. Yoga grounds this lesson in the body so that through practice there is a sense of potential. My hope is that you’ll find information in Wolfram|Alpha’s yoga data that is useful to you today, inspires you for what could be, sparks your curiosity about yoga, and encourages you to practice!

Special Event: Solving Image Processing Problems
Zach Littrell | June 17, 2016 | http://blog.wolfram.com/2016/06/17/special-event-solving-image-processing-problems/

Satellite images, MRIs, live video feeds, and your family vacation photos can sometimes need light or heavy-duty touchups. Finding features, removing backgrounds, filtering for noise, and fixing oddities are common image processing problems for all sorts of 2D and 3D images. Luckily, the Wolfram Language can help you solve them.

Join us for a free special virtual event, Solving Image Processing Problems: Wolfram Language Virtual Workshop, on June 22, 2016, 1–3pm US EDT (5–7pm GMT). Learn how to tackle problems involving images using current and upcoming features of the Wolfram Language and Mathematica 11. Also engage in interactive Q&A with the workshop’s hosts, Wolfram Language experts Shadi Ashnai and Markus van Almsick.

The Leaning Tower of Pisa (left) and the Unleaning Tower of Pisa (right), courtesy of a few lines of Wolfram Language code

Do you recognize the famous image on the left? It’s the Leaning Tower of Pisa. On the right is the same image—but minus the leaning, courtesy of a few lines of Wolfram Language code. Throughout the workshop, we will explore numerous techniques to solve common image processing problems, including segmentation, filtering, registration, color processing, and morphological operations. Often in one or two lines of code, Wolfram technology can transform any 2D image or 3D model and extract important details and data.
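As a rough illustration of the kind of one- or two-line fix meant here (this is not the workshop's actual code), a projective transformation can straighten a leaning object; the test image and the transformation matrix below are purely illustrative:

img = ExampleData[{"TestImage", "House"}];   (* stand-in image; any photo will do *)
(* apply an illustrative perspective (homography) correction *)
ImagePerspectiveTransformation[img, {{1, 0.12, 0}, {0, 1, 0}, {0, 0.1, 1}}]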

From generating heat maps of an object’s motion to quantifying anatomy, we will highlight fascinating applications of Wolfram’s powerful built-in toolset that can be applied to countless areas, including your own personal projects.

A heat map of an object's motion, created with the Wolfram Language; images of human knee anatomy, created from data in the Wolfram Language knowledgebase

To join us at the free virtual event on June 22, 2016, 1–3pm US EDT (5–7pm GMT), please register here. All are welcome, with no prior experience in image processing or the Wolfram Language necessary.

Wolfram Community Highlights: Animation, Chernoff Faces, Fingerprint ID, and More
Emily Suess | June 14, 2016 | http://blog.wolfram.com/2016/06/14/wolfram-community-highlights-animation-chernoff-faces-fingerprint-id-and-more/

Wolfram Community members continue to create amazing applications and visuals. Take a look at a few of our recent favorites.

Wolfram Language animations make it easier to understand and investigate concepts and phenomena. They’re also just plain fun. Among recent simple but stunning animations, you’ll find “Deformations of the Cairo Tiling” and “Contours of a Singular Surface” by Clayton Shonkwiler, a mathematician and artist interested in geometric models of physical systems, and “Transit of Mercury 2016” by Sander Huisman, a postdoc in Lyon, France, researching Lagrangian turbulence.

Recreation of the Cairo pentagonal tiling

In “Facing Your Data with Chernoff Faces,” Anton Antonov explores using face-like diagrams to visualize multidimensional data, a concept introduced by Herman Chernoff in 1973. The result is that each depiction “gives a face” to each record in the dataset.

Face-like diagrams to visualize multidimensional data

Parts like the eyes, eyebrows, mouth, and nose represent data values by their shape, size, and placement. Because humans easily recognize faces, it’s pretty easy to pick up on small changes in the depictions. Perhaps the biggest advantage of using Chernoff faces is discerning and classifying outliers in data.

Tushar Dwivedi, a student from the Wolfram Mentorships Program and High School Summer Camp, built a fingerprint identification and matching application using the Wolfram Cloud and the image processing framework in the Wolfram Language.

Fingerprint analysis

As Dwivedi points out, computational fingerprint analysis has been relevant in the field of criminology for a long time, but its applications are growing. For example, we now see it used in the fingerprint recognition functionality of smartphones. His example shows how the Wolfram Language makes it possible to detect fingerprints accurately, without specialized technology unavailable to the public.
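As a hedged sketch of one building block such an application might use (this is not Dwivedi's actual code), generic keypoint matching between two images takes only a line of Wolfram Language:

(* count corresponding keypoints between two fingerprint images as a crude similarity score *)
matchScore[img1_Image, img2_Image] := Length[First[ImageCorrespondingPoints[img1, img2]]]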

Bianca Eifert, a PhD student from the University of Giessen, has designed a Wolfram Cloud–based application to view various crystal structures from VASP files. Currently working on a PhD in theoretical solid-state physics, she shows how she created her application, and provides sample VASP file content for you to grab.

Crystal structure viewer for VASP

Eifert’s earlier staff favorite, “Crystallica: A Package to Plot Crystal Structures,” is another to check out if you’re interested in crystal structures. It uses the Crystallica application available in the Wolfram Library Archive.

Dutch artist Theo Jansen is known for creating kinetic sculptures. His Strandbeest creations are wind-powered walking structures, and Community member Sander Huisman has animated the anatomy of Jansen’s beach beasts.

Animated anatomy of Jansen's beach beasts

In his post, Huisman invites others to animate the Strandbeest walking over bumpy terrain. We can’t wait to see what members contribute to the discussion.

Wolfram’s own Ed Pegg recently shared a popular article, “Squeezing Pi from a Menger Sponge,” which was featured on the standupmaths YouTube channel.

In the Menger sponge construction, a step divides a cube into 27 cubes; then the center cube and the six cubes touching its faces are removed. In Pegg’s example, fractals of measure zero were taken and tweaked to get π. He asks if you can break the fractal into pieces and make a sphere. Be sure to share your response in the Community comment thread.
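For readers who want to try the construction themselves, here is a minimal sketch (my own, not Pegg's code) of the subdivision step, iterated twice:

(* split a cube into 27 subcubes and drop the center cube plus the 6 face-adjacent cubes *)
mengerStep[Cuboid[lo_, hi_]] := Module[{d = (hi - lo)/3},
  Flatten@Table[
    If[Count[{i, j, k}, 1] >= 2, Nothing,
     Cuboid[lo + d {i, j, k}, lo + d ({i, j, k} + 1)]],
    {i, 0, 2}, {j, 0, 2}, {k, 0, 2}]]
Graphics3D[Nest[Flatten[mengerStep /@ #] &, {Cuboid[{0, 0, 0}, {1, 1, 1}]}, 2]]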

These impressive examples are just a sampling of the inventive things being done with the Wolfram Language. Visit Wolfram Community and subscribe to Staff Picks notifications for updates on all posts selected by our editorial team.

Special Event: Computational Thinking with Wolfram|Alpha
Rob Morris | June 9, 2016 | http://blog.wolfram.com/2016/06/09/special-event-computational-thinking-with-wolframalpha/

Last month marked the seventh anniversary of Wolfram|Alpha. Since its launch, Wolfram|Alpha has earned a reputation as an indispensable tool for learning math and many other topics. We have been continually adding new content and capabilities to Wolfram|Alpha, and now we want to show you how it can be used to support computational thinking in any classroom.

We invite you to join us at a special virtual event, Wolfram|Alpha in Your Classroom: Virtual Workshop for Educators, on June 15, 2016, 2–3pm US EDT (6–7pm GMT). Come see examples of how Wolfram|Alpha’s built-in data and analysis capabilities can be used to enrich many types of classes, and take the opportunity to preview upcoming tools from Wolfram that will make teaching and learning easier.

Special event: Wolfram|Alpha in Your Classroom: Virtual Workshop for Educators

During the workshop, we will explore Wolfram Problem Generator. Problem Generator has the ability to generate unlimited practice problems for topics ranging from arithmetic to calculus. You can instantly create individual problems or entire problem sets, complete with answers. And if you are stuck trying to solve a given problem, detailed step-by-step solutions are available. While Problem Generator has been around for a while, it is currently only available to Wolfram|Alpha Pro subscribers. Soon, however, this will change, and you’ll be able to practice for free.

Another big upcoming change is a brand-new set of tools called Web Apps. Web Apps help you perform complex queries and calculations in all kinds of subjects, making it easy to solve difficult integrals, systems of equations, and lots of other problems. Web Apps aren’t just limited to math and science, though—there are also Web Apps to help with everyday topics, like finding out how many calories you ate at lunch, how long you should stay out in the sun, or what all the anagrams of the word “smile” are.

In addition to these new tools, we will be providing example lesson materials that highlight some of the exciting possibilities for using Wolfram|Alpha in the classroom. These materials draw upon Wolfram|Alpha’s massive collection of built-in data, as well as the ability to upload your own datasets. And instead of coding tedious algorithms, we will analyze the data instantly using both Wolfram|Alpha’s automatic data analysis capabilities and natural language queries.

To see these new features and learn more about using Wolfram|Alpha in your classroom, please register here to join us at the virtual event—again, it’s on June 15, 2016, 2–3pm US EDT (6–7pm GMT). All are welcome, and no programming experience is necessary! This event is part of a series of workshops for educators, which cover topics like how to use Wolfram Programming Lab and teaching computational thinking principles. Recordings of previous events are also available.

What Do Gravitational Crystals Really Look (i.e. Move) Like?
Michael Trott | June 2, 2016 | http://blog.wolfram.com/2016/06/02/what-do-gravitational-crystals-really-look-i-e-move-like/

In a recent blog post, Stephen Wolfram discusses the idea of what he calls “gravitational crystals.” These are infinite arrays of gravitational bodies in periodic motion. Two animations of mesmerizing movements of points were given as examples of what gravitational crystals could look like, but no explicit orbit calculations were given.

In this post, I will carefully calculate explicit numerical examples of gravitational crystal movements. The “really” in the title should be interpreted as a high-precision, numerical solution to an idealized model problem. It should not be interpreted as “real world.” No retardation, special or general relativistic effects, stability against perturbation, tidal effects, or so on are taken into account in the following calculations. More precisely, we will consider the simplest case of a gravitational crystal: two gravitationally interacting, rigid, periodic 2D planar arrays of masses embedded in 3D (meaning a 1/distance² force law) that can move translationally with respect to each other (no rotations between the two lattices). Each infinite array can be considered a crystal, so we are looking at what could be called the two-crystal problem (parallel to, and at the same time in distinction to, the classical gravitational two-body problem).

Crystals in motion

Crystals have been considered for centuries as examples of eternal, never-changing objects. Interestingly, various other time-dependent versions of crystals have been suggested over the last few years. Shapere and Wilczek suggested space-time crystals in 2012, and Boyle, Khoo, and Smith suggested so-called choreographic crystals in 2014.

In the following, I will outline the detailed asymptotic calculation of the force inside a periodic array of point masses and the numerical methods to find periodic orbits in such a force field. Readers not interested in the calculation details should fast-forward to the interactive demonstration in the section “The resulting gravitational crystals.”

The force of a square grid of masses

Within an infinite crystal-like array of point masses, no net force is exerted on any of the point masses due to symmetry cancellation of the forces of the other point masses. This means we can consider the whole infinite array of point masses as rigid. But the space between the point masses has a nontrivial force field.

To calculate orbits of masses, we will have to solve Newton’s famous equation of motion. So, we need the force of an infinite array of 1/r potentials. We will consider the simplest possible case, namely a square lattice of point masses with lattice constant L. The force at a point {x,y} is given by the following double sum:

The force at a point {x,y}
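In Wolfram Language notation, the formal double sum reads roughly as follows (a sketch with G, the lattice masses, and the test mass set to 1; as noted below, it does not evaluate to a closed form and converges far too slowly for term-by-term summation):

(* force on a unit test mass at {x, y} from unit point masses at all lattice sites {i L, j L} *)
latticeSumForce[{x_, y_}, L_] :=
 Sum[-({x, y} - {i L, j L})/Norm[{x, y} - {i L, j L}]^3,
  {i, -Infinity, Infinity}, {j, -Infinity, Infinity}]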

Unfortunately, we can’t sum this expression in closed form. Using the sum of the potential is not easier either; it has the additional complication that the potential sum diverges. (Deriving and subtracting the leading divergent term is possible, though: if we truncate the sums at ±M, we have a linearly divergent term 8 M sinh⁻¹(1).)

Truncating the sums at ±<em>M</em>

So one could consider a finite 2D array of (2M+1)×(2M+1) point masses in the limit M→∞.

Finite 2D array of (2M+1)×(2M+1)

But the convergence of the double sum is far too slow to get precise values for the force. (We want the orbit periodicity to be correct to, say, 7 digits. This means we need to solve the differential equation to about 9 digits, and for this we need the force to be correct to at least 12 digits.)

Comparing force values for various lattice truncations

Because the force is proportional to 1/distance², and the number of point masses grows with distance squared, taking all points into account is critical for a precise force value. Any approximation can’t make use of a finite number of point masses, but must instead include all point masses.

Borrowing some ideas from York and Wang, Lindbo and Tornberg, and Bleibel for calculating the Madelung constant to high precision, we can make use of one of the most popular integrals in mathematics.

One of the most popular math integrals
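The integral meant here is presumably the Gaussian representation of 1/r that underlies Ewald-type summation; it is easy to verify directly:

(* 1/r == (2/Sqrt[Pi]) Integrate[Exp[-r^2 t^2], {t, 0, Infinity}] for r > 0 *)
Assuming[r > 0, (2/Sqrt[Pi]) Integrate[Exp[-r^2 t^2], {t, 0, Infinity}]]
(* returns 1/r *)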

This allows us to write the force as:

Writing the force as an expression

Exchanging integration and summation, we can carry out the double sum over all (2∞+1)² lattice points in terms of elliptic theta functions.

Double sum over all lattice points in terms of elliptic theta functions
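The underlying resummation is presumably the Poisson-type identity that turns a Gaussian lattice sum into a theta function; here is a quick numerical sanity check with illustrative parameter values (a > 0):

(* Sum[Exp[-a (n + x)^2], {n, -Infinity, Infinity}] == Sqrt[Pi/a] EllipticTheta[3, Pi x, Exp[-Pi^2/a]] *)
With[{a = 1.3, x = 0.27},
 {NSum[Exp[-a (n + x)^2], {n, -Infinity, Infinity}],
  Sqrt[Pi/a] EllipticTheta[3, Pi x, Exp[-Pi^2/a]]}]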

Here we carry out the gradient operation under the integral sign:

Gradient operation under the integral sign

We obtain the following converging integral:

Obtaining a converging integral

While the integral does converge, numerical evaluation is still quite time consuming, and is not suited for a right-hand-side calculation in a differential equation.

Timing a force calculation

Now let’s remind ourselves about some properties of the Jacobi elliptic theta function ϑ₃. The two properties of relevance to us are its sum representation and its inversion formula.

Sum representation and inversion formula for the Jacobi elliptic theta function ϑ₃

The first identity shows that for t→0, the theta function (and its derivative) vanishes exponentially. The second identity shows that exponential decay can also be achieved at t→∞.

Using the sum representation, we can carry out the t integration in closed form after splitting the integration interval in two parts. As a result, we obtain for the force a sum representation that is exponentially convergent.

After some lengthy algebra, as one always says (which isn’t so bad when using the Wolfram Language, but is still too long for this short note), one obtains a formula for the force when using the above identities for ϑ₃ and similar identities for ϑ₃′. Here is the x component of the force. Note that M is now the limit of the sum representation of the elliptic theta function, not the size of the point mass lattice. The resulting expression for the force components is pretty large, with a leaf count of nearly 4,000. (Open the cell in the attached notebook to see the full expression.)

leaf count = 3744

Here is a condensed form for the force in the x direction that uses the abbreviation rᵢⱼ = (x + i L)² + (y + j L)²:

Condensed form of the force

Truncating the exponentially convergent sums shows that truncation at around 5 terms gives about 17 correct digits for the force.

Truncation at around 5 terms gives about 17 correct digits for the force

The convergence speed is basically independent of the position {x, y}. In the next table, we use a point on the diagonal near to the point mass at the origin of the coordinate system.

Point on the diagonal near to the point mass at the origin of the coordinate system

For points near a point mass, we recover, of course, the 1/distance² law.

Radial expansion of the force

For an even faster numerical calculation of the force, we drop higher-order terms in the double sums and compile the force.

Dropping higher-order terms in the double sums to compile the force

All digits of the force are correct to machine precision.

Numerical force computation

And the calculation of a single force value takes about a tenth of a millisecond, which is well suited for further numerical calculations.

Timing of numerical force computation

For further use, we define the function forceXY, which returns the 2D force vector for approximate (machine-precision) position values.

Definition of force computation
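The post's actual forceXY is built from the exponentially convergent theta-function formula derived above. As a crude, self-contained stand-in that the sketches below can call (lattice constant L = 1, G and the masses set to 1; far less accurate than the real implementation), one can compile a brute-force truncation of the lattice sum:

forceXY = Compile[{{x, _Real}, {y, _Real}},
   Module[{fx = 0., fy = 0., dx, dy, r3},
    Do[
     dx = x - i; dy = y - j;
     r3 = (dx^2 + dy^2)^(3/2);
     fx -= dx/r3; fy -= dy/r3,        (* attraction toward the mass at {i, j} *)
     {i, -50, 50}, {j, -50, 50}];
    {fx, fy}]];
forceXY[0.25, 0.4]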

The space of possible orbits

So now that we have a fast-converging series expansion for the force for the full infinite array of point masses, we are in good shape to calculate orbits.

The simplest possible situation is two square lattices of identical lattice spaces with the same orientation, moving relative to each other. In this situation, every point mass of lattice 1 experiences the same cumulative force from lattice 2, and vice versa. And within each lattice, the total force on each point mass vanishes because of symmetry.

Similar to the well-known central force situation, we can also separate the center of mass from the relative motion. The result is the equation of motion for a single mass point in the field of one lattice.

Here is a plot of the magnitude of the resulting force.

Plot of the magnitude of the resulting force

And here is a plot of the direction field of the force. The dark red dots symbolize the positions of the point masses.

Plot of the direction field of the force

How much does the field strength of the periodic array differ from the field strength of a single point mass? The following graphic shows the relative difference. On the horizontal and vertical lines in the middle of the rows and columns of the point masses, the difference is maximal. Due to the singularity of the force at the point masses, the force of a single point mass and that of the lattice become identical in the vicinity of a point mass.

Difference between field strength of the periodic array and the field strength of a single point mass

The next plot shows the direction field of the difference between a single point mass and the periodized version.

Plot showing the direction field of the difference between a single point mass and the periodized version

Once we have the force field, numerically inverting the relation F(r) = −∇V(r) allows us (because the force is obviously conservative) to calculate the potential surface of the infinite square array of point masses.

Calculating the potential surface of the infinite square array of point masses

Now let us look at actual orbits in the potential shown in the last two images.

The following Manipulate allows us to interactively explore the motion of a particle in the gravitational field of the lattice of point masses.

Manipulate exploring the motion of a particle in the gravitational field of the lattice of point masses
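Here is a hedged reconstruction of such a Manipulate, built on the forceXY stand-in above (not the post's exact code; control ranges are illustrative):

(* black-box force for NDSolve: evaluates only for numeric 2D positions *)
latticeForce[p_ /; VectorQ[p, NumericQ]] := forceXY @@ p;
orbit[p0_, v0_, T_] := NDSolveValue[
   {r''[t] == latticeForce[r[t]], r[0] == p0, r'[0] == v0}, r, {t, 0, T}];
Manipulate[
 ParametricPlot[orbit[p0, v0, T][t], {t, 0, T}, PlotRange -> 2],
 {{p0, {0.5, 0.25}}, {-1, -1}, {1, 1}},
 {{v0, {0.6, 0.}}, {-2, -2}, {2, 2}},
 {{T, 4}, 1, 20}]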

The relatively large (five-dimensional) space of possible orbits becomes more manageable if we look especially for some symmetric orbits, e.g. we enforce that the orbit crosses the line x = 0 or x = 1/2 horizontally. Many orbits that one would intuitively expect to exist that move around 1, 2, or 3 point masses fall into this category. We use a large 2D slider to allow a more fine-grained control of the initial conditions.

Manipulate of orbits with restricted initial conditions

Another highly symmetric situation is a starting point along the diagonal with an initial velocity perpendicular to it.

Manipulate of orbits with restricted initial conditions

Finding periodic orbits

For the desired motion we are looking for, we demand that after a period, the particle comes back to either its original position with its original velocity vector or has moved to an equivalent lattice position.

Given an initial position, velocity, mass, and approximate period, it is straightforward to write a simple root-finding routine to zoom into an actual periodic orbit. We implement this simply by solving the differential equation for a time greater than the approximate orbit time, and find the time where the sum |x_i − x_f| + |v_i − v_f| of the differences between the initial and final positions (x_i and x_f) and the initial and final velocities (v_i and v_f) is minimal. The function findPeriodicOrbit carries out the search. This method is well suited for orbits whose periods are not too long. This will yield a nice collection of orbits. For longer orbits, errors in the solution of the differential equation will accumulate, and more specialized methods could be employed, e.g. relaxation methods.

Given some starting values, findPeriodicOrbit attempts to find a periodic orbit, and returns the corresponding initial position and velocity.

findPeriodicOrbit attempting to find a periodic orbit

Given initial conditions and a maximal solution time, the function minReturnData determines the exact time at which the differences between the initial and final positions and velocities are minimal. The most time-consuming step in the search process is the solution of the differential equation. To avoid repeating work, we do not include the period time as an explicit search variable, but rather solve the differential equation for a fixed time T and then carry out a one-dimensional minimization to find the time at which the sum of the position and velocity differences becomes minimal.

One-dimensional minimization to find the time at which the sum of the position and velocity differences becomes minimal
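A sketch of this return-distance computation, again using the stand-ins defined above (function names, the search window, and the starting point for the minimization are illustrative):

(* solve for time T, then find the time in [T/2, T] minimizing |r(t)-r(0)| + |r'(t)-r'(0)| *)
minReturnData[p0_, v0_, T_] := Module[{r, t, tmin},
  r = NDSolveValue[{r''[t] == latticeForce[r[t]], r[0] == p0, r'[0] == v0},
    r, {t, 0, T}];
  tmin = t /. Last[FindMinimum[
      {Norm[r[t] - p0] + Norm[r'[t] - v0], T/2 <= t <= T}, {t, 0.75 T}]];
  {tmin, Norm[r[tmin] - p0] + Norm[r'[tmin] - v0]}]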

As the search will take about a minute per orbit, we monitor the current orbit shape to entertain us while we wait. Typically, after a couple hundred steps we either find a periodic orbit, or we know that we failed to find a periodic orbit. In the latter case, the local minimum of the function to be minimized (the sum of the norms of initial versus final positions and velocities) has a finite value and so does not correspond to a periodic orbit.

Here is a successful search for a periodic orbit. We get the initial conditions for the search either interactively from the above Manipulate or from a random search that selects viable candidate initial conditions.

Search for a periodic orbit

Here is a successful search for an orbit that ends at an equivalent lattice position.

Search for an orbit that ends at an equivalent lattice position

So what kind of periodic orbits can we find? As the result of about half a million solutions with random initial positions, velocities, masses, and solution times of the equations of motion, we find the following types of solutions:

    1. Closed orbits around a single point mass

    2. Closed orbits around a finite number (≥ 2) of point masses

    3. “Traveling orbits” that don’t return to the initial position but to an equivalent position in another lattice cell

(In this classification, we ignore “head-on” collision orbits and separatrix-like orbits along the symmetry lines between rows and columns of point masses.)

Here is a collection of initial values and periods for periodic orbits found in the carried-out searches. The small summary table gives the counts of the orbits found.

Initial values and periods for periodic orbits found in the carried-out searches
Summary table giving the counts of the orbits found

Using minReturnDistance, we can numerically check the accuracy of the orbits. At the “return time” (the last element of the sublists of orbitData), the sum of the differences of the position and velocity vectors is quite small.

Using minReturnDistance to numerically check the accuracy of orbits

Now let’s make some graphics showing the orbits from the list orbitData using the function showOrbit.

Making graphics showing orbits using showOrbit from the list orbitData

1. Orbits around a single point mass

In the simplest case, these are just topologically equivalent to a circle. This type of solution is not unexpected; for initial conditions close to a point mass, the influence of the other lattice point masses will be small.

Orbits around a single point mass

2. Orbits around two point masses

In the simplest case, these are again topologically equivalent to a circle, but more complicated orbits exist. Here are some examples.

Orbits around two point masses

3. “Traveling orbits” (open orbits) that don’t return to the initial position but to an equivalent position in another lattice cell

These orbits come in self-crossing and non-self-crossing versions. Here are some examples.

Self-crossing and non-self-crossing versions of orbits

Individually, the open orbits look quite different from the closed ones. When plotting the continuations of the open orbits, their relation to the closed orbits becomes much more obvious.

Plotting continuations of open orbits

For instance, the following open orbit reminds me of the last closed orbit.

Showing multiple closed orbits

The last graphic suggests that closed orbits around a finite number of points could become traveling orbits after small perturbations by “hopping” from a closed orbit around a single or finite number of point masses to the next single or finite group of point masses.

But there are also situations where one intuitively might expect closed orbits to exist, but numerically one does not succeed in finding a precise solution. One example is a simple rounded-corner, triangle-shaped orbit that encloses three point masses.

Simple rounded-corner, triangle-shaped orbit that encloses three point masses

Showing 100 orbits with slightly disturbed initial conditions gives an idea of why a smooth match of the initial point and the final point does not work out. While we can make the initial and final point match, the velocity vectors do not agree in this case.

Family of nearly closed orbits

Another orbit that seems not to exist, although one can make the initial and final points and velocities match pretty well, is the following double-slingshot orbit. But reducing the residue further by small modifications of the initial position and velocity seems not to be possible.

Data for the slingshot orbit

Here are a third and fourth type of orbit that nearly match up, but the function findPeriodicOrbit can’t find parameters that bring the difference below 10⁻⁵.

Data for the coathanger orbit

Here are two graphics of the last two orbits.

Three examples of open orbits

There are many more periodic orbits. The above is just a small selection of all possible orbits. Exploring a family of trajectories at once shows the wide variety of orbits that can arise. We let all orbits start at the line segment {{x, 1/2}|-1/2 ≤ x ≤ 1/2} with an angle α(x) = 𝜋(1/2-|x|).

Manipulate of families of orbits

If we plot sufficiently many orbits and select the ones that do not move approximately uniformly, we can construct an elegant gravitational crystal church.

Gravitational crystal church

The last image nicely shows the “branching” of the trajectories at point masses where the overall shape of the trajectory changes discontinuously. Displaying the flow in the three-dimensional x-t-y space shows the branching even better.

Displaying the flow in the three-dimensional x-t-y space

General trajectories

We were looking for concrete periodic orbits in the field of an infinite square array of point masses. For more general results on trajectories in such a potential, see Knauf, who proves that the average behavior of the orbits is diffusive. Periodic orbits are the exception in the space of initial conditions. Almost all orbits will wander around randomly. So let’s have a quick look at a larger number of orbits. The following calculation will take about six hours, and evaluates the final points and velocities of masses starting at {x,0.5} with a velocity {0,v} on a dense x-v grid with 0 ≤ x ≤ 1 and 1 ≤ v ≤ 3.

Calculating diffusive trajectories
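A toy version of this scan on a much coarser grid and with a much shorter integration time might look like this (grid spacings and the time span are illustrative; trajectories passing very close to a point mass may trigger NDSolve precision warnings):

finalStates = Table[
   Module[{r, t},
    r = NDSolveValue[{r''[t] == latticeForce[r[t]], r[0] == {x, 0.5},
       r'[0] == {0, v}}, r, {t, 0, 10}];
    {r[10], r'[10]}],
   {x, 0., 1., 0.05}, {v, 1., 3., 0.1}];
ListPlot[Flatten[finalStates[[All, All, 1]], 1], AspectRatio -> Automatic]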

If we display all final positions, we get the following graphic that gives a feeling of the theoretically predicted diffusive behavior of the orbits.

Displaying all final positions

While diffusive on average, as we are solving a differential equation, we expect the final positions to depend (at least piecewise) continuously on the initial conditions. So we burn another six hours of CPU time and calculate the final positions of 800,000 test particles that start radially from a circle around a lattice point mass. (Because of the symmetry of the force field, we have only 100,000 different initial value problems to solve numerically.)

Calculating diffusive trajectories

Here are the points of the final positions of the 800,000 points. We again see how nicely the point masses of the lattice temporarily deflect the test masses.

Points of the final positions of the 800,000 points

We repeat a variation of the last calculation and determine the minimum value of |x_i − x_f| + |v_i − v_f| in the x-v plane, where x and v are the initial conditions of the particle starting at y = 0.5 moving perpendicularly upward.

Calculation of phase-space differences

We solve the equations of motion for 0 ≤ t ≤ 2.5 and display the minimum value of |x_i − x_f| + |v_i − v_f| in the time range 0.5 ≤ t ≤ 2.5. If the minimum occurs for t=0.5, we use a light gray color; if the minimum occurs for t=2.5, a dark gray color; and for 0.5 < t < 2.5, we color the sum of norms from pink to green. Not unexpectedly, the distance sum shows a fractal-like behavior, meaning the periodic orbits form a thinly spaced subset of initial conditions.

Visualization of phase-space distances

A (2D) grain of salt

Now that we have the force field of a square array of point masses, we can also use this force to model electrostatic problems, as these obey the same force law.

Identical charges would form a Wigner crystal, which is hexagonal. Two interlaced square lattices of opposite charges would make a model for a 2D NaCl salt crystal.

2D NaCl salt crystal

By summing the (signed) forces of the four sublattices, we can again calculate the resulting force on a test particle.

Calculating the resulting force of a test particle
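A sketch of this signed superposition, built on the forceXY stand-in from above: the checkerboard of charges is modeled as four square sublattices of lattice constant 2 with offsets {0,0}, {1,0}, {0,1}, {1,1} and signs +, −, −, + (the offsets and the overall normalization are my assumptions):

offsets = {{0, 0}, {1, 0}, {0, 1}, {1, 1}};
signs = {1, -1, -1, 1};
(* a square lattice of constant 2 exerts, at point p, the force (1/4) forceXY[p/2] by scaling *)
saltForce[p_ /; VectorQ[p, NumericQ]] :=
  Sum[signs[[k]]/4 (forceXY @@ ((p - offsets[[k]])/2)), {k, 4}]
saltForce[{0.3, 0.1}]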

The trajectories of a test charge become more irregular as compared with the gravitational model considered above. The following Manipulate allows us to get a feeling for the resulting trajectories. The (purple) test charge is attracted to the green charges of the lattice and repelled from the purple charges of the lattice.

Trajectories of a test charge becoming more irregular

The resulting gravitational crystals

We can now combine all the elements together to visualize the resulting gravitational crystals. We plot the resulting lattice movements in the reference frame of one lattice (the blue lattice). The red lattice moves with respect to the blue lattice.

Lattice point orbits in gravitational crystals

Summary

Using detailed numerical calculations, we verified the existence of the suggested gravitational crystals. For the simplest case, two square lattices, many periodic orbits of small period were found. More extensive searches would surely return more, longer-period solutions.

Using the general form of the Poisson summation formula for general lattices, the above calculations could be extended to different lattices, e.g. hexagonal lattices or 3D lattices.

Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.

2016 Wolfram European Technology Tour Dates and Locations
Jon McLoone | May 26, 2016 | http://blog.wolfram.com/2016/05/26/2016-wolfram-european-technology-tour-dates-and-locations/

Following three years of successful European Wolfram Technology Conferences in Frankfurt, we decided to do things a bit differently this year and bring the conference to you.

Over a span of five days from June 6 to 10, we will be running a series of one-day mini conferences in London, Zurich, Berlin, Eindhoven, and Warsaw.

Wolfram European Technology Tour cities

By decreasing everyone’s travel time, reducing the time commitment to one day, and slashing the cost of the events, we hope that more people than ever will have the chance to meet other users, share their experiences, talk to our developers, and hear about the latest technology.

I’ll be opening each event, and will take the opportunity to give some of the first public glimpses of Mathematica 11. There will be talks on machine learning, cloud computing, the Internet of Things, network analysis, and much more. The set of talks will be different at each event as developers and local users join us to present their insights, so check out the schedule for your local event.

The events are only a couple of weeks away! Sign up now to reserve your spot.

An Exact Value for the Planck Constant: Why Reaching It Took 100 Years
Michael Trott | May 19, 2016 | http://blog.wolfram.com/2016/05/19/an-exact-value-for-the-planck-constant-why-reaching-it-took-100-years/
Blog communicated on behalf of Jean-Charles de Borda.

Some thoughts for World Metrology Day 2016

Please allow me to introduce myself
I’m a man of precision and science
I’ve been around for a long, long time
Stole many a man’s pound and toise
And I was around when Louis XVI
Had his moment of doubt and pain
Made damn sure that metric rules
Through platinum standards made forever
Pleased to meet you
Hope you guess my name

Introduction and about me

In case you can’t guess: I am Jean-Charles de Borda, sailor, mathematician, scientist, and member of the Académie des Sciences, born on May 4, 1733, in Dax, France. Two weeks ago would have been my 283rd birthday. This is me:

Jean-Charles de Borda

In my hometown of Dax there is a statue of me. Please stop by when you visit. In case you do not know where Dax is, here is a map:

Map of Dax and statue of Jean-Charles de Borda

In Europe when I was a boy, France looked basically like it does today. We had a bit less territory on our eastern border. On the American continent, my country owned a good fraction of land:

France and French territory in America in 1733

I led a diverse earthly life. At 32 years old I carried out a lot of military and scientific work at sea. As a result, in my forties I commanded several ships in the Seven Years’ War. Most of the rest of my life I devoted to the sciences.

But today nobody even knows where my grave is, as my physical body died on February 19, 1799, in Paris, France, in the upheaval of the French Revolution. (Of course, I know where it is, but I can’t communicate it anymore.) My name is the twelfth listed on the northeast side of the Eiffel Tower:

Borda listed on the northeast side of the Eiffel Tower

Over the centuries many of my fellow Frenchman who joined me up here told me that I deserved a place in the Panthéon. But you will not find me there, nor at the Père Lachaise, Montparnasse, or Montmartre cemeteries.

But this is not why I still cannot rest in peace. I am a humble man; it is the kilogram that keeps me up at night. But soon I will be able to rest in peace at night for all time and approach new scientific challenges.

Let me tell you why I will soon find a good night’s sleep.

All my life, I was into mathematics, geometry, physics, and hydrology. And overall, I loved to measure things. You might have heard of substitution weighing (also called Borda’s method)—yes, this was my invention, as was the Borda count method. I also substantially improved the repeating circle. Here is where the story starts. The repeating circle was crucial in making a high-precision determination of the size of the Earth, which in turn defined the meter. (A good discussion of my circle can be found here.)

Repeating circle

I lived in France when it was still a monarchy. Times were difficult for many people—especially peasants—partially because trade and commerce were difficult due to the lack of measures all over the country. If you enjoy reading about history, I highly recommend Kula’s Measures and Men to understand the weights and measurements situation in France in 1790. The state of the weights and measures was similar in other countries; see for instance Johann Georg Tralles’ report about the situation in Switzerland.

In August 1790, I was made the chairman of the Commission of Weights and Measures as a result of a 1789 initiative from Louis XVI. (I still find it quite miraculous that 1,000 years after Charlemagne’s initiative to unify weights and measures, the next big initiative in this direction would be started.) Our commission created the metric system that today is the International System of Units, often abbreviated as SI (le Système international d’unités in French).

In the commission were, among others, Pierre-Simon Laplace (think the Laplace equation), Adrien-Marie Legendre (Legendre polynomials), Joseph-Louis Lagrange (think Lagrangian), Antoine Lavoisier (conservation of mass), and the Marquis de Condorcet. (I always told Adrien-Marie that he should have some proper portrait made of him, but he always said he was too busy calculating. But for 10 years now, math books have at least stopped using the politician Louis Legendre’s portrait in place of Adrien-Marie’s. Over the last decades, Adrien-Marie befriended Jacques-Louis David, and Jacques-Louis has made a whole collection of paintings of Adrien-Marie; unfortunately, mortals will never see them.) Lagrange, Laplace, Monge, Condorcet, and I were on the original team. (And, in the very beginning, Jérôme Lalande was also involved; later, some others were as well, such as Louis Lefèvre‑Gineau.)

Portraits of Pierre-Simon Laplace, Adrien-Marie Legendre, Joseph-Louis Lagrange, Antoine Lavoisier, and Marquis de Condorcet

Three of us (Monge, Lagrange, and Condorcet) are today interred or commemorated at the Panthéon. It is my strong hope that Pierre-Simon is one day added; he really deserves it.

As I said before, things were difficult for French citizens in this era. Laplace wrote:

The prodigious number of measures in use, not only among different people, but in the same nation; their whimsical divisions, inconvenient for calculation, and the difficulty of knowing and comparing them; finally, the embarrassments and frauds which they produce in commerce, cannot be observed without acknowledging that the adoption of a system of measures, of which the uniform divisions are easily subjected to calculation, and which are derived in a manner the least arbitrary, from a fundamental measure, indicated by nature itself, would be one of the most important services which any government could confer on society. A nation which would originate such a system of measures, would combine the advantage of gathering the first fruits of it with that of seeing its example followed by other nations, of which it would thus become the benefactor; for the slow but irresistible empire of reason predominates at length over all national jealousies, and surmounts all the obstacles which oppose themselves to an advantage, which would be universally felt.

All five of the mathematicians (Monge, Lagrange, Laplace, Legendre, and Condorcet) have made historic contributions to mathematics. Their names are still used for many mathematical theorems, structures, and operations:

Monge, Lagrange, Laplace, Legendre, and Condorcet's contributions to mathematics
Monge, Lagrange, Laplace, Legendre, and Condorcet's contributions to mathematics

In 1979, Ruth Inez Champagne wrote a detailed thesis about the influence of my five fellow citizens on the creation of the metric system. For Legendre’s contribution especially, see C. Doris Hellman’s paper. Today it seems to me that most mathematicians no longer care much about units and measures and that physicists are the driving force behind advancements in units and measures. But I did like Theodore P. Hill’s arXiv paper about the method of conflations of probability distributions that allows one to consolidate knowledge from various experiments. (Yes, before you ask, we do have instant access to arXiv up here. Actually, I would say that the direct arXiv connection has been the greatest improvement here in the last millennium.)

Our task was to make standardized units of measure for time, length, volume, and mass. We needed measures that were easily extensible, and could be useful for both tiny things and astronomic scales. The principles of our approach were nicely summarized by John Quincy Adams, Secretary of State of the United States, in his 1821 book Report upon the Weights and Measures.

Excerpt from John Quincy Adams' Report upon Weights and Measures

Originally we (we being the metric men, as we call ourselves up here) had suggested just a few prefixes: kilo-, deca-, hecto-, deci-, centi-, milli-, and the no-longer-used myria-. In some old books you can find the myria- units.

We had the idea of using prefixes quite early in the process of developing the new measurements. Here are our original proposals from 1794:

Excerpts of original proposals from 1794

Side note: in my time, we also used the demis and the doubles, such as a demi-hectoliter (=50 liters) or a double dekaliter (=20 liters).

As inhabitants of the twenty-first century know, times, lengths, and masses are measured in physics, chemistry, and astronomy over ranges spanning more than 50 orders of magnitude. And the units we created in the tumultuous era of the French Revolution stood the test of time:

Orders of magnitude plots for length and area


In the future, the SI might need some more prefixes. In a recent LIGO discovery, the length of the interferometer arms changed on the order of 10 yoctometers. Yoctogram-resolution mass sensors exist. One yoctometer equals 10⁻²⁴ meter. Mankind can already measure tiny forces on the order of zeptonewtons.

On the other hand, astronomy needs prefixes larger than 10²⁴. One day, these prefixes might become official.

Proposed prefixes larger than 10^24

I am a man of strict rules, and it drives me nuts when I see people in the twenty-first century not obeying the rules for using SI prefixes. Recently I saw somebody writing on a whiteboard that a year is pretty much exactly 𝜋 dekamegaseconds (𝜋 daMs):

1 year approximately pi dekamegaseconds

While it’s a good approximation (only 0.4% off), when will this person learn that one shouldn’t concatenate prefixes?

The technological progress of mankind has occurred quickly in the last two centuries. And mega-, giga-, tera- or nano-, pico-, and femto- are common prefixes in the twenty-first century. Measured in meters per second, here is the probability distribution of speed values used by people. Some speeds (like speed limits, the speed of sound, or the speed of light) are much more common than others, but many local maxima can be found in the distribution function:

Probability distribution of speed values used by people

Here is the report we delivered in March of 1791 that started the metric system and gave the conceptual meaning of the meter and the kilogram, signed by myself, Lagrange, Laplace, Monge, and Concordet (now even available through what the modern world calls a “digital object identifier,” or DOI, like 10.3931/e-rara-28950):

Report from 1791 that started the metric system and gave conceptual meaning of the meter and kilogram

Today most people think that base 10 and the meter, second, and kilogram units are intimately related. But only on October 27, 1790, did we decide to use base 10 for subdividing the units. We were seriously considering a base-12 subdivision, because the divisibility by 2, 3, 4, and 6 is a nice feature for trading objects. It is clear today, though, that we made the right choice. Lagrange’s insistence on base 10 was the right thing. At the time of the French Revolution, we made no compromises. On November 5, 1792, I even suggested changing clocks to a decimal system. (d’Alembert had suggested this in 1754; for the detailed history of decimal time, see this paper.) Mankind was not ready yet; maybe in the twenty-first century decimal clocks and clock readings will finally be recognized as much better than 24 hours, 60 minutes, and 60 seconds. I loved our decimal clocks—they were so beautiful. So it’s a real surprise to me today that mankind still divides the right angle into 90 degrees. In my repeating circle, I was dividing the right angle into 100 grades.

We wanted to make the new (metric) units truly equal for all people, not base them, for instance, on the length of the forearm of a king. Rather, “For all time, for all people” (“À tous les temps, à tous les peuples”). Now, in just a few years, this dream will be achieved.

And I am sure there will come the day where Mendeleev’s prediction (“Let us facilitate the universal spreading of the metric system and thus assist the common welfare and the desired future rapprochement of the peoples. It will come not yet, slowly, but surely.”) will come true even in the three remaining countries of the world that have not yet gone metric:

Countries that have not gone metric

The SI units have been legal for trade in the USA since the mid-twentieth century, when United States customary units became derived from the SI definitions of the base units. Citizens can choose which units they want for trade.

We also introduced the decimal subdivision of money, and our franc was in use from 1793 to 2002. At least today all countries divide their money on the basis of base 10—no coins with label 12 are in use anymore. Here is the coin label breakdown by country:

Coin label breakdown by country

We took the “all” in “all people” quite seriously, and worked with our archenemy Britain and the new United States (through Thomas Jefferson personally) together to make a new system of units for all the major countries in my time. But, as is still so often the case today, politics won over reason.

I died on February 19, 1799, just a few months before the completion of our group’s efforts. On June 22, 1799, my dear friend Laplace gave a speech about the finished efforts to build new units of length and mass before the new prototypes were delivered to the Archives of the Republic (where they are still today).

In case the reader is interested in my eventful life, Jean Mascart wrote a nice biography about me in 1919, and it is now available as a reprint from the Sorbonne.

From the beginnings of the metric system to today

Two of my friends, Jean Baptiste Joseph Delambre and Pierre Méchain, were sent out to measure distances in France and Spain from mountain to mountain to define the meter as one ten-millionth of the distance from the North Pole to the equator of the Earth. Historically, I am glad the mission was approved. Louis XVI was already under arrest when he approved the financing of the mission. My dear friend Lavoisier called their task “the most important mission that any man has ever been charged with.”

Pierre Méchain and Jean Baptiste Joseph Delambre

If you haven’t done so, you must read the book The Measure of All Things by Ken Alder. There is even a German movie about the adventures of my two old friends. Equipped with a special instrument that I had built for them, they did the work that resulted in the meter. We wanted the length of the meter to be one ten-millionth of the length of the half-meridian through Paris from pole to equator; I think even today this is conceptually a beautiful definition. We did not know at the time that the Earth isn’t quite as round as we had hoped, and a miscalculation of the flattening of the Earth resulted in a small, regrettable error of 0.2 mm. Here is the length of the half-meridian through Paris, expressed through meters along an ellipsoid that approximates the Earth:

Length of the half-meridian through Paris, expressed through meters along an ellipsoid that approximates the Earth
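One can redo this little computation directly in the Wolfram Language (a sketch; the longitude of the Paris Observatory, about 2.3364° E, and the default ellipsoid datum are my assumptions):

(* pole-to-equator distance along the meridian through the Paris Observatory *)
halfMeridian = GeoDistance[GeoPosition[{90, 2.3364}], GeoPosition[{0, 2.3364}]];
UnitConvert[halfMeridian, "Meters"]/10^7   (* close to, but not exactly, 1 meter *)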

If they had taken elevation into account (which they did not do; Delambre and Méchain would have had to travel the whole meridian to catch every mountain and hill!), and had used 3D coordinates (meaning including the elevation of the terrain) every few kilometers, they would have ended up with a meter that was 0.4 mm too short:

 Length of the meridian meter when taking elevation into account

Here is the elevation profile along the Paris meridian:

Elevation along the Paris meridian
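A coarse sketch of such a profile, sampling the meridian every 0.1° of latitude over roughly the Dunkirk-to-Barcelona arc (the longitude value is again an assumption):

latitudes = Range[41.4, 51.0, 0.1];
elevations = GeoElevationData[GeoPosition[{#, 2.3364}]] & /@ latitudes;
ListLinePlot[Transpose[{latitudes, QuantityMagnitude[elevations, "Meters"]}],
 AxesLabel -> {"latitude (°)", "elevation (m)"}]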

And the meter would be another 0.9 mm longer if measured with a yardstick the length of a few hundred meters:

Length of the meridian meter when taking detailed elevation into account

Because of the fractality of the Earth’s surface, an even smaller yardstick would have given an even longer half-meridian.

It’s more realistic to follow the sea-level height. The difference between the length of the sea-level meridian meter and the ellipsoid approximation meter is just a few micrometers:

Difference between the length of the sea-level meridian and the ellipsoid approximation meter

But at least the meridian had to go through Paris (not London, as some British scientists of my time proposed). But anyway, the meridian length was only a stepping stone to make a meter prototype. Once we had the meter prototype, we didn’t have to refer to the meridian anymore.

Here is a sketch of the triangulation carried out by Pierre and Jean Baptiste in their adventurous six-year expedition. Thanks to the internet and various French digitization projects, the French-speaking reader interested in metrology and history can now read the original results online and reproduce our calculations:

Reproducing the triangulation carried out by Pierre and Jean Baptiste

The part of the meridian through Paris (and especially through the Paris Observatory, marked in red) is today marked with the Arago markers—do not miss them during your next visit to Paris! François Arago remeasured the Paris meridian. After Méchain joined me up here in 1804, Laplace got the go-ahead (and the money) from Napoléon to remeasure the meridian and to verify and improve our work:

Plotting the meridian through Paris and the Arago markers

Plotting the meridian through Paris

The second we derived from the length of a year. And the kilogram as a unit of mass we wanted to (and did) derive from a liter of water. If any liquid is special, it is surely water. Lavoisier and I had many discussions about the ideal temperature. The two temperatures that stand out are 0 °C and 4 °C. Originally we were thinking about 0 °C, as it is easy to realize with ice water. But because of the maximal density of water at 4 °C, we later thought that would be the better choice. The switch to 4 °C was suggested by Louis Lefèvre-Gineau. The liter as a volume we defined in turn as the cube of one-tenth of a meter. As it turns out, compared with high-precision measurements of distilled water, 1 kg equals the mass of 1.000028 dm³ of water. The interested reader can find many more details of the process of the water measurements here and about making the original metric system here. A shorter history in English can be found in the recent book by Williams and the ten-part series by Chisholm.
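
That last number is easy to check. Here is a quick sanity check, assuming the often-quoted maximal density of water of about 999.972 kg/m³ near 4 °C:

    (* volume of water that weighs exactly one kilogram at maximal density *)
    UnitConvert[Quantity[1, "Kilograms"]/Quantity[999.972, "Kilograms"/"Meters"^3], "Liters"]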

I don’t want to brag, but we also came up with the name “meter” (derived from the Greek metron and the Latin metrum), which we suggested on July 11, 1792, as the name of the new unit of length. And then we had the are (= 100 m²) and the stere (= 1 m³).

And I have to mention this for historical accuracy: until I entered the heavenly spheres, I always thought our group was the first to carry out such an undertaking. How amazed and impressed I was when shortly after my arrival up here, I-Hsing and Nankung Yüeh introduced themselves to me and told me about their expedition from the years 721 to 725, more than 1,000 years before ours, to define a unit of length.

I am so glad we defined the meter this way. Originally the idea was to define the meter through a pendulum whose length would give a period of one second. But I didn’t want any potential change in the second to affect the length of the meter. While dependencies will be unavoidable in a complete unit system, they should be minimized.

Basing the meter on the Earth’s shape and the second on the Earth’s movement around the Sun seemed like a good idea at the time. Actually, it was the best idea that we could technologically realize at the time. We did not know how tides and time change the shape of the Earth, or how continents drift apart. We believed in the future of mankind and in ever-increasing measurement precision, but we did not know what concretely would change. Still, it all started with our initial steps toward precisely measuring distances in France. Today we have high-precision geopotential maps as high-order series of Legendre polynomials:

GeogravityModelData for the astronomical observatory in Paris

With great care, the finest craftsmen of my time melted platinum, and we forged a meter bar and a kilogram. It was an exciting time. Twice a week I would stop by Janety’s place when he was forging our first kilograms. Melting and forming platinum was still a very new process. And Janety, Louis XVI’s goldsmith, was a true master of forming platinum—to be precise, a spongelike eutectic made of platinum and arsenic. Just a few years earlier, on June 6, 1782, Lavoisier showed the melting of platinum in a hydrogen-oxygen flame to (the future) Tsar Paul I at a garden party at Versailles; Tsar Paul I was visiting Marie Antoinette and Louis XVI. And Étienne Lenoir made our platinum meter, and Jean Nicolas Fortin our platinum kilogram. For the reader interested in the history of platinum, I recommend the book by McDonald and Hunt.

Platinum is a very special metal; it has a high density and is chemically very inert. It is also not as soft as gold. The best kilogram realizations today are made from a platinum-iridium mixture (10% iridium), as adding iridium to platinum does improve its mechanical properties. Here is a comparison of some physical characteristics of platinum, gold, and iridium:

Comparison of physical characteristics of platinum, gold, and iridium
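
A small version of such a comparison can be pulled straight from the element data; the chosen properties are just an illustrative selection:

    (* density, melting point, and hardness of the three metals *)
    TableForm[
      Table[{el, ElementData[el, "Density"], ElementData[el, "MeltingPoint"],
          ElementData[el, "MohsHardness"]}, {el, {"Platinum", "Gold", "Iridium"}}],
      TableHeadings -> {None, {"element", "density", "melting point", "Mohs hardness"}}]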

This sounds easy, but at the time the best scientists spent countless hours calculating and experimenting to find the best materials, the best shapes, and the best conditions to define the new units. But both the new meter bar and the new kilogram cylinder were macroscopic bodies. And the meter bar has two markings of finite width. All macroscopic artifacts are difficult to transport (we developed special travel cases), and they change by very small amounts over a hundred years through usage, absorption, desorption, heating, and cooling. With the amazing technological progress of the nineteenth and twentieth centuries, measuring time, mass, and length with precisions better than one in a billion has become possible. And measuring time can even be done a billion times better.

I still vividly remember when, after we had made and delivered the new meter and the mass prototypes, Lavoisier said, “Never has anything grander and simpler and more coherent in all its parts come from the hands of man.” And I still feel so today.

Our goal was to make units that truly belonged to everyone. “For all time, for all people” was our motto. We put copies of the meter all over Paris to let everybody know how long it was. (If you have not done so, next time you visit Paris, make sure to visit the mètre étalon near to the Luxembourg Palace.) Here is a picture I recently found, showing an interested German tourist studying the history of one of the few remaining mètres étalons:

German tourist studying the history of one of the few remaining mètres étalons

It was an exciting time (even if I was no longer around when the committee’s work was done). Our units served many European countries well into the nineteenth and large parts of the twentieth century. We made the meter, the second, and the kilogram. Four more base units (the ampere, the candela, the mole, and the kelvin) have been added since our work. And with these extensions, the metric system has served mankind very well for 200+ years.

How the metric system took off after 1875, the year of the Metre Convention, can be seen by plotting how often the words kilogram, kilometer, and kilohertz appear in books:

How often the words kilogram, kilometer, and kilohertz appear in books
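
Such a plot can be sketched in a few lines, assuming the built-in, Google-books-derived word-frequency data:

    (* relative frequencies of the three words in books since the Metre Convention *)
    DateListLogPlot[
      WordFrequencyData[{"kilogram", "kilometer", "kilohertz"}, "TimeSeries", {1875, 2005}],
      PlotLegends -> {"kilogram", "kilometer", "kilohertz"}]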

We defined only the meter, the second, the liter, and the kilogram. Today many more named units belong to the SI: becquerel, coulomb, farad, gray, henry, hertz, joule, katal, lumen, lux, newton, ohm, pascal, siemens, sievert, tesla, volt, watt, and weber. Here is a list of the dimensional relations (no physical meaning implied) between the derived units:

List of the dimensional relations between the derived units

List of the dimensional relations between the derived units
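
The underlying dimensional data is directly accessible; for instance, here are the base-dimension decompositions of a few of the named derived units:

    UnitDimensions[Quantity[#]] & /@ {"Newtons", "Pascals", "Teslas", "Webers"}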

Many new named units have been added since my death, often related to electrical and magnetic phenomena that were not yet known when I was alive. And although I am a serious person in general, I am often open to a joke or a pun; I just don’t like it when fun is made of units. Take Don Knuth’s Potrzebie system of units, with units such as the potrzebie, ngogn, blintz, whatmeworry, cowznofski, vreeble, hoo, and hah. Not only are their names nonsensical, but so are their values:

Potrzebies and blintz units

Or look at Max Pettersson’s proposal for units for biology. The names of the units and the prefixes might sound funny, but for me units are too serious a subject to make fun of:

Max Pettersson's proposal for units for biology

These unit names do not even rhyme with any of the proper names:

Words that rhyme with meter
Words that rhyme with mile

To reiterate, I am all in favor of having fun, even with units, but it must be clear that it is not meant seriously:

Converting humorous units of measurement

Explicitly nonscientific units, such as helens for beauty, puppies for happiness, or darwins for fame, are also fine with me:

Measuring beauty in helens

Measuring happiness in puppies

Measuring fame in darwins

I am so proud that the SI units are not just dead paper symbols, but tools that govern the modern world in an ever-increasing way. Although I am not a comics guy, I love the recent promotion of the base units to superheroes by the National Institute of Standards and Technology:

Base units to superheroes

Base units to superheroes

Note that, to honor the contributions of the five great mathematicians to the metric system, the curves in the rightmost column of the unit-representing characters are given as mathematical formulas, e.g. for Dr. Kelvin we have the following purely trigonometric parametrization:

Purely trigonometric parametrization of Dr. Kelvin

So we can plot Dr. Kelvin:

Plotting Dr. Kelvin

Having the characters in parametric form is handy: when my family has reunions, the little ones’ favorite activity is coloring SI superheroes. I just print the curves, and then the kids can go crazy with the crayons. (I got this idea a couple years ago from a coloring book by the NCSA.)

Printing randomly colored curves

And whenever a new episode comes out, all of us “measure men” (George Clooney, if you see this: hint, hint for an exciting movie set in the 1790s!) come together to watch it. As you can imagine, the last episode is our all-time favorite. Rumor has it up here that there will be a forthcoming book The Return of the Metrologists (2018 would be a perfect year) complementing the current book.

And I am glad to see that the importance of measuring, and of the underlying metric system, is honored in modern times through World Metrology Day on May 20, which is today.

In my lifetime, most of what people measured were goods: corn, potatoes, and other foods, wine, fabric, firewood, etc. So all my country really needed were length, area, volume, angles, and, of course, time units. I always knew that the importance of measuring would increase over time. But I find it quite remarkable that only 200 years after I entered the heavenly spheres, hundreds and hundreds of different physical quantities are measured. Today even the International Organization for Standardization (ISO) lists, defines, and describes which physical quantities to use. Below is an image of an interactive Demonstration (download the notebook at the bottom of this post to interact with it) showing graphically the dimensions of physical quantities for subsets of selectable dimensions. First select two or three dimensions (base units). The resulting graphics then show spheres with sizes proportional to the number of different physical quantities with these dimensions. Mouse over the spheres in the notebook to see the dimensions. For example, with “meter”, “second”, and “kilogram” checked, the diagram shows the units of physical quantities like momentum (kg¹ m¹ s⁻¹) or energy (kg¹ m² s⁻²):

Physical quantities of given dimensions

Here is an excerpt of the code that I used to make these graphics. These are all physical quantities that have dimensions L² M¹ T⁻¹. The last one is a slightly exotic electrodynamic observable:

Excerpt of code from physical quantities of given dimensions demonstration

Today, with smartphones and wearable devices, a large number of physical quantities are measured all the time by ordinary people. “Measuring rules,” as I like to say. Or, as my dear friend (since 1907) William Thomson liked to say:

… when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.

Here is a graphical visualization of the physical quantities that are measured by the most common measurement devices:

Graphical visualization of the physical quantities that are measured by the most common measurement devices

Electrical and magnetic phenomena were just starting to become popular when I was around. Electromagnetic effects, and the physical quantities that are expressed through the electric current, only became important much later:

Electrical and magnetic phenomena timeline

Electrical and magnetic phenomena timeline

I remember how excited I was when in the second half of the nineteenth century and the beginning of the twentieth century the various physical quantities of electromagnetism were discovered and their connections were understood. (And, not to be forgotten: the recent addition of memristance.) Here is a diagram showing the most important electric/magnetic physical quantities q_k that have a relation of the form q_k = q_i q_j with each other:

Diagram showing the most important electric/magnetic physical quantities q_k that have a relation of the form q_k = q_i q_j with each other

On the other hand, I was sure that temperature-related phenomena would soon be fully understood after my death. And indeed just 25 years later, Carnot proved that heat and mechanical work are equivalent. Now I also know about time dilation and length contraction due to Einstein’s theories. But mankind still does not know if a moving body is colder or warmer than a stationary body (or if they have the same temperature). I hear every week from Josiah Willard about the related topic of negative temperatures. And recently, he was so excited about a value for a maximal temperature for a given volume V expressed through fundamental constants:

Maximal temperature for a given volume V expressed through fundamental constants

For one cubic centimeter, the maximal temperature is about 5 PK:

Maximal temperature for one cubic centimeter

The rise of the constants

Long after my physical death, some of the giants of physics of the nineteenth century and early twentieth century, foremost among them James Clerk Maxwell, George Johnstone Stoney, and Max Planck (and Gilbert Lewis) were considering units for time, length, and mass that were built from unchanging properties of microscopic particles and the associated fundamental constants of physics (speed of light, gravitational constant, electron charge, Planck constant, etc.):

James Clerk Maxwell, George Johnstone Stoney, and Max Planck

Maxwell wrote in 1870:

Yet, after all, the dimensions of our Earth and its time of rotation, though, relative to our present means of comparison, very permanent, are not so by any physical necessity. The earth might contract by cooling, or it might be enlarged by a layer of meteorites falling on it, or its rate of revolution might slowly slacken, and yet it would continue to be as much a planet as before.

But a molecule, say of hydrogen, if either its mass or its time of vibration were to be altered in the least, would no longer be a molecule of hydrogen.

If, then, we wish to obtain standards of length, time, and mass which shall be absolutely permanent, we must seek them not in the dimensions, or the motion, or the mass of our planet, but in the wavelength, the period of vibration, and the absolute mass of these imperishable and unalterable and perfectly similar molecules.

When we find that here, and in the starry heavens, there are innumerable multitudes of little bodies of exactly the same mass, so many, and no more, to the grain, and vibrating in exactly the same time, so many times, and no more, in a second, and when we reflect that no power in nature can now alter in the least either the mass or the period of any one of them, we seem to have advanced along the path of natural knowledge to one of those points at which we must accept the guidance of that faith by which we understand that “that which is seen was not made of things which do appear.”

At the time when Maxwell wrote this, I was already a man’s lifetime up here, and when I read it I applauded him (although at this time I still had some skepticism toward all ideas coming from Britain). I knew that this was the path forward to immortalize the units we forged in the French Revolution.

There are many physical constants. And they are not all known to the same precision. Here are some examples:

Examples of physical constants

Converting the values of constants with uncertainties into arbitrary-precision numbers is convenient for the following computations. The connection between the uncertainty intervals and the number of digits is as follows: the arbitrary-precision number that corresponds to v ± δ is the number v with precision –log₁₀(2 δ/v). Conversely, given an arbitrary-precision number, we can recover the v ± δ form:

Converting arbitrary precision numbers to intervals
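
In Wolfram Language terms, the rule can be written down as two short helper functions; the names are mine, and the CODATA 2014 value of G with its uncertainty is used just as an example:

    (* v ± d  ->  arbitrary-precision number with precision -Log10[2 d/v] *)
    toPrecisionNumber[v_, d_] := SetPrecision[v, -Log10[2 d/v]]

    (* arbitrary-precision number  ->  {v, d} *)
    fromPrecisionNumber[x_] := {x, x 10^-Precision[x]/2}

    g = toPrecisionNumber[6.67408*^-11, 3.1*^-15];
    N[fromPrecisionNumber[g]]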

After the exactly defined constants, the Rydberg constant, with 11 known digits, stands out as a very precisely known constant. At the other end of the spectrum is G, the gravitational constant. At least once a month Henry Cavendish stops at my place with yet another idea on how to build a tabletop device to measure G. Sometimes his ideas are based on cold atoms, sometimes on superconductors, and sometimes on high-precision spheres. If he could still communicate with the living, he would write a comment to Nature every week. A little over a year ago Henry was worried that he should have done his measurements in winter as well as in summer, but he was relieved to see that no seasonal dependence of G’s value seems to exist. The preliminary proposal deadline for the NSF’s Big G Challenge was just four days ago. I think sometime next week I will take a heavenly peek at the program officer’s preselected experiments.

There are more physical constants, and they are not all equal. Some are more fundamental than others, but for reasons of length I don’t want to get into a detailed discussion about this topic now. A good start for interested readers are Lévy-Leblond’s papers (also here), as well as this paper, this paper, and the now-classic Duff–Okun–Veneziano paper. For the purpose of making units from physical constants, the distinction between the various classes of physical constants is not so relevant.

The absolute values of the constants and their relations to heaven, hell, and Earth are an interesting subject on their own. It is a hot topic of discussion for mortals (also see this paper), as well as up here. Some numerical coincidences (?) are just too puzzling:

Absolute values of the constants and their relations to heaven, hell, and Earth

Of course, using modern mathematical algorithms, such as lattice reduction, we can indulge in the numerology of the numerical part of physical constants:

Numerology of the numerical part of physical constants

For instance, how can we form 𝜋 out of fundamental constant products?

Forming pi out of fundamental constant products
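
Here is a rough sketch of how such numerology can be automated with lattice reduction; the inputs are just the numerical parts (in SI units, CODATA 2014 values) of c, h, e, and kB, together with π itself, and the first reduced basis vector gives a candidate integer relation between their logarithms:

    (* integer-relation search: the last entry of the result measures the residual of Sum[a_i Log[x_i]] *)
    logs = Log[N[{Pi, 299792458, 662607004 10^-42, 16021766208 10^-29, 138064852 10^-31}, 30]];
    lattice = Transpose[Join[IdentityMatrix[5], {Round[10^20 logs]}]];
    First[LatticeReduce[lattice]]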

Or let’s look at my favorite number, 10, the mathematical basis of the metric system:

Forming 10 out of fundamental constant products

And given a set of constants, there are many ways to form a quantity with a given unit. There are so many physical constants in use today that you have to be really interested to keep up with them. Here are some of the lesser-known constants:

Some of the lesser-known physical constants

Physical constants appear in so many equations of modern physics. Here is a selection of 100 simple physics formulas that contain the fundamental constants:

100 simple physics formulas that contain the fundamental constants

Of course, more complicated formulas also contain the physical constants. For instance, the gravitational constant appears (of course!) in the formula of the gravitational potentials of various objects, e.g. for the potential of a line segment and of a triangle:

Gravitational constant appears in formula of gravitational potentials of various objects

My friend Maurits Cornelis Escher loves these kinds of formulas. He recently showed me some variations of a few of his 3D pictures that show the equipotential surfaces of all objects in the pictures by triangulating all surfaces, then using the above formula—like his Escher solid. The graphic shows a cut version of two equipotential surfaces:

Equipotential surfaces of all objects in the pictures by triangulating all surfaces

I frequently stop by at Maurits Cornelis’, and often he has company—usually, it is Albrecht Dürer. The two love to play with shapes, surfaces, and polyhedra. They deform them, Kelvin-invert them, everse them, and more. Albrecht also likes the technique of smoothing with gravitational potentials, but he often does this with just the edges. Here is what a Dürer solid’s equipotential surfaces look like:

Dürer solid's equipotential surfaces

And here is a visualization of formulas that contain c^α h^β G^γ in the exponent space of α, β, and γ. The size of the spheres is proportional to the number of formulas containing c^α·h^β·G^γ; mousing over the balls in the attached notebook shows the actual formulas. We treat positive and negative exponents identically:

Visualization of formulas that contain c^alpha-h^beta-G^gamma in the exponant space of alpha-beta-gamma

One of my all-time favorite formulas is for the quantum-corrected gravitational force between two bodies, which contains my three favorite constants: the speed of light, the gravitational constant, and the Planck constant:

Quantum-corrected gravitational force between two bodies

Another of my favorite formulas is the one for the entropy of a black hole. It contains the Boltzmann constant in addition to c, h, and G:

Entropy of a black hole
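
As a quick plausibility check of this formula, here is the Bekenstein–Hawking entropy S = kB c³ A/(4 G ħ) evaluated for a black hole of one solar mass, using the horizon area A = 4π(2GM/c²)²:

    With[{kB = Quantity["BoltzmannConstant"], G = Quantity["GravitationalConstant"],
          c = Quantity["SpeedOfLight"], hbar = Quantity["ReducedPlanckConstant"],
          M = Quantity[1, "SolarMass"]},
      UnitConvert[kB c^3 (4 Pi (2 G M/c^2)^2)/(4 G hbar), "Joules"/"Kelvins"]]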

And, of course, the second-order correction to the speed of light in a vacuum in the presence of an electric or magnetic field due to photon-photon scattering (ignoring a polarization-dependent constant). Even in very large electric and magnetic fields, the changes in the speed of light are very small:

Second-order correction to the speed of light in a vacuum in the presence of an electric or magnetic field

In my lifetime, we did not yet understand the physical world well enough to have come up with the idea of natural units. That took until 1874, when Stoney first proposed natural units in his lecture to the British Science Association. And then, in his 1906–07 lectures, Planck made extensive use of the now so-called Planck units, which he had already introduced in his famous 1900 article in Annalen der Physik. Unfortunately, both these unit systems use the gravitational constant G prominently. It is a constant that we today cannot measure very accurately. As a result, the values of the Planck units in the SI are also known to only about four digits:

Use of Planck units
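
The SI values of some of these units can be obtained directly, assuming the Planck units are available as built-in unit names:

    UnitConvert[Quantity[1, #]] & /@ {"PlanckLength", "PlanckTime", "PlanckMass", "PlanckTemperature"}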

These units were never intended for daily use because they are either far too small or far too large compared to the typical lengths, areas, volumes, and masses that humans deal with on a daily basis. But why not base the units of daily use on such unchanging microscopic properties?

(Side note: The funny thing is that over the last 20 years Max Planck has again been doubting whether his constant h is truly fundamental. He had hoped in 1900 to derive its value from a semi-classical theory. Now he hopes to derive it from some holographic arguments. Or at least he thinks he can derive the value of h/kB from first principles. I don’t know if he will succeed, but who knows? He is a smart guy and just might be able to.)

Many exact and approximate relations between fundamental constants are known today. Some more might be discovered in the future. One of my favorites is the following identity—within a small integer factor, is the value of the Planck constant potentially related to the size of the universe?

Is the value of the Planck constant potentially related to the size of the universe?

Another one is Beck’s formula, showing a remarkable coincidence (?):

Beck's formula

Nevertheless, in my time we never thought it would be possible to express the height of a giraffe through the fundamental constants. So how amazed I was nearly ten years ago, when looking through the newly arrived arXiv preprints, to find a closed form for the height of the tallest running, breathing organism derived by Don Page. Within a factor of two he got the height of a giraffe (Brachiosaurus and Sauroposeidon don’t count because they can’t run) derived in terms of fundamental constants—I find this just amazing:

Typical height of a giraffe

I should not have been surprised, as in 1983 Press, Lightman, Peierls, and Gold expressed the maximal running speed of a human (see also Press’ earlier paper):

Maximal running speed of a human

In the same spirit, I really liked Burrows’ and Ostriker’s work on expressing the sizes of a variety of astronomical objects through fundamental constants only. For instance, for a typical galaxy mass we obtain the following expression:

Expression for a typical galaxy mass

This value is within a small factor from the mass of the Milky Way:

Mass of the Milky Way

But back to units, and fast forward another 100+ years to the second half of the twentieth century: the idea of basing units on microscopic properties of objects gained more and more ground.

Since 1967, the second has been defined through 9,192,631,770 periods of the light from the transition between the two hyperfine levels of the ground state of the cesium-133 atom, and the meter has been defined since 1983 as the distance light travels in one second when we define the speed of light as the exact quantity 299,792,458 meters per second. To be precise, this definition is to be realized at rest, at a temperature of 0 K, and at sea level, as motion, temperature, and the gravitational potential influence the oscillation period and (proper) time. Ignoring the sea-level condition can lead to significant measurement errors; the center of the Earth is about 2.5 years younger than its surface due to differences in the gravitational potential.

Now, these definitions for the units second and meter are truly equal for all people. Equal not just for people on Earth right now, but also for people in the future, and for any alien far, far away from Earth. (One day, the 9,192,631,770 periods of cesium might be replaced by a larger number of periods of another element, but that will not change its universal character.)

But if we wanted to ground all units in physical constants, which ones should we choose? There are often many, many ways to express a base unit through a set of constants. Using the constants from the table above, there are thirty (thirty!) ways to combine them to make a mass dimension:

Thirty ways to combine constants to make a mass dimension
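
One way to find such combinations is to solve for exponents. Here is a minimal sketch for the special case of c, h, and G alone (the helper function is mine); the resulting exponents 1/2, 1/2, –1/2 correspond to the Planck-mass-like combination √(h c/G):

    (* dimension exponents {length, mass, time} of a constant *)
    dimVector[u_] := Lookup[Association[Rule @@@ UnitDimensions[Quantity[u]]],
        {"LengthUnit", "MassUnit", "TimeUnit"}, 0];

    (* exponents a, b, g such that c^a h^b G^g has the dimension of a mass *)
    Solve[a dimVector["SpeedOfLight"] + b dimVector["PlanckConstant"] +
        g dimVector["GravitationalConstant"] == {0, 1, 0}, {a, b, g}]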

Because of the varying precision of the constants, the combinations are also of varying precision (and of course, of different numerical values):

Combinations are of varying precision

Now the question is: which constants should be selected to define the units of the metric system? Many aspects, from precision to practicality to overall coherence (meaning there is no need for various prefactors in equations to compensate for unit factors), must be kept in mind. We want our formulas to look like F = m a, rather than containing explicit numbers, such as in the Thanksgiving turkey cooking time formulas (assuming a spherical turkey):

Turkey cooking time formulas

Or in the PLANK formula (Max hates this name) for the calculation of indicated horsepower:

Calculation of indicated horsepower

Here in the clouds of heaven, we can’t use physical computers, so I am glad that I can use the more virtual Wolfram Open Cloud to do my calculations and mathematical experimentation. I have played for many hours with the interactive units-constants explorer below, and agree fully with the choices made by the International Bureau of Weights and Measures (BIPM), meaning the speed of light, the Planck constant, the elementary charge, the Avogadro constant, and the Boltzmann constant. I showed a preliminary version of this blog to Edgar, and he was very pleased to see this table based on his old paper:

Tables based on Edgar's paper

I want to mention that the most popular physical constant, the fine-structure constant, is not really useful for building units. Just by its special status as a unitless physical quantity, it can’t be directly connected to a unit. But it is, of course, one of the most important physical constants in our universe (and is probably only surpassed by the simple integer constant describing how many spatial dimensions our universe has). Often various dimensionless combinations can be found from a given set of physical constants because of relations between the constants, such as c² = 1/(ε0 μ0). Here are some examples:

Various dimensionless combinations found from a given set of physical constants
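
For example, the first of these relations can be checked numerically, assuming the built-in unit names "ElectricConstant" and "MagneticConstant" for ε0 and μ0:

    (* 1/(ε0 μ0) and c^2 should agree *)
    UnitConvert[1/(Quantity["ElectricConstant"] Quantity["MagneticConstant"]), "Meters"^2/"Seconds"^2]
    UnitConvert[Quantity["SpeedOfLight"]^2, "Meters"^2/"Seconds"^2]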

But there is probably no other constant that Paul Adrien Maurice Dirac and I have discussed more over the last 32 years than the fine-structure constant α = e²/(4 𝜋 ε0 ħ c). Although up here we meet with the Lord regularly in a friendly and productive atmosphere, he still refuses to tell us a closed form of α. And he will not even tell us if he selected the same value for all times and all places. For the related topic of the values of the constants chosen, he also refuses to discuss fine tuning and alternative values. He says that he chose a beautiful expression, and one day we will find out. He gave some bounds, but they were not much sharper than the ones we know from the Earth’s existence. So, like living mortals, for now we must just guess mathematical formulas:

Conjectured exact forms of the fine-structure constant

Or guess combinations of constants:

Guessing combinations of constants

And here is one of my favorite coincidences:

Favorite coincidence

And a few more:

A few more coincidences

The rise in importance and usage of the physical constants is nicely reflected in the scientific literature. Here is a plot of how often (in publications per year) the most common constants appear in scientific publications from the publishing company Springer. The logarithmic vertical axis shows the exponential increase in how often physical constants are mentioned:

How often the most common constants appear in scientific publications from the publishing company Springer

While the fundamental constants are everywhere in physics and chemistry, one does not see them as much as they deserve in newspapers, movies, or advertisements. I was very pleased to see the recent introduction of the Measures for Measure column in Nature.

Fundamental constants in Measures for Measure column

To give the physical constants the presence they deserve, I hope that before (or at least not long after) the redefinition we will see some interesting video games released that allow players to change the values of at least c, G, and h to see how the world around us would change if the constants had different values. It makes me want to play such a video game right now. With large values of h, not only could one build a world with macroscopic Schrödinger cats, but interpersonal correlations would also become much stronger. This could make the constants known to children at a young age. Such a video game would be a kind of twenty-first-century Mr. Tompkins adventure:

Mr. Tompkins

It will be interesting to see how quickly and efficiently the human brain will adapt to a possible life in a different universe. Initial research seems to be pretty encouraging. But maybe our world and our heaven are really especially fine-tuned.

The current SI and the issue with the kilogram

The modern system of units, the current SI, has other base units in addition to the second, the meter, and the kilogram. The ampere is defined through the force between two infinitely long wires, the kelvin through the triple point of water, the mole through the kilogram and carbon-12, and the candela through blackbody radiation. If you have never read the SI brochure, I strongly encourage you to do so.

Two infinitely long wires are surely macroscopic and do not fulfill Maxwell’s demand (but they are at least an idealized system), and de facto this definition fixes the magnetic constant. And the triple point of water needs a macroscopic amount of water. This is not perfect, but it’s OK. Carbon-12 atoms are already microscopic objects. Blackbody radiation is again an ensemble of microscopic objects, but a very reproducible one. So some of the current SI fulfills in some sense Maxwell’s goals.

But most of my insomnia over the last 50 years has been caused by the kilogram. It caused me real headaches, and sometimes even nightmares, that we could not put it on the same level as the second and the meter.

In the year of my physical death (1799), the first prototype of a kilogram, a little platinum cylinder, was made. About 39.7 mm in height and 39.4 mm in diameter, this was for 75 years “the” kilogram. It was made from the forged platinum sponge made by Janety. Miller gives a lot of the details of this kilogram. It is today in the Archives nationales. In 1879, Johnson Matthey (in Britain—the country I fought with my ships!), using new melting techniques, made the material for three new kilogram prototypes. Because of a slightly higher density, these kilograms were slightly smaller in size, at 39.14 mm in height. One of these three cylinders, called KIII, became the current international prototype kilogram K. Here is the last sentence from the preface of the mass determination of the international prototype kilogram from 1885, introducing K:

The cylinder was called KIII and became the current international prototype kilogram K

A few kilograms were selected and carefully compared to our original kilogram; for the detailed measurements, see this book. All three kilograms had a mass less than 1 mg different from the original kilogram. But one stood out: it had a mass difference of less than 0.01 mg compared to the original kilogram. For a detailed history of the making of K, see Quinn. And so, still today, per definition, a kilogram is the mass of a small metal cylinder sitting in a safe at the International Bureau of Weights and Measures near Paris. (It’s technically actually not on French soil, but this is another issue.) In the safe, which needs three keys to be opened, under three glass domes, is a small platinum-iridium cylinder that defines what a kilogram is. For the reader’s geographical orientation, here is a map of Paris with the current kilogram prototype (in the southwest), our original one (in the northeast), both with a yellow border, and some other Paris visitor essentials:

Map of Paris with current kilogram prototype (in the southwest) and our original one (in the northeast)

In addition to being an artifact, the kilogram has always been very difficult to get access to (which made me unhappy). Once a year, a small group of people checks if it is still there, and every few years its weight (mass) is measured. Of course, the result is, per definition and the agreement made at the first General Conference on Weights and Measures in 1889, exactly one kilogram.

Over the years the original kilogram prototype gained dozens of siblings in the form of other countries’ national prototypes, all of the same size, material, and weight (up to a few micrograms, which are carefully recorded). (I wish the internet had been invented earlier, so that I had a communication path to tell what happened with the stolen Argentine prototype 45; since then, it has been melted down.) At least when they were made, they had the same weight. Same material, same size, similarly stored—one would expect that all these cylinders would keep their weight. But this is not what history showed. Rather than all staying at the same weight, repeated measurements showed that virtually all other prototypes got heavier and heavier over the years. Or, more probably, the international prototype has gotten lighter.

From my place here in heaven I have watched many of these comparisons with both great interest and concern. Comparing their weights (a.k.a. masses) is a big ordeal. First you must get the national prototypes to Paris. I have silently listened in on long discussions with TSA members (and other countries’ equivalents) when a metrologist shows up with a kilogram of platinum, worth north of $50k in materials (add another $20k for the making), in its cute, golden, shiny, special travel container that should only be opened in a clean room with gloves and mouth guard, and never ever touched by a human hand, and explains all of this to the TSA. An official letter is of great help here. The instances that I have watched from up here were even funnier than the scene in the movie 1001 Grams.

Then comes a complicated cleaning procedure with hot water, alcohol, and UV light. The kilograms all lose weight in this process. And they are all carefully compared with each other. And the result is that with very high probability, “the” kilogram, our beloved international prototype kilogram (IPK), loses weight. This fact steals my sleep.

Here are the results from the third periodic verification (1988 to 1992). The graphic shows the weight difference compared to the international prototype:

Weight difference between countries' national kilograms versus the international prototype

For some newer measurements from the last two years, see this paper.

What I mean by “the” kilogram losing weight is the following. Per definition (independent of its “real objective” mass), the international prototype has a mass of exactly 1 kg. Compared with this mass, most other kilogram prototypes of the world seem to gain weight. As the other prototypes were made, using different techniques over more than 100 years, very likely the real issue is that the international prototype is losing weight. (And no, it is not because of Ceaușescu’s greed and theft of platinum that Romania’s prototype is so much lighter; in 1889 the Romanian prototype was already 953 μg lighter than the international prototype kilogram.)

Josiah Willard Gibbs, who has been my friend up here for more than 110 years, always mentions that his home country is still using the pound rather than the kilogram. His vote in this year’s election would clearly go to Bernie. But at least the pound is an exact fraction of the kilogram, so anything that will happen to the kilogram will affect the pound the same way:

The pound is an exact fraction of the kilogram
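
Indeed, the avoirdupois pound is defined as exactly 0.45359237 kg, and the conversion comes out as an exact fraction:

    UnitConvert[Quantity[1, "Pounds"], "Kilograms"]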

The new SI

But soon all my dreams and centuries-long hopes will come true and I can find sleep again. In 2018, two years from now, the greatest change in the history of units and measures since my work with my friend Laplace and the others will happen.

All units will be based on things that are accessible to everybody everywhere (assuming access to some modern physical instruments and devices).

The so-called new SI will reduce all of the seven base units to seven fundamental constants of physics or basic properties of microscopic objects. Down on Earth, they started calling them “reference constants.”

Some people also call the new SI the quantum SI because of its dependence on the Planck constant h and the elementary charge e. In addition to the importance of the Planck constant h in quantum mechanics, the following two quantum effects connect h and e: the Josephson effect, with its associated Josephson constant KJ = 2 e / h, and the quantum Hall effect, with the von Klitzing constant RK = h / e². The quantum metrological triangle (connecting frequency and electric current through a single-electron tunneling device, frequency and voltage through the Josephson effect, and voltage and electric current through the quantum Hall effect) will be a beautiful realization of the electric quantities. (One day in the future, as Penin has pointed out, we will have to worry about second-order QED effects, but this will be many years from now.)
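
Both constants follow directly from e and h; a quick evaluation with the currently measured values gives the familiar numbers:

    (* Josephson constant KJ = 2e/h and von Klitzing constant RK = h/e^2 *)
    UnitConvert[2 Quantity["ElementaryCharge"]/Quantity["PlanckConstant"], "Gigahertz"/"Volts"]
    UnitConvert[Quantity["PlanckConstant"]/Quantity["ElementaryCharge"]^2, "Ohms"]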

The BIPM already has a new logo for the future International System of Units:

New logo for the future International System of Units

Concretely, the proposal is:

    1. The second will continue to be defined through cesium atom microwave radiation.

    2. The meter will continue to be defined through an exactly defined speed of light.

    3. The kilogram will be defined through an exactly defined value of the Planck constant.

    4. The ampere will be defined through an exactly defined value of the elementary charge.

    5. The kelvin will be defined through an exactly defined value of the Boltzmann constant.

    6. The mole will be defined through an exact (counting) value.

    7. The candela will be defined through an exact value of the candela steradian-to-watt ratio at a fixed frequency (as is already the case).

I highly recommend reading the draft of the new SI brochure. Laplace and I have discussed it a lot here in heaven, and (modulo some small issues) we love it. Here is a quick word cloud summary of the new SI brochure:

Word cloud summary of new SI brochure

Before I forget, and before continuing the kilogram discussion, some comments on the other units.

The second

I still remember when we discussed introducing metric time in the 1790s: a 10-hour day, with 100 minutes per hour and 100 seconds per minute; we were so excited by this prospect. In hindsight, this wasn’t such a good idea. The habits of people are sometimes too hard to change. And I am so glad I could get Albert Einstein interested in metrology over the past 50 years. We have had so many discussions about the meaning of time, about the fact that the second measures local time, and about the difference between measurable local time and coordinate time. But this is a discussion for another day. The uncertainty of a second is today less than 10⁻¹⁶. Maybe one day in the future, cesium will be replaced by aluminum or other elements to achieve 100 to 1,000 times smaller uncertainties. But this does not alter the spirit of the new SI; it’s just a small technical change. (For a detailed history of the second, see this article.)
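
For the curious, a “decimal second” in that 10-hour/100-minute/100-second scheme would have been a bit shorter than today’s second:

    UnitConvert[Quantity[1, "Days"]/(10 100 100), "Seconds"]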

Clearly, today’s definition of the second is much better than one that depends on the Earth. At a time when stock market prices are compared at the microsecond level, the change in the length of a day due to earthquakes, polar melting, continental drift, and other phenomena over a century is quite large:

Change in the length of a day over time

The mole

I have heard some chemists complain that their beloved unit, the mole, introduced into the SI only in 1971, will become trivialized. In the currently used SI, the mole relates to an actual chemical, carbon-12. In the new SI, it will be just a count of objects: a true chemical equivalent of a baker’s dozen, the chemist’s dozen. Based on the Avogadro constant, the mole is crucial in connecting the micro world with the macro world. A more down-to-Earth definition of the mole matters for quantitative values such as pH values. The second is the SI base unit of time; the mole is the SI base unit of the physical quantity “amount of substance”:

Mole is the SI base unit of the physical quantity

But not everybody likes the term “amount of substance.” Even this year (2016), alternative names are being proposed, e.g. stoichiometric amount. Over the last decades, a variety of names have been proposed to replace “amount of substance.” Here are some examples:

Alternative names for "amount of substance"

But the SI system only defines the unit “mole.” The naming of the physical quantity that is measured in moles is up to the International Union of Pure and Applied Chemistry.

For recent discussions from this year, see the article by Leonard, “Why Is ‘Amount of Substance’ So Poorly Understood? The Mysterious Avogadro Constant Is the Culprit!”, and the article by Giunta, “What’s in a Name? Amount of Substance, Chemical Amount, and Stoichiometric Amount.”

Wouldn’t it be nice if we could have made a “perfect cube” (number) that would represent the Avogadro number? Such a representation would be easy to conceptualize. This was suggested a few years back, and at the time was compatible with the value of the Avogadro constant, and would have been a cube of edge length 84,446,888 items. I asked Srinivasa Ramanujan, while playing a heavenly round of cricket with him and Godfrey Harold Hardy, his longtime friend, what’s special about 84,446,888, but he hasn’t come up with anything deep yet. He said that 84,446,888=2^3*17*620933, and that 620,933 appears starting at position 1,031,622 in the decimal digits of 𝜋, but I can’t see any metrological relevance in this. With the latest value of the Avogadro constant, no third power of an integer number falls into the possible values, so no wonder there is nothing special.

Here is the latest CODATA (Committee on Data for Science and Technology) value from the NIST Reference on Constants, Units, and Uncertainty:

Latest CODATA value from NIST Reference on Constants, Units, and Uncertainty

The candidate number 84,446,885 cubed is too small, and adding one gives too large a number:

Candidate number 84,446,885
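
Here is the check spelled out, treating the CODATA 2014 value 6.022140857(74) × 10²³ as the allowed band:

    (* both neighboring cubes fall outside the quoted uncertainty band *)
    {nAlow, nAhigh} = {6.022140783*^23, 6.022140931*^23};
    {84446885^3 < nAlow, 84446886^3 > nAhigh}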

Interestingly, if we were to settle for a body-centered lattice, with one additional atom per unit cell, then we could still maintain a cube interpretation:

Maintaining a cube interpretation with a body-centered lattice

A face-centered lattice would not work, either:

Using a face-centered lattice

But a diamond (silicon) lattice would work:

Diamond (silicon) lattice

To summarize:

Lattice summary

Here is a little trivia:

Sometime amid the heights of the Cold War, the accepted value of the Avogadro constant suddenly changed in the third digit! This was quite a change, considering that there is currently a lingering controversy regarding the discrepancy in the sixth digit. Can you explain the sudden decrease in Avogadro constant during the Cold War?

Do you know the answer? If not, see here or here.

But I am digressing from my main thread of thought. As I am more interested in the mechanical units anyway, I will let my old friend Antoine Lavoisier judge the new mole definition, as he was the chemist on our team.

The kelvin

Josiah Willard Gibbs even convinced me that temperature should be defined mechanically. I am still trying to understand John von Neumann’s opinion on this subject, but because I never fully understand his evening lectures on type II and type III factors, I don’t have a firm opinion on the kelvin. Different temperatures correspond to inequivalent representations of the algebras. As I am currently still working my way through Ruetsche’s book, I haven’t made my mind up on how to best define the kelvin from an algebraic quantum field theory point of view. I had asked John for his opinion of a first-principle evaluation of h / k based on KMS states and Tomita–Takesaki theory, and even he wasn’t sure about it. He told me some things about thermal time and diamond temperature that I didn’t fully understand.

And then there is the possibility of deriving the value of the Boltzmann constant. Even 40 years after the Koppe–Huber paper, it is not clear if it is possible. It is a subject I am still pondering, and I am taking various options into account. As mentioned earlier, the meaning of temperature and how to define its units are not fully clear to me. There is no question that the new definition of the kelvin will be a big step forward, but I don’t know if it will be the end of the story.

The ampere

This is one of the most direct, intuitive, and beautiful definitions in the new SI: the current is just the number of electrons that flow per second. Defining the value of the ampere through the number of elementary charges moved around is just a stroke of genius. When it was first suggested, Robert Andrews Millikan up here was so happy he invited many of us to an afternoon gathering in his yard. In practice (and in theoretical calculations), we have to exercise a bit more care, as we mainly measure the electric current of electrons in crystalline objects, and electrons are no longer “bare” electrons, but quasiparticles. But we’ve known since 1959, thanks to Walter Kohn, that we shouldn’t worry too much about this, and expect the charge of the electron in a crystal to be the same as the charge of a bare electron. As an elementary charge is a pretty small charge, the issue of measuring fractional charges as currents is not a practical one for now. I personally feel that Robert’s contributions to determining the values of the physical constants at the beginning of the twentieth century are not pointed out enough (Robert Andrews really knew what he was doing).

The candela

No, you will not get me started on my opinion of the candela. Does it deserve to be a base unit? The whole story of human-centered physiological units is a complicated one. Obviously they are enormously useful. We all see and hear every day, even every second. But what if the human race continues to develop (in Darwin’s sense)? How will that fit together with our “for all time” mantra? I have my thoughts on this, but laying them out here and now would sidetrack me from my main discussion topic for today.

Why seven base units?

I also want to mention that originally I was very concerned about the introduction of some of the additional units that are in use today. In endless discussions with my chess partner Carl Friedrich Gauss here in heaven, he convinced me that we can reduce all measurements of electric quantities to measurements of mechanical properties, and I became pretty fluent in his CGS system, which originally I did not like at all. But as a human-created unit system, it should be as useful as possible, and if seven units do the job best, it should be seven. In principle one could even eliminate a mass unit and express a mass through time and length. In addition to just being impractical, I strongly believe this is conceptually not the right approach. I recently discussed this with Carl Friedrich. He said he had the idea of just using time and length in the late 1820s, but abandoned such an approach. While alive, Carl Friedrich never had the opportunity to discuss the notion of mass as a synthetic a priori with Immanuel, but over the last century the two (Carl Friedrich and Immanuel) have agreed on mass as an a priori (at least in this universe).

Our motto for the original metric system was, “For all time, for all people.” The current SI already realizes “for all people,” and by grounding the new SI in the fundamental constants of physics, the first promise, “for all time,” will finally become true. You cannot imagine what this means to me. If they change at all, fundamental constants seem to change at rates of at most about 10⁻¹⁸ per year. This is many orders of magnitude away from the currently realized precisions for most units.

Granted, some things will get a bit more cumbersome numerically in the new SI. If we take the current CODATA values as exact values, then, for instance, the von Klitzing constant h/e² will be a big fraction:

von Klitzing constant with current CODATA values and exact values as a big fraction

The integer part of the last result is, of course, 25,812 Ω. Now, is this a periodic decimal fraction or a terminating fraction? The prime factorization of the denominator tells us that it is periodic:

Prime factorization of the denominator tells us that it is periodic
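
The same computation can be redone by hand, treating the CODATA 2014 values h = 6.626070040 × 10⁻³⁴ J·s and e = 1.6021766208 × 10⁻¹⁹ C as exact rational numbers:

    rK = (662607004 10^-42)/(16021766208 10^-29)^2;
    {N[rK], FactorInteger[Denominator[rK]]}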

Progress is good, but as happens so often, it comes at a price. While the new constant-based definitions of the SI units are beautiful, they are a bit harder to understand, and physics and chemistry teachers will have to come up with some innovative ways to explain the new definitions to pupils. (For recent first attempts, see this paper and this paper.)

And in how many textbooks have I seen that the value of the magnetic constant (permeability of the vacuum) μ0 is 4𝜋 × 10⁻⁷ N/A²? In the new SI, the magnetic and the electric constants will become measured quantities with an uncertainty. Concretely, here is the current exact value:

Current exact value

With the Planck constant h and the elementary charge e defined exactly, the value of μ0 will incur the uncertainty of the fine-structure constant α. Fortunately, the dimensionless fine-structure constant α is one of the best-known constants:

Dimensionless fine-structure constant alpha
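
Concretely, with exact h and e the magnetic constant becomes μ0 = 2 α h/(e² c) and simply inherits the relative uncertainty of α; here is a sketch using the CODATA 2014 value of α:

    alpha = 0.0072973525664;
    UnitConvert[2 alpha Quantity["PlanckConstant"]/
        (Quantity["ElementaryCharge"]^2 Quantity["SpeedOfLight"]), "Newtons"/"Amperes"^2]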

But so what? Textbook publishers will not mind having a reason to print new editions of all their books. They will like it—a reason to sell more new books.

With μ0 a measured quantity in the future, I predict that we will see many more uses of the current underdog among the fundamental constants, the impedance of the vacuum Z:

Impedance of the vacuum Z
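
Its value follows immediately, for example from Z = μ0 c:

    UnitConvert[Quantity["MagneticConstant"] Quantity["SpeedOfLight"], "Ohms"]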

I applaud all physicists and metrologists for the hard work they’ve carried out in continuation of my committee’s work over the last 225 years, which culminated in the new, physical-constant-based definitions of the units. So do my fellow original committee members. These definitions are beautiful and truly forever.

(I know it is a bit indiscreet to reveal this, but Joseph Louis Lagrange told me privately that he regrets a bit that we did not introduce base and derived units as such in the 1790s. Now, with the Planck constant being so important for the new SI, he thinks we should have had a named base unit for the action (the time integral over his Lagrangian), and then made mass a derived quantity. While this would be the high road of classical mechanics, he does understand that a base unit for the action would not have become popular with farmers and peasants as a daily unit needed for masses.)

I don’t have the time today to go into any detailed discussion of the quarterly garden fests that Percy Williams Bridgman holds. As my schedule allows, I try to participate in every single one of them. It is so intellectually stimulating to listen to the general discussions about the pros and cons of alternative unit systems. As you can imagine, Julius Wallot, Jan de Boer, Edward Guggenheim, William Stroud, Giovanni Giorgi, Otto Hölder, Rudolf Fleischmann, Ulrich Stille, Hassler Whitney, and Chester Page are, not unexpectedly, most outspoken at these parties. The discussions about the coherence and completeness of unit systems, and about what a physical quantity is, go on and on. At the last event, the discussion of whether probability is or is not a physical quantity went on for six hours, with no decision at the end. I suggested inviting Richard von Mises and Hans Reichenbach the next time; they might have something to contribute. At the parties, Otto always complains that mathematicians no longer care as much about units and unit systems as they did in the past, and he is so happy to see at least theoretical physicists pick up the topic from time to time, as in the recent vector-based differentiation of physical quantities or the recent paper on the general structure of unit systems. And when he saw in an article from last year’s Dagstuhl proceedings that modern type theory had met units and physical dimensions, he was the most excited he had been in decades.

Interestingly, basically the same discussions came up three years ago (and since then regularly) in the monthly mountain walks that Claude Shannon organizes. Leo Szilard argues that the “bit” has to become a base unit of the SI in the future. In his opinion, information as a physical quantity has been grossly underrated.

Once again: the new SI will be just great! There are a few more details that I would like to see changed. One is the current status of the radian and the steradian, which SP 811 now defines as derived units, saying, “The radian and steradian are special names for the number one that may be used to convey information about the quantity concerned.” But I see with satisfaction that the experts have recently been discussing this topic in quite some detail.

To celebrate the upcoming new SI, we held a crowd-based fundraiser here in heaven. We raised enough funds to actually hire the master himself, Michelangelo. He will be making a sculpture. Some early sketches shown to the committee (I am fortunate to have the honorary chairmanship) are intriguing. I am sure it will be an eternal piece rivaling the David. One day every human will have the chance to see it (may it be a long time until then, depending on your current age and your smoking habits). In addition to the constants and the units on their own, he plans to also work Planck himself, Boltzmann, and Avogadro into the sculpture, as these are the only three constants named after a person. Max was immediately available to model, but we are still having issues getting permission for Boltzmann to leave hell for a while to be a model. (Millikan and Fletcher were, understandably, a bit disappointed.) Ironically, it was Paul Adrien Maurice Dirac who came up with a great idea on how to convince Lucifer to get Boltzmann a Sabbath-ical (ironically, because Paul himself is not so keen on the new SI, given the possible time dependence of the constants themselves over billions of years). But anyway, Paul’s clever idea was to point out that three fundamental constants, the Planck constant (6.62… × 10⁻³⁴ J · s), the Avogadro constant (6.02… × 10²³ / mol), and the gravitational constant (6.6… × 10⁻¹¹ m³ / (kg · s²)), all start with the digit 6. And forming the number of the beast, 666, through three fundamental constants really made an impression on Lucifer, and I expect him to approve Ludwig’s temporary leave.

As an ex-mariner with an affinity for the oceans, I also pointed out to Lucifer that the mean ocean depth is exactly 66% of his height (2,443 m, according to a detailed re-analysis of Dante’s Divine Comedy). He liked this cute fact so much that he owes me a favor.

Mean depth of the oceans

So far, Lucifer insists on having the combination G (me/(h k))^(1/2) on the sculpture. For obvious reasons:

Lucifer's favorite combination

We will see how this discussion turns out. As there is really nothing wrong with this combination, even if it is not physically meaningful, we might agree to his demands.

All of the new SI 2018 committee up here has also already agreed on the music: we will play Wojciech Kilar’s Sinfonia de motu, which uniquely represents the physical constants as a musical composition using only the notes c, g, e, h (b in the English-speaking world), and a (where a represents the cesium atom). And we could convince Rainer Maria Rilke to write a poem for the event. Needless to say, Wojciech, who has now been with us for more than two years, agreed, and even offered to compose an exact version.

Down on Earth, the arrival of the constants-based units will surely also be celebrated in many ways and many places. I am looking forward especially to the documentary The State of the Unit, which will be about the history of the kilogram and its redefinition through the Planck constant.

The path to the redefinition of the kilogram

As I already touched on, the most central point of the new SI will be the new definition of the kilogram. After all, the kilogram is the one artifact still present in the current SI that should be eliminated. In addition to the kilogram itself, many more derived units depend on it, say, the volt: 1 volt = 1 kilogram meter2/(ampere second3). Redefining the kilogram will make many (at least the theoretically inclined) electricians happy. Electricians have been using their exact conventional values for 25 years.

Exact conventional values
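As a quick check of that volt decomposition, here is a minimal sketch in the Wolfram Language (nothing beyond the built-in unit framework is assumed):

    UnitConvert[Quantity[1, "Volts"], "SIBase"]
    (* 1 kg m^2/(s^3 A) *)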

The value resulting from the conventional values for the von Klitzing constant and the Josephson constant is very near to the latest CODATA value of the Planck constant:

Value resulting from the conventional values for the von Klitzing constant and the Josephson constant
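For readers who want to reproduce the comparison, here is a hedged sketch of mine using the exact 1990 conventional values RK-90 = 25812.807 Ω and KJ-90 = 483597.9 GHz/V together with the relation h = 4/(KJ2 RK), which follows from KJ = 2e/h and RK = h/e2:

    rk90 = Quantity[25812.807, "Ohms"];              (* conventional von Klitzing constant *)
    kj90 = Quantity[483597.9, "Gigahertz"/"Volts"];  (* conventional Josephson constant *)
    UnitConvert[4/(kj90^2 rk90), "Joules" "Seconds"]
    (* ≈ 6.62607*10^-34 J s, indeed very close to the CODATA value *)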

A side note on the physical quantity that the kilogram represents: The kilogram is the SI base unit for the physical quantity mass. Mass is most relevant for mechanics. Through Newton’s second law, mass is intimately related to force. Assume we have understood length and time (and so also acceleration). What is next in line, force or mass? William Francis Magie wrote in 1912:

It would be very improper to dogmatize, and I shall accordingly have to crave your pardon for a frequent expression of my own opinion, believing it less objectionable to be egotistic than to be dogmatic…. The first question which I shall consider is that raised by the advocates of the dynamical definition of force, as to the order in which the concepts of force and mass come in thought when one is constructing the science of mechanics, or in other words, whether force or mass is the primary concept…. He [Newton] further supplies the measurement of mass as a fundamental quantity which is needed to establish the dynamical measure of force…. I cannot find that Lagrange gives any definition of mass…. To get the measure of mass we must start with the intuitional knowledge of force, and use it in the experiments by which we first define and then measure mass…. Now owing to the permanency of masses of matter it is convenient to construct our system of units with a mass as one of the fundamental units.

And Henri Poincaré in his Science and Method says, “Knowing force, it is easy to define mass; this time the definition should be borrowed from dynamics; there is no way of doing otherwise, since the end to be attained is to give understanding of the distinction between mass and weight. Here again, the definition should be led up to by experiments.”

While I always had an intuitive feeling for the meaning of mass in mechanics, up until the middle of the twentieth century, I never was able to put it into a crystal-clear statement. Only over the last decades, with the help of Valentine Bargmann and Jean-Marie Souriau did I fully understand the role of mass in mechanics: mass is an element in the second cohomology group of the Lie algebra of the Galilei group.

Mass as a physical quantity manifests itself in different domains of physics. In classical mechanics it is related to dynamics, in general relativity to the curvature of space, and in quantum field theory mass occurs as one of the Casimir operators of the Poincaré group.

In our weekly “Philosophy of Physics” seminar, this year led by Immanuel himself, Hans Reichenbach, and Carl Friedrich von Weizsäcker (Pascual Jordan suggested this Dreimännerführung of the seminars), we discuss the nature of mass in five seminars. The topics for this year’s series are mass superselection rules in nonrelativistic and relativistic theories, the concept and uses of negative mass, mass-time uncertainty relations, non-Higgs mechanisms for mass generation, and mass scaling in biology and sports. I need at least three days of preparation for each seminar, as the recommended reading list is more than nine pages—and this year they emphasize the condensed matter appearance of these phenomena a lot! I am really looking forward to this year’s mass seminars; I am sure that I will learn a lot about the nature of mass. I hope Ehrenfest, Pauli, and Landau don’t constantly interrupt the speakers, as they did last year (the talk on mass in general relativity was particularly bad). In the last seminar of the series, I have to give my talk. In addition to metabolic scaling laws, my favorite example is the following:

Shaking frequency of wet animal

I also intend to speak about the recently found predator-prey power laws.

For sports, I already have a good example inspired by Texier et al.: the relation between the mass of a sports ball and its maximal speed. The following diagram lets me conjecture speedmax~ln(mass). In the downloadable notebook, mouse over to see the sport, the mass of the ball, and the top speeds:

Mass of sports ball and its maximal speed

For the negative mass seminar, we had some interesting homework: visualize the trajectories of a classical point particle with complex mass in a double-well potential. As I had seen some of Bender’s papers on complex energy trajectories, the trajectories I got for complex masses did not surprise me:

Trajectories for complex masses
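If you would like to repeat the homework yourself, here is a minimal sketch of mine (with an arbitrarily chosen complex mass and the double-well potential V(x) = x4/4 - x2/2, so the equation of motion is m x'' = x - x3):

    m = 1 + I/2;  (* a hypothetical complex mass *)
    sol = First[NDSolve[{m x''[t] == x[t] - x[t]^3, x[0] == 1/5, x'[0] == 0}, x, {t, 0, 40}]];
    ParametricPlot[{Re[x[t]], Im[x[t]]} /. sol, {t, 0, 40}, AxesLabel -> {"Re x", "Im x"}]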

End side note.

The complete new definition reads thus: The kilogram, kg, is the unit of mass; its magnitude is set by fixing the numerical value of the Planck constant to be equal to exactly 6.62606X × 10–34 when it is expressed in the unit s–1 · m2 · kg, which is equal to J · s. Here X stands for some digits soon to be explicitly stated that will represent the latest experimental values.

And the kilogram cylinder can finally retire as the world’s most precious artifact. I expect soon after this event the international kilogram prototype will finally be displayed in the Louvre. As the Louvre had been declared “a place for bringing together monuments of all the sciences and arts” in May 1791 and opened in 1793, all of us on the committee agreed that one day, when the original kilogram was to be replaced with something else, it would end up in the Louvre. Ruling the kingdom of mass for more than a century, IPK deserves its eternal place as a true monument of the sciences. I will make a bet—in a few years the retired kilogram, under its three glass domes, will become one of the Louvre’s most popular objects. And the queue that physicists, chemists, mathematicians, engineers, and metrologists will form to see it will, in a few years, be longer than the queue for the Mona Lisa. I would also make a bet that the beautiful miniature kilogram replicas will within a few years become the best-selling item in the Louvre’s museum store:

Miniature kilogram replicas

At the same time, speaking as a metrologist, I think the international kilogram prototype should perhaps stay where it is for another 50 years, so that it can be measured against a post-2018 kilogram made from an exact value of the Planck constant. Then we would finally know for sure if the international kilogram prototype is/was really losing weight.

Let me quickly recapitulate the steps toward the new “electronic” kilogram.

Intuitively, one could have thought to define the kilogram through the Avogadro constant as a certain number of atoms of, say, 12C. But because of binding energies and surface effects, to realize the mass of one kilogram with a pile of carbon (e.g. diamond, graphene) made up from n = round(1 kg / m(12C)) atoms, all n carbon-12 atoms would have to be well separated. Otherwise we would have a mass defect (remember Albert’s famous E = m c2 formula), and the relative mass difference between one kilogram of compact carbon and the same number of individual, well-separated atoms is on the order of 10–10. Using the carbon-carbon bond energy, here is an estimation of the mass difference:

Estimation of the mass difference using the carbon-carbon bond energy
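To give an idea of how such an estimate goes, here is a rough back-of-the-envelope version of mine (it assumes roughly two C-C bonds per atom in the bulk and a bond energy of about 350 kJ/mol, so it is only meant to reproduce the order of magnitude):

    bondEnergy = Quantity[350., "Kilojoules"/"Moles"];   (* approximate C-C bond energy *)
    molesOfAtoms = Quantity[1000./12, "Moles"];          (* moles of 12C atoms in 1 kg *)
    bindingEnergy = 2 molesOfAtoms bondEnergy;           (* ~2 bonds per atom in the bulk *)
    UnitConvert[bindingEnergy/Quantity[1, "SpeedOfLight"]^2, "Kilograms"]
    (* ≈ 6.5*10^-10 kg, i.e. a relative effect of order 10^-10 *)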

A mass difference of this size can, for a 1 kg weight, be detected without problems with a modern mass comparator.

To give a sense of scale, this would be equivalent to the (Einsteinian) relativistic mass conversion of the energy expenditure of fencing for most of a day:

Energy expenditure of fencing for most of a day

This does not mean one could not define a kilogram through the mass of an atom or a fraction of it. Given the mass of a carbon atom m (12C), the atomic mass constant u = m (12C) / 12 follows, and using u we can easily connect to the Planck constant:

Connecting to the Planck constant

I read with great interest the recent comparison of using different sets of constants for the kilogram definition. Of course, if the mass of a 12C atom were the defined value, then the Planck constant would become a measured, meaning nonexact, value. For me, having an exact value for the Planck constant is aesthetically preferable.

I have been so excited over the last decade following the steps toward the redefinition of the kilogram. For more than 20 years now, there has been a light visible at the end of the tunnel that would eliminate the one kilogram from its throne.

And when I read 11 years ago the article by Ian Mills, Peter Mohr, Terry Quinn, Barry Taylor, and Edwin Williams entitled “Redefinition of the Kilogram: A Decision Whose Time Has Come” in Metrologia (my second-favorite, late-morning Tuesday monthly read, after the daily New Arrivals, a joint publication of Hells’ Press, the Heaven Publishing Group, Jannah Media, and Deva University Press), I knew that soon my dreams would come true. The moment I read the Appendix A.1 Definitions that fix the value of the Planck constant h, I knew that was the way to go. While the idea had been floating around for much longer, it now became a real program to be implemented within a decade (give or take a few years).

James Clerk Maxwell wrote in his 1873 A Treatise on Electricity and Magnetism:

In framing a universal system of units we may either deduce the unit of mass in this way from those of length and time already defined, and this we can do to a rough approximation in the present state of science; or, if we expect soon to be able to determine the mass of a single molecule of a standard substance, we may wait for this determination before fixing a universal standard of mass.

Until around 2005, James Clerk thought that mass should be defined through the mass of an atom, but he came around over the last decade and now favors the definition through Planck’s constant.

In a discussion with Albert Einstein and Max Planck (I believe this was in the early seventies) in a Vienna-style coffee house (Max loves the Sachertorte and was so happy when Franz and Eduard Sacher opened their now-famous HHS (“Heavenly Hotel Sacher”)), Albert suggested using his two famous equations, E = m c2 and E = h f, to solve for m to get m = h f / c2. So, if we define h as was done with c, then we know m because we can measure frequencies pretty well. (Compton was arguing that this is just his equation rewritten, and Niels Bohr was remarking that we cannot really trust E = m c2 because of its relatively weak experimental verification, but I think he was just mocking Einstein, retaliating for some of the Solvay Conference Gedankenexperiment discussions. And of course, Bohr could not resist bringing up Δm Δt ~ h / c2 as a reason why we cannot define the second and the kilogram independently, as one implies an error in the other for any finite mass measurement time. But Léon Rosenfeld convinced Bohr that this is really quite remote, as for a day measurement time this limits the mass measurement precision to about 10–52 kg for a kilogram mass m.)

An explicit frequency equivalent f = m c2 / h is not practical for a mass of a kilogram as it would mean f ~ 1.35 × 1050 Hz, which is far, far too large for any experiment, dwarfing even the Planck frequency by about seven orders of magnitude. But some experiments from Berkeley over the last few years will maybe allow the use of such techniques at the microscopic scale. For more than 25 years now, in every meeting of the HPS (Heavenly Physical Society), Louis de Broglie insists on these frequencies being real physical processes, not just convenient mathematical tools.
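Both numbers in that comparison are easy to reproduce with built-in constants (a minimal sketch; the Planck frequency is taken here as sqrt(c5/(ħ G))):

    f1kg = N[UnitConvert[Quantity[1, "Kilograms"] Quantity[1, "SpeedOfLight"]^2/
        Quantity[1, "PlanckConstant"], "Hertz"]]
    (* ≈ 1.36*10^50 Hz *)
    fPlanck = N[UnitConvert[Sqrt[Quantity[1, "SpeedOfLight"]^5/(Quantity[1, "ReducedPlanckConstant"]*
        Quantity[1, "GravitationalConstant"])], "Hertz"]]
    (* ≈ 1.9*10^43 Hz, i.e. about seven orders of magnitude smaller *)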

So we need to know the value of the Planck constant h. Still today, the kilogram is defined as the mass of the IPK. As a result, we can measure the value of h using the current definition of the kilogram. Once we know the value of h to a few times 10–8 (this is basically where we are right now), we will then define a concrete value of h (very near or at the measured value). From then on, the kilogram will become implicitly defined through the value of the Planck constant. At the transition, the two definitions overlap in their uncertainties, and no discontinuities arise for any derived quantities. The international prototype has lost over the last 100 years on the order of 50 μg weight, which is a relative change of 5 × 10–8, so a value for the Planck constant with an error less than 2 × 10–8 does guarantee that the mass of objects will not change in a noticeable manner.

Looking back over the last 116 years, the value of the Planck constant gained about seven digits in precision. A real success story! In his paper “Ueber das Gesetz der Energieverteilung im Normalspectrum,” Max Planck used the symbol h for the first time and gave the first numerical value for his constant (in a paper published a few months earlier, Max had used the symbol b instead of h):

Excerpts from "Ueber das Gesetz der Energieverteilung im Normalspectrum"

(I had asked Max why he chose the symbol h, and he said he can’t remember anymore. Anyway, he said it was a natural choice in conjunction with the symbol k for the Boltzmann constant. Sometimes one reads today that h was used to express the German word Hilfsgrösse (auxiliary helping quantity); Max said that this was possible, and that he really doesn’t remember.)

In 1919, Raymond Thayer Birge published the first detailed comparison of various measurements of the Planck constant:

Various measurements of the Planck constant

From Planck’s value 6.55 × 10–34 J · s to the 2016 value 6.626070073(94) × 10–34 J · s, amazing measurement progress has been made.

The next interactive Demonstration allows you to zoom in and see the progress in measuring h over the last century. Mouse over the Bell curves (indicating the uncertainties of the values) in the notebook to see the experiment (for detailed discussions of many of the experiments for determining h, see this paper):

History of measurement of the Planck constant  h

There have been two major experiments carried out over the last few years that my original group eagerly followed from the heavens: the watt balance experiment (actually, there is more than one of them—one at NIST, two in Paris, one in Bern…) and the Avogadro project. As a person who built mechanical measurement devices when I was alive, I personally love the watt balance experiment. Building a mechanical device that through a clever trick by Bryan Kibble eliminates an unknown geometric quantity gets my applause. The recent do-it-yourself LEGO home version is especially fun. With an investment of a few hundred dollars, everybody can measure the Planck constant at home! The world has come a long way since my lifetime. You could perhaps even check your memory stick before and after you put a file on it and see if its mass has changed.

But my dear friend Lavoisier, not unexpectedly, always loved the Avogadro project that determines the value of the Avogadro constant to high precision. Having 99.995% pure silicon makes the heart of a chemist beat faster. I deeply admire the efforts (and results) in making nearly perfect spheres out of them. The product of the Avogadro constant with the Planck constant NA h is related to the Rydberg constant. Fortunately, as we saw above, the Rydberg constant is known to about 11 digits; this means that knowing NA h to a high precision allows us to find the value of our beloved Planck constant h to high precision. In my lifetime, we started to understand the nature of the chemical elements. We knew nothing about isotopes yet—if you had told me that there are more than 20 silicon isotopes, I would not even have understood the statement:

Silicon isotopes

I am deeply impressed how mankind today can even sort the individual atoms by their neutron count. The silicon spheres of the Avogadro project are 99.995 % silicon 28—much, much more than the natural fraction of this isotope:

Silicon spheres of the Avogadro project

While the highest-end beam balances and mass comparators achieve precisions of 10–11, they can only compare masses, not realize one. Once the Planck constant has a fixed value, a mass can be constructively realized using the watt balance.

I personally think the Planck constant is one of the most fascinating constants. It reigns in the micro world and is barely visible at macroscopic scales directly, yet every macroscopic object holds together just because of it.

A few years ago I was getting quite concerned that our dream of eternal unit definitions would never be realized. I could not get a good night’s sleep when the value for the Planck constant from the watt balance experiments and the Avogadro silicon sphere experiments were far apart. How relieved I was to see that over the last few years the discrepancies were resolved! And now the working mass is again in sync with the international prototype.

Before ending, let me say a few words about the Planck constant itself. The Planck constant is the archetypal quantity that one expects to appear in quantum-mechanical phenomena. And when the Planck constant goes to zero, we recover classical mechanics (in a singular limit). This is what I myself thought until recently. But since I go to the weekly afternoon lectures of Vladimir Arnold, which he started giving in the summer of 2010 after getting settled up here, I now have strong reservations against such simplistic views. In his lecture about high-dimensional geometry, he covered the symplectic camel; since then, I view the Heisenberg uncertainty relations more as a classical relic than a quantum property. And since Werner Heisenberg recently showed me the Brodsky–Hoyer paper on ħ expansions, I have a much more reserved view on the BZO cube (the Bronshtein–Zelmanov–Okun cGh physics cube). And let’s not forget recent attempts to express quantum mechanics without reference to Planck’s constant at all. While we understand a lot about the Planck constant, its obvious occurrences and uses (such as a “conversion factor” between frequency and energy of photons in a vacuum), I think its deepest secrets have not yet been discovered. We will need a long ride on a symplectic camel into the deserts of hypothetical multiverses to unlock it. And Paul Dirac thinks that the role of the Planck constant in classical mechanics is still not well enough understood.

For the longest time, Max himself thought that in phase space (classical or through a Wigner transform), the minimal volume would be on the order of his constant h. As one of the fathers of quantum mechanics, Max follows the conceptual developments still today, especially the decoherence program. How amazed he was when sub-h structures were discovered 15 years ago! Eugene Wigner told me that he had conjectured such fine structures since the late 1930s. Since then, he has loved to play around with plotting Wigner functions for all kinds of hypergeometric potentials and quantum carpets. His favorite is still the Duffing oscillator’s Wigner function. A high-precision solution of the time-dependent Schrödinger equation followed by a fractional Fourier transform-based Wigner function construction can be done in a straightforward and fast way. Here is how a Gaussian initial wavepacket looks after three periods of the external force. The blue rectangle in the x p plane has an area of h:

How Gaussian initial wavepacket looks after three periods of the external force

Here are some zoomed-in (colored according to the sign of the Wigner function) images of the last Wigner function. Each square has an area of 4 h and shows a variety of sub-Planckian structures:

Zoomed-in images of the last Wigner function
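Readers who want to play with such plots themselves do not need the full machinery. As a minimal sketch of mine (with ħ set to 1, so it is not meant to reproduce the Duffing pictures above), here is the Wigner function of a single Gaussian wavepacket computed directly from its defining integral:

    psi[x_] := Exp[-(x - 1)^2/2 + 2 I x]/Pi^(1/4);   (* a normalized Gaussian wavepacket *)
    w[x_?NumericQ, p_?NumericQ] :=                   (* Wigner function with hbar = 1 *)
      Re[NIntegrate[Conjugate[psi[x + y]] psi[x - y] Exp[2 I p y], {y, -Infinity, Infinity}]]/Pi;
    DensityPlot[w[x, p], {x, -2, 4}, {p, -1, 5}, PlotPoints -> 40, MaxRecursion -> 0,
      FrameLabel -> {"x", "p"}]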

For me, the forthcoming definition of the kilogram through the Planck constant is a great intellectual and technological achievement of mankind. It represents two centuries of hard work at metrological institutes, and cements some of the deepest physical truths found in the twentieth century into the foundations of our unit system. At once a whole slew of units, unit conversions, and fundamental constants will be known with greater precision. (Make sure you get a new CODATA sheet after the redefinition and have the pocket card with the new constant values with you always until you know all the numbers by heart!) This will open a path to new physics and new technologies. In case you make your own experiments determining the values of the constants, keep in mind that the deadline for the inclusion of your values is July 1, 2017.

The transition from the platinum-iridium kilogram, historically denoted by its own special K symbol, to the kilogram based on the Planck constant h can be nicely visualized graphically as a 3D object that contains both characters. Rotating it shows a smooth transition of the projection shape from the kilogram symbol to h, representing over 200 years of progress in metrology and physics:

3D object of both the platinum-iridium kilogram and the Planck constant h

The interested reader can order a beautiful, shiny, 3D-printed version here. It will make a perfect gift for your significant other (or ask your significant other to get you one) for Christmas to be ready for the 2018 redefinition, and you can show public support for it as a pendant or as earrings. (Available in a variety of metals, platinum is, obviously, the most natural choice, and it is under $5k—but the $82.36 polished silver version looks pretty nice too.)

Here are some images of golden-looking versions of KToh3D (up here, gold, not platinum is the preferred metal color):

Golden-looking versions of KToh3D

I realize that not everybody is (or can be) as excited as I am about these developments. But I look forward to the year 2018 when, after about 225 years, the kilogram as a material artifact will retire and a fundamental constant will replace it. The new SI will base our most important measurement standards on twenty-first century technology.

If the reader has questions or comments, don’t hesitate to email me at jeancharlesdeborda@gmail.com; based on recent advances in the technological implications of EPR=ER, we now have a much faster and more direct connection to Earth.

À tous les temps, à tous les peuples! (For all times, for all peoples!)

Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.

New Derivatives of the Bessel Functions Have Been Discovered with the Help of the Wolfram Language! http://blog.wolfram.com/2016/05/16/new-derivatives-of-the-bessel-functions-have-been-discovered-with-the-help-of-the-wolfram-language/ http://blog.wolfram.com/2016/05/16/new-derivatives-of-the-bessel-functions-have-been-discovered-with-the-help-of-the-wolfram-language/#comments Mon, 16 May 2016 16:35:43 +0000 Oleg Marichev http://blog.internal.wolfram.com/?p=30736
Nearly two hundred years after Friedrich Bessel introduced his eponymous functions, expressions for their derivatives with respect to parameters, valid over the double complex plane, have been found.


In this blog we will show and briefly discuss some formerly unknown derivatives of special functions (primarily Bessel and related functions), and explore the history and current status of differentiation by parameters of hypergeometric and other functions. One of the main formulas found (more details below) is a closed form for the first derivative of one of the most popular special functions, the Bessel function J:

The first derivative of the Bessel J function with respect to its parameter

Many functions of mathematical physics (i.e. functions that are used often and therefore have special names) depend on several variables. One of them is usually singled out and designated as the “argument,” while others are usually called “parameters” or sometimes “indices.” These special functions can have any number of parameters. For example (see the Wolfram Functions Site), the Bessel functions Jv(z) and Iv(z), the Neumann function Yv(z), the Macdonald function Kv(z), and the Struve functions Hv(z) and Lv(z) take only one parameter (called the index), while the Whittaker functions Mμ,v(z) and Wμ,v(z) as well as the confluent hypergeometric functions 1F1(a; b; z) and U (a, b, z) take two parameters. The Anger functions Jv(z) and Jμv(z) as well as the Weber functions Ev(z) and Eμv(z) can have one or two parameters (in the cases of two parameters, they are called generalized Anger and Weber functions). The Appell and Humbert functions mostly have from three to five parameters, while more complicated special functions such as the generalized hypergeometric function pFq(a1, …, ap; b1, …, bq; z) can have any finite number of parameters.

Among other properties, differentiation of special functions plays an essential role, since derivatives characterize the behavior of functions when these variables change, and they are also important for studying the differential equations of these functions. Usually, differentiation of a special function with respect to its argument presents no essential difficulties. The largest collection of such derivatives, comprising the first, second, symbolic, and even fractional order for 200+ functions, can be found in the section “Differentiation” at the Wolfram Functions Site (for example, see this section, which includes 21 such derivatives for the Bessel function Jv(z)) or in Y. A. Brychkov’s Handbook of Special Functions. The majority of these formulas are also available directly in the Wolfram Language through the use of the built-in symbols MathematicalFunctionData and EntityValue.

Derivatives with respect to parameters (as distinct from the argument), however, can generally be much more difficult to compute; that is the subject of this blog. Remarkably, the formula above, involving the generic first-order (with respect to the single parameter v) derivative of one of the most commonly occurring special functions in mathematical physics, has only just been discovered in closed form, and this perhaps surprising fact speaks to the difficulty of the general problem. So, using the Bessel J function as a characteristic example, let us take a brief walking tour through the history of special function differentiation.

Derivatives aren’t easy

Often, people even well acquainted with calculus tend to think that integration is difficult and differentiation is easy. The folklore is that “differentiation is mechanics, integration is art.” But the spirit of this saying is true only for elementary functions, where differentiation again produces elementary functions (or combinations thereof). For hypergeometric and other lesser-known special functions, when differentiation is carried out with respect to parameters, it can typically produce complicated functions of a more general class.

The distinction between differentiation with respect to parameters versus differentiation with respect to argument is exemplified by the Bessel J function. The derivative of Bessel J with respect to its argument z has been known for quite some time, and has this relatively simple closed form:

The first derivative of the Bessel J function with respect to its argument
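In the Wolfram Language, this argument derivative even evaluates automatically:

    D[BesselJ[v, z], z]
    (* 1/2 (BesselJ[-1 + v, z] - BesselJ[1 + v, z]) *)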

However, the analytic evaluation of its derivative with respect to parameters (e.g. v in the above equation) is more complicated. Often, derivatives such as these can be written in the form of an integral or infinite series, but those objects cannot be represented in closed form through other simple or well-known functions. Historically, some special functions were introduced for the sole purpose of giving a simple notation for the derivatives of other, more basic functions. For example, the polygamma function arose in this way as a means of representing derivatives of the gamma function.

The generalized hypergeometric function pFq(a1, …, ap; b1, …, bq; z) and its derivatives play an essential role in the solution of various problems in theoretical and applied mathematics (see, for example, this article by L. U. Ancarani and G. Gasaneo concerning the applications of derivatives by parameters in quantum mechanics). The generalized hypergeometric function generates as special cases many of the most-used elementary functions (e.g. the trigonometric, hyperbolic, logarithmic, and inverse trigonometric functions) as well as many families of more specialized functions, including the Bessel, Struve, Kelvin, Anger–Weber, incomplete gamma, and integral (exponential, sine, and cosine) functions. In the case p = 0, q = 1, the generalized hypergeometric function pFq subsumes the Bessel family of functions Jv(z), Iv(z), Yv(z), and Kv(z). To be precise, Bessel J, for example, has the following hypergeometric representation:

Hypergeometric representation for Bessel J
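A quick numerical spot check of this representation (with arbitrarily chosen values v = 1.3 and z = 2.7) confirms that the two sides agree:

    With[{v = 1.3, z = 2.7},
     {BesselJ[v, z], (z/2)^v/Gamma[v + 1] Hypergeometric0F1[v + 1, -z^2/4]}]
    (* two identical numerical values *)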

Interestingly, the history of the function Jv(z) starts nearly exactly 200 years ago. In the 1816–17 proceedings of the Berlin Academy (published in 1819), in the work Analytische Auflösung der Keplerschen Aufgabe, Friedrich Wilhelm Bessel deals with the so-called Kepler equation
M=E-e sin(E), where M is the mean anomaly, E is the eccentric anomaly, and e is the eccentricity of a Keplerian orbit. The solution of this equation can be represented (in today’s notation) through Bessel functions of integer order:

Kepler equation solution in terms of integer-order Bessel J functions

In this first work, Bessel does not yet use the modern notation, but his function appears already implicitly. For example, he uses the following sum (note that Bessel uses Gauss’ notation 𝜋i for i!):

Bessel's sum in old-style notation

In modern times, we could write this as the sum of two Bessel functions, which can be shown in the Wolfram Language:

Simplifying Bessel's sum with the Wolfram Language

Furthermore, this sum is just the first derivative of the single Bessel function -2 a e J(i, e i):

Showing the equality of Bessel's sum with a derivative of a single Bessel J

In a second work from 1824, Bessel uses the nearly modern notation (with JI) to denote his function:

Bessel's second work using the nearly modern notation to denote his function

He also derives fundamental relations for this function:

An early derivative relation derived by Bessel

Various special instances of the generic Bessel function occur already in the writings of Bernoulli, Euler, d’Alembert, and others (see this article for a detailed account). The main reference about Bessel functions today is still the classical monograph by G. N. Watson, A Treatise on the Theory of Bessel Functions, which has been republished and extended many times since 1922.

So while the derivatives of Bessel J with respect to the argument z were known since the beginning of the nineteenth century, it took until the middle of the twentieth century before special cases for derivatives with respect to the index v were found. The derivatives of some Bessel functions with respect to the parameter v at the points v ==0, 1, 2,… and v == 1/2 were obtained by J. R. Airey in 1935, and the expressions for other Bessel family functions were given by W. Magnus, F. Oberhettinger, and R. P. Soni in “Formulas and Theorems for the Special Functions of Mathematical Physics” (1966):

Closed-form Bessel J derivatives known prior to 2002

Generalizations to any half-integer values of v were reported more recently at an international conference on abstract and applied analysis (Hanoi, 2002) as the following:

 First derivatives with respect to parameter of Bessel J at arbitrary half-integer order

These results, along with expressions for the parametric derivatives of Struve functions at integer and half-integer values, were published in 2004–2005. Various new formulas for differentiation with respect to parameters of the Anger and Weber functions, Kelvin functions, incomplete gamma functions, parabolic cylinder functions, Legendre functions, and the Gauss, generalized, and confluent hypergeometric functions can be found in the Handbook of Special Functions: Derivatives, Integrals, Series and Other Formulas. For a short survey and references, see H. Cohl.

But perhaps amazingly, given all this work, the first derivatives of the Bessel functions in closed form for arbitrary values of the parameter were obtained only in 2015 (Y. A. Brychkov, “Higher Derivatives of the Bessel Functions with Respect to the Order” (2016)). They are expressed as combinations of products of Bessel functions and generalized hypergeometric functions; for example:

First derivatives of the Bessel functions in closed form for arbitrary values of the parameter

The plots below give some impressions about the behavior of the Bessel function Jv(z) and its derivative on the domains of interest. First, we plot (in the real v-z plane) the expression giving the first derivative of Jv(z) with respect to v (see the first equation of this article):

Plotting the first derivative with respect to parameter of Bessel J in the real v-z plane

For a fixed index, specifically v = 𝜋, we show the Bessel function J, together with its first two derivatives:

Bessel J and its first two derivatives at v=Pi

It is interesting to note that the first two derivatives (with respect to z and with respect to v) have nearly the same zeros.

How did we get here?

It is remarkable that even almost 300 years after the introduction of a classical function (the Bessel function J0(z) was introduced by Daniel Bernoulli in 1732), it is still possible to find new and relatively simple formulas relating to such functions. The actual derivation of the formula introduced above for Jv(1,0)(z) (along with the related results for Iv(1,0)(z) and the Neumann, Macdonald, and Kelvin functions) was complicated, and was achieved using the help of the Wolfram Language. Details of the derivation are published, and here we give a rough sketch of the approach that was used.

First, we recall that the Bessel and other functions in which we are interested for this program are of the hypergeometric type; differentiation by parameters of the generic hypergeometric function of a single variable pFq(a1, …, ap; b1, …, bq; z) requires more complicated functions of the hypergeometric type with more than one variable (see this article by L. U. Ancarani and G. Gasaneo). The first derivative with respect to an upper “parameter” ak, and all derivatives of symbolic integer order m with respect to a “lower” parameter bk of the generalized hypergeometric function, can be expressed in terms of the Kampé de Fériet hypergeometric function F^(A,B,C)_(P,Q,S) of two variables by the following formulas:

First derivatives of Hypergeometric pFq with respect to parameters in terms of the Kampé de Fériet function

Above, the Kampé de Fériet hypergeometric function is defined by the double series (see defining expressions here and here).

Kampé de Fériet hypergeometric function is defined by a double series

The Kampé de Fériet function can be considered as a generalization of the hypergeometric function to two variables:

The Kampé de Fériet function considered as a generalization of the hypergeometric function to two variables

A corresponding regularized version of the function can also be defined by replacing each Pochhammer symbol in the denominator of the defining series with the corresponding gamma function (i.e. (b)m → Γ(b + m)).

The Kampé de Fériet function can be used to obtain a representation of the derivatives of the Bessel function J with respect to its parameter:

First derivative of Bessel J with respect to parameter in terms of Kampé de Fériet

This expression coincides with the simpler formula above, which involves hypergeometric functions of one variable, though this is not necessarily easy to see (we don’t yet have a fully algorithmic treatment for the reduction of multivariate hypergeometric functions into expressions containing only univariate hypergeometric functions, and this has contributed to the difficulty in discovering formulas like the one discussed here).

Double series, like the one given above defining the generalized hypergeometric functions of two variables, also arise in the evaluation of Mellin transforms of products of three Meijer G-functions:

Mellin transform of the product of three Meijer G-functions

The right side of this formula includes a Meijer G-function of two variables that generically can be represented, in the non-logarithmic case, as a finite sum of Kampé de Fériet hypergeometric functions with some coefficients, by analogy with these two formulas. Finally, the Kampé de Fériet function also arises in the separation of the real and imaginary parts of hypergeometric functions of one variable, z==x+ⅈy, with real arguments:

Kampé de Fériet function also arises in the separation of the real and imaginary parts of hypergeometric functions of one variable, z==x+iy

It should be mentioned that in recent years the hypergeometric functions of many variables have found growing applications in the realms of quantum field theory, chemistry, engineering, and, in particular, communication theory and radio location. Many quite practical results can be represented using such functions, and consequently, most of the principal results in this field are obtained in the applied science literature. By contrast, the theory of such functions has so far been developed only relatively weakly within pure mathematics.

Symbolic derivatives in the Wolfram Language

We are lucky here at Wolfram to have the originator of these new and exciting symbolic derivative formulas, Yury Brychkov, as part of our team, enabling us to bring this constantly developing area of research work into the grasp of our users. We are also lucky to have at our disposal the Entity framework of the Wolfram Language, which allows, among other things, for the integration of cutting-edge new results such as these on the timescale of weeks or days, in a computable format, easily accessible from a variety of Wolfram Language platforms. For example, in Mathematica, one can evaluate the following:

Obtaining derivatives with respect to parameters in the Wolfram Language

This obtains the principal formula of this article. We can attempt to confirm the formula numerically by first substituting global values of v and z and activating; we get:

Substituting global variables in for v and z and activating

Next, we separate the left- and right-hand sides (to allow for floating-point numerical errors) and substitute random values for the argument and parameter to obtain:

Numerical verification of derivative formula using random values for argument and parameter

The numerical derivative of the left-hand side is computed internally in the Wolfram Language via a limiting procedure. The equality of left- and right-hand sides, and therefore the correctness of the original derivative formula, is thus apparent.

Aside from the many new results for symbolic and parametric derivatives alluded to in this article and available only through EntityValue (though deeper integration into future versions of the Wolfram Language is an ongoing effort), a large number of long-standing results in this field have already been implemented in the Mathematica kernel and the core Wolfram Language. Because of their complexity, such derivatives by parameters are not evaluated automatically, but they can be seen using the FunctionExpand command. For example:

FunctionExpand will explicitly evaluate some derivatives
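One classical special case that FunctionExpand should be able to produce this way (assuming a sufficiently recent Wolfram Language version) is the derivative with respect to the order at v = 0, which reduces to a Bessel Y function:

    FunctionExpand[Derivative[1, 0][BesselJ][0, z]]
    (* expected: 1/2 Pi BesselY[0, z] *)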

At second and higher orders, derivatives of Bessel and related functions can still be expressed in terms of the Kampé de Fériet hypergeometric function F^(A,B,C)_(P,Q,S) of two variables, but the resulting formulas can be rather complicated, and can include the Bell Y polynomials:

At higher order, derivatives will be more complicated and can involve Bell Y

The latter arise from expressing the Bessel function Jv(z) as the composition of the function 0F1(; v+1; w) with the function w == -z2/4:

Expressing Bessel J as the composition of two functions

We utilize Faà di Bruno’s formula, which describes the nth derivative of a composition of m functions fi(z), 1 ≤ i ≤ m. In the m=2 case (see here and here), we obtain, for instance, an expression involving the following:

Bell Y expressions arising in higher-order derivatives from Faà di Bruno's formula

The corresponding formula for generic m and n can be obtained and verified in the Wolfram Language as:

Verifying the nth-order derivative of the composition of m functions for arbitrary m and n
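As an additional sanity check (added here for illustration, not part of the original derivation), here is the m = 2, n = 3 instance of Faà di Bruno’s formula written with BellY and compared against a direct derivative:

    With[{n = 3},
     Simplify[Sum[Derivative[k][f][g[x]]*
         BellY[n, k, Table[Derivative[j][g][x], {j, 1, n - k + 1}]], {k, 1, n}] ==
       D[f[g[x]], {x, n}]]]
    (* True *)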

While the Bell Y’s—for which no generic closed forms exist—are generally needed to express higher-order derivatives, as this blog was headed to press one of the authors, Yury Brychkov, even found a way to eliminate Bell Y from the nth derivatives with respect to the parameter of the Bessel functions, leaving us with the remarkable

Eliminating Bell Y from the nth-order derivative with respect to parameter of the  Bessel function J

For the convenience of interested users who would like to see in one place all known formulas for derivatives of special functions with respect to parameters (including those listed above), we have collected and presented these formulas in the following ways:

    1. In a Grid format (download here).
    2. In notebook format (download here).
    3. The subset of formulas that were known prior to circa 2009 can be seen on the Wolfram Functions Site in the “Differentiation” sections of the various functions (for example, see this page).

In our next blog, we will continue presenting closed forms for derivatives, for a collection of 400+ functions with generic rules for derivatives of symbolic and fractional order. In the meantime, we hope you enjoy exploring on your own the world of special function differentiation in the Wolfram Language!

Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.

Special Event: New Wolfram Language Resources for the Classroom http://blog.wolfram.com/2016/05/13/special-event-new-wolfram-language-resources-for-the-classroom/ http://blog.wolfram.com/2016/05/13/special-event-new-wolfram-language-resources-for-the-classroom/#comments Fri, 13 May 2016 16:31:22 +0000 Rob Morris http://blog.internal.wolfram.com/?p=30814 Earlier this year we launched Wolfram Programming Lab as the place to start learning the Wolfram Language. And since launch, we’ve received a lot of feedback and support from educators and students interested in using Programming Lab in their classrooms.

Programming Lab was conceived and designed with teaching in mind, and to help make Programming Lab the best possible learning environment, we’ve developed some new tools for both students and teachers. We invite you to preview these new materials at a special virtual event, New Resources for the Classroom: Virtual Workshop for Educators.

New Resources for the Classroom: Virtual Workshop for Educators

Programming Lab is built on two major components utilizing two different learning styles to teach coding in the Wolfram Language—Explorations are based on a jump-right-in-and-explore approach, and Stephen Wolfram’s book An Elementary Introduction to the Wolfram Language provides the basis for a systematic approach. We are introducing new educator-focused functionality for both of these components.

First are the Explorations. These are bite-sized coding exercises designed to create something or answer a specific question. We are enhancing these by introducing Teacher’s Editions, which provide goals, procedures, and helpful comments for each step within an Exploration. In addition to these, we have also developed “Programming Lab modules.” These are handy planning guides that are tailor-made to fit a specific duration: a three-day introduction, a five-day series, and so on. They range across topics that include astronomy, geography, math, and many others. These modules provide educators with the material they need to present Explorations that will complement the curriculum, or as a fun way to introduce a topic while simultaneously adding a programming language to their toolkits.

Wolfram Programming Lab

Explorations work well as targeted activities for students, but some students will want a more rigorous framework for learning the Wolfram Language. Those students will benefit from working through Stephen Wolfram’s An Elementary Introduction to the Wolfram Language, which is built into Programming Lab. At the end of each section of the book, there are exercises to help solidify your understanding. We’re enhancing these exercises by providing instant, automated grading of your answers.

To preview these new materials and to be the first to try them out, please join us at the virtual event on May 17, 2016, from 4–5pm US EDT (8–9pm GMT). All are welcome, as no programming experience is necessary! This event is part of a series of workshops for educators, ranging in focus from how to use Programming Lab to teaching computational thinking principles. Recordings of past events are available online. You can register for the latest educator workshop here.

Computational Stippling: Can Machines Do as Well as Humans? http://blog.wolfram.com/2016/05/06/computational-stippling-can-machines-do-as-well-as-humans/ http://blog.wolfram.com/2016/05/06/computational-stippling-can-machines-do-as-well-as-humans/#comments Fri, 06 May 2016 16:37:56 +0000 Silvia Hao http://blog.internal.wolfram.com/?p=30628 Stippling in art

Stippling is a kind of drawing style using only points to mimic lines, edges, and grayscale. The entire drawing consists only of dots on a white background. The density of the points gives the impression of grayscale shading.

Back in 1510, stippling was first invented as an engraving technique, and then became popular in many fields because it requires just one color of ink.

Here is a photo of a fine example taken from an exhibition of lithography and copperplate art (the Centenary of European Engraving Exhibition held at the Hubei Museum of Art in March 2015; in case you’re curious, here is the museum’s official page in English).

Photo of a piece from lithography and copperplate art exhibit at Hubei Museum of Art

The art piece is a lithographic print. Viewed from one meter away, it shows remarkable detail and appears realistic. However, looking at it from a much closer distance, you can see it’s made up of hundreds of thousands of handcrafted stipples, with the original marble stone giving it even more texture.

Lithographic print from one meter away

From my point of view, this artistic stippling method is like a simulation of variable solidity using small dotted patterns, like a really dense dot matrix printer, except these patterns are handmade. It is fascinating looking at lithographs, observing how a macro reality emerges from uncountable random dots: the mirror reflection on the floor, the soft feel of the cloth, and the mottling on the building engraved by history. (Photos were taken from the same exhibition.)

Photos of lithographs from lithography and copperplate exhibit at Hubei Museum of Art

As a technique invented five hundred years ago, stippling has become popular in so many fields way beyond its original intention because of its simplicity. It was once the standard for technical illustrations, especially in math and mechanics books. Even today, many people are interested in it. Experts from different domains are still using stippling in daily work.

Stippling

Typical stippling almost always uses one or two colors. Actually, one can create all sorts of textures and tones without introducing artifacts—one cannot achieve both with techniques like hatching or line drawing—just by varying the density and distribution of the dots. That makes stippling an excellent choice when archaeologists, geologists, and biologists are doing fieldwork and want to record things they find immediately. They don’t want to travel along with heavy painting gear; a single pen is good enough for stippling. Even in modern days, compared to photography, stippling still holds some advantages despite requiring a large amount of time. For example, the author can mark and emphasize any features he or she wants right on the drawing; thus, the technique is an important skill—one that is still being taught and researched today.

Here to support my point, I made these dinnertime paper-napkin stipplings (if they look like a complete disaster to you, please forgive me, as I’m not an expert and they only took about five minutes):

Dinnertime paper-napkin stipplings

It is also very interesting to note that stippling is such a natural way to distribute dense points that the strategy is actually adopted by many living things, like this one in my dish:

Example of stippling found in nature

In the computer age, there is a special computer graphics domain called non-photorealistic rendering. Stippling is one of its basic artistic painting styles, and can be used as the foundation of many other synthetic styles, such as mosaic, stained glass, hatching, etc.

Back to the art side. Stippling shows interesting connections with Divisionism and Pointillism, which both belong to Neo-Impressionism.

A Sunday Afternoon on the Island of La Grande Jatte by Georges Seurat

In popular art, people are still attracted to this ancient art. There are even stipple tattoos on Pinterest. You can watch people doing it with stunning effect. Perhaps you can even learn stippling yourself on the internet.

Needless to say, stippling as a handcrafted art really requires lots of patience. The manual process is time consuming. Moreover, it takes a lot of practice to become skillful.

After trying so hard on the napkin without satisfaction, I had a closer look at the masterpieces. I noticed that the points in the drawings seem random but are actually ordered, a bit like the structure seen in a quasicrystal:

Random dots versus ordered dots

In the above comparison, the left figure corresponds to randomly distributed points, while the right figure corresponds to a distribution like those found in stippling drawings. Obviously, they are very different from each other.

It turns out when using points to approximate grayscale shading, a random set of points with uniform distribution is usually not good enough.

To illustrate that, both images in the comparison below have 63,024 points sampled from the same distribution with the same contrast. If we think of the original grayscale image as a two-dimensional distribution, the local density of the two resulting images is the same at any corresponding position. Therefore, the contrast of the two images must be the same as well, which can be illustrated by resizing them to a small enough scale so the detail of the points won’t distract our eyes.

Comparing the stippling distribution two ways with the same image

Nevertheless, the one on the right is a good stippling (or, as it’s usually called, well spaced), while the one on the left has too many unwanted small artifacts—clumps and voids that do not exist in the original image.

Now it is not very hard to see that in the stippling graphics, the triangle formed with any three points nearest to each other is nearly equilateral. This turns out to be the essential property of a “good” stippling.

Given that unique micro scale character, I couldn’t help wondering: is it possible to generate a stippling drawing from any image programmatically?

The answer is yes—as I have just shown one of the results from my generator, and which I will describe in detail in the rest of this blog.

Actually, there are not only lots of papers on this topic, but also well-made applications.

Back to the first comparison example. In order to quantitatively reveal the “well-spaced” property, I draw the DelaunayMesh of the points and observe them:

Drawing the DelaunayMesh points

With the meshes, I can confirm my equilateral triangle conjecture numerically by graphing the distribution of the interior angles of the cells in the meshes:

Graphing the distribution of the interior angles of the cells in the meshes
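If you want to reproduce this kind of measurement, here is a rough sketch of mine (not the notebook’s exact code) that histograms the interior angles of the Delaunay cells of a random point set:

    pts = RandomReal[{-1, 1}, {1000, 2}];
    tris = First /@ MeshPrimitives[DelaunayMesh[pts], 2];    (* triangle vertex lists *)
    angle[{a_, b_, c_}] := VectorAngle[b - a, c - a];         (* interior angle at vertex a *)
    angles = Flatten[
       {angle[{#1, #2, #3}], angle[{#2, #3, #1}], angle[{#3, #1, #2}]} & @@@ tris];
    Histogram[angles/Degree, {0, 180, 5}]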

Recalling the duality between a Delaunay mesh and a Voronoi mesh, and with the help of lots of academic papers, I eventually inferred that my equilateral Delaunay mesh corresponds to the so-called centroidal Voronoi diagram (CVD). Indeed, the CVD is the de facto method for computer-generated stippling. Moreover, there is a dedicated algorithm for it due to Lloyd:

    1. Generate n random points inside the region of interest
    2. Generate the Voronoi diagram of the n points
    3. Find the centroid (i.e. center of mass) of each Voronoi cell
    4. Use the n centroids as the resulting points
    5. If satisfied, stop; otherwise, return to step 2

Here the key steps are the Voronoi diagram generation and the centroid finding. The former is a perfect match for the bounded version of the built-in function VoronoiMesh. The latter, as the Voronoi cells for a closed region are always closed convex polygons, has a simple formula that I’d like to briefly describe as follows for completeness. If you’re not into math, it can be skipped safely without harming the joy of stippling!

Now suppose I have a cell defined by n vertices ordered clockwise (or counterclockwise) {P1=(x1, y1), P2=(x2, y2),… Pn=(xn, yn)}; then its centroid C can be determined as:

Determining the centroid C
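In code, that formula is only a few lines. Here is one possible direct implementation (my own version; the findCentroid function defined a bit further below serves the same purpose):

    polygonCentroid[pts_List] := Module[{p = pts, q, cross, a},
      q = RotateLeft[p];                                  (* P(i+1), with P(n+1) = P(1) *)
      cross = #1[[1]] #2[[2]] - #2[[1]] #1[[2]] & @@@ Transpose[{p, q}];
      a = Total[cross]/2;                                 (* signed polygon area *)
      Total[(p + q) cross]/(6 a)]                          (* the centroid {Cx, Cy} *)

    polygonCentroid[{{0, 0}, {1, 0}, {1, 1}, {0, 1}}]
    (* {1/2, 1/2} *)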

As a test, we generate 2,500 uniformly distributed random points in a square region [-1,1]×[-1,1]:

Generating 2,500 uniformly distributed random points in a square region

Its Voronoi diagram is:

Voronoi diagram

Lloyd’s algorithm can be expressed as the following findCentroid function:

Expressing Lloyd's algorithm as findCentroid function

Here, just to illustrate the principle of the algorithm, and to make things simpler to understand by ensuring that the code structure is similar to the weighted centroidal Voronoi diagram (which I will describe later), I have defined my own findCentroid function. Note that for uniform cases, there is the much more efficient built-in function, MeshCellCentroid:

Using MeshCellCentroid function

Each Polygon from the Voronoi mesh can be extracted with MeshPrimitives[...,2], which then should be piped to the findCentroid to complete one iteration:

Extracting each polygon from the Voronoi mesh
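Under the same assumptions, one full iteration can be wrapped up as a small helper (the name lloydStep is mine):

    (* one Lloyd iteration: points -> bounded Voronoi cells -> cell centroids *)
    lloydStep[points_] :=
      findCentroid /@ MeshPrimitives[VoronoiMesh[points, {{-1, 1}, {-1, 1}}], 2]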

Now we’re ready to animate the first 50 iteration results to give a rough illustration of the CVD:

Animating the first 50 iteration results to give a rough illustration of the CVD

Animating the first 50 iteration results to give a rough illustration of the CVD
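With a helper like lloydStep above, the animation could be produced roughly as follows:

    (* first 50 iterations, shown as an animation of the point sets *)
    steps = NestList[lloydStep, pts, 50];
    ListAnimate[
     Graphics[{PointSize[Small], Point[#]}, PlotRange -> {{-1, 1}, {-1, 1}}] & /@ steps]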

There are various ways to show the difference between the point distributions before and after the process:

Point distributions before and after the process

For example, I can use NearestNeighborGraph to illustrate their connectivity, which will highlight the unwanted voids in the former case:

Using NearestNeighborGraph to illustrate their connectivity
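One possible way to draw such connectivity pictures (the number of neighbors is an arbitrary choice of mine):

    (* connect each point to its 4 nearest neighbors, before and after the relaxation *)
    refinedPts = Last[steps];   (* the points after 50 Lloyd iterations, from the sketch above *)
    {NearestNeighborGraph[pts, 4], NearestNeighborGraph[refinedPts, 4]}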

Alternatively, as was shown earlier, I can compare the statistics of the interior angles. After Lloyd’s algorithm, the mesh angles are much nearer to 60°:

Comparing the statistics of the interior angles
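A sketch of how such angle statistics can be gathered (the helper name and binning are mine):

    (* interior angles, in degrees, of all triangles in the Delaunay mesh of a point set *)
    interiorAngles[points_] := Module[{tris},
      tris = MeshPrimitives[DelaunayMesh[points], 2][[All, 1]];
      Flatten[
        Function[{a, b, c},
          {VectorAngle[b - a, c - a], VectorAngle[a - b, c - b], VectorAngle[a - c, b - c]}
        ] @@@ tris]/Degree]

    Histogram[{interiorAngles[pts], interiorAngles[refinedPts]}, {1}, "PDF"]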

Finally, to give another intuitive impression of the “well-spaced” property, I'd like to compare the discrete Fourier transform of the initial points with that of the refined points:

Comparing the discrete Fourier transform of the initial points with the one of the refined points

Basically, the Fourier transform measures the significance of periodic structures with different periods. The ones with the largest periods (say, a structure repeating every 100 pixels) correspond to the positions nearest the center, and the ones with the smallest periods (say, a structure repeating every 2 pixels) correspond to the positions furthest from it. The value at a position indicates the significance of the corresponding structure; in the plots above, larger values are shown in whiter colors. The central white spot corresponds to the sum of all the data, which is not of interest here.

In the initial case, the significance of different periodic structures is almost the same everywhere, indicating a uniform random distribution (so-called white noise). The refined case is more interesting. The large-scale periodic structures are all flattened out, shown as a black disk around the center spot, which means the points are distributed uniformly at the macro scale. Around that dark disk there is a bright circle: its circular shape indicates structures with nearly the same period but in different directions, namely the approximately equilateral triangles of the mesh, all of nearly the same size but with random orientations. The outer “rings” correspond to higher-order structures, which would not appear in a perfectly refined case. So the refined points achieve both isotropy and the suppression of low-frequency structure. This is another way to understand the term “well spaced,” and it connects the property to the concept of blue noise.
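For reference, one way to produce such spectra is to rasterize each point set into a binary array and display the magnitude of its centered discrete Fourier transform (the raster size, the log scaling for display, and the function name are my own choices):

    (* magnitude spectrum of a point set, zero frequency shifted to the center *)
    spectrum[points_, n_: 512] := Module[{arr, idx},
      arr = ConstantArray[0., {n, n}];
      idx = Clip[Ceiling[(points + 1.)/2. n], {1, n}];   (* map [-1,1]^2 to array indices *)
      (arr[[#1, #2]] = 1.) & @@@ idx;
      ImageAdjust[Image[Log[1. + RotateLeft[Abs[Fourier[arr]], {n/2, n/2}]]]]]

    {spectrum[pts], spectrum[refinedPts]}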

So far, it seems I have solved the puzzle of stippling a uniform area, but how about a non-uniform area—say, the Lena example?

After some trials, I realized Lloyd’s algorithm could not be naively adapted here, as it always smoothed out the distribution and resulted in a uniform CVD. It looked like I had to find another way.

The answer turned out to be a simple modification of Lloyd's algorithm. However, while I was thinking about the problem, a totally different idea showed an interesting possibility. It's a mathematically beautiful method, which I abandoned only because I did not have enough time to investigate it deeply. So I'd like to talk about it briefly.

Recalling that I was looking for a Delaunay mesh, with cells that resemble equilateral triangles as much as possible, one thing that immediately came to my mind was the conformal map, which transforms between two spaces while preserving angles locally. Therefore, under a carefully designed conformal map, a good stippling on a uniform area is guaranteed to be transformed to a good one on a specified non-uniform area.

A simple example of a conformal map is a complex holomorphic function f, say f(z) = (z - z^2/2) exp(-(z-1)^2/5)

Using a complex holomorphic function
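In the Wolfram Language, this example map can be written as:

    (* the example holomorphic function from the text *)
    f[z_] := (z - z^2/2) Exp[-(z - 1)^2/5]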

This transforms the stippling points refinedPts, which are uniformly distributed on the square [-1,1]×[-1,1], into a new point distribution transPts:

Using transPts for a new points distribution
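A sketch of the transformation, treating each point {x, y} as the complex number x + I y:

    (* push the uniformly distributed stippling points through the conformal map *)
    transPts = ReIm[f[#[[1]] + I #[[2]]]] & /@ refinedPts;
    Graphics[{PointSize[Small], Point[transPts]}]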

The result looks very nice in the sense of being well spaced, which can also be confirmed with its Delaunay mesh, its Voronoi mesh, and its connectivity:

Delaunay mesh

Voronoi mesh

Connectivity

So far, so good! However, this theoretically elegant approach is not easily generalizable. Finding the right f for an arbitrary target distribution (e.g. the grayscale distribution of an image) requires highly sophisticated mathematics. My numerical trials all failed, so I'm not going to discuss the theoretical details here. If you are interested in them, there is a dedicated research field called computational conformal mapping.

Despite the elegance of the conformal mapping, I was forced back to Lloyd’s algorithm. I had to give it another thought. Yes, it doesn’t work on non-uniform cases, but it’s close! Maybe I can generalize it. After reading through some papers, I confirmed that thought. The popular method for stippling non-uniform areas indeed comes from a modification of the CVD, developed by Adrian Secord, called the weighted centroidal Voronoi diagram.

The new algorithm is similar to Lloyd’s, except that in step three, when looking for the centroid of the Voronoi cell, a variable areal density ρ(P) is considered, which is proportional to the grayscale at the location P. Thus instead of using equation 1, I can calculate the centroid according to the definition:

Calculating the centroid
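Written out, the density-weighted centroid of a cell Ω is

    C = ( ∫_Ω P ρ(P) dA ) / ( ∫_Ω ρ(P) dA ),

where both integrals run over the area of the cell.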

Clearly these integrals are much more time-consuming than equation 1. In his paper and master's thesis, Secord presents an efficient way to compute them that involves precomputing certain integrals. However, I noticed (without theoretical proof) that the weighted CVD can be approximated well in a much cheaper way if we accept a compromise: instead of sticking to the exact centroid formula, we keep the core idea of moving C closer to vertices with larger weights.

My new idea was simple and naive. For a cell of n vertices {P1,…,Pn}, the algorithm acts as follows:

Cell of n vertices

    1. Compute the geometric centroid C
    2. Compute the weights of the vertices as {w1,…,wn}
    3. Compute the normalized weights as Wk = wk/max{w1,…,wn}
    4. For every vertex Pk, move it along the vector from C to Pk by the factor Wk to a new position P′k (so the vertex with the largest weight does not move, while the vertex with the smallest weight moves the most, ending up closest to C)
    5. Compute the geometric centroid C′ of the new cell defined by {P′1,…,P′n} as the final result

Written in the Wolfram Language, it’s this findCentroidW function:

findCentroidW function
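The actual definition is shown in the image above; the following is only my own sketch of the five steps, assuming the density function densityFunc[x, y] that is constructed later in the post and reusing the findCentroid sketch from the uniform case:

    (* approximate weighted centroid: pull the cell toward its heavier vertices *)
    findCentroidW[Polygon[verts_]] := Module[{c, w, wn, moved},
      c = findCentroid[Polygon[verts]];                   (* step 1: geometric centroid *)
      w = densityFunc @@@ verts;                          (* step 2: vertex weights from the density *)
      wn = w/Max[w];                                      (* step 3: normalized weights *)
      moved = MapThread[c + #2 (#1 - c) &, {verts, wn}];  (* step 4: P'k = C + Wk (Pk - C) *)
      findCentroid[Polygon[moved]]                        (* step 5: centroid of the new cell *)
     ]

In practice a small positive floor on the weights guards against cells whose vertices all fall in completely white areas.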

Since the convergence of the algorithm was not obvious (and since it's quite time-consuming), I decided to stop early, after a few dozen iterations, during the numerical experiments.

Now that everything was set up, I felt confident enough to try my functions in the real world! The first example I chose was the Lena photo.

I imported the original image, keeping both the grayscale and color versions for later use in the artistic rendering section:

Importing original Lena photo and converting to grayscale

For the convenience of Interpolation, I used ColorNegate, so that dark areas of the original image correspond to large values (and thus high stipple density). For a better visual result, I also enhanced the edges with large gradients:

Converting image to ColorNegate
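The import itself is shown as an image; here is a rough sketch of this preprocessing under my own assumptions (a hypothetical local file name, and GradientFilter as one plausible way to boost the strong edges):

    (* hypothetical local copy of the photo; keep the color and grayscale versions *)
    lenaColor = Import["lena.png"];
    lenaGray  = ColorConvert[lenaColor, "Grayscale"];

    (* negate so that dark areas of the original become large density values,
       then emphasize the regions with large gradients *)
    lenaDensityImg = ImageAdjust[ImageAdd[ColorNegate[lenaGray], GradientFilter[lenaGray, 3]]]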

I rescaled the image coordinates to the rectangle region [-1,1]×[-1,1] for convenience, and used Interpolation on the whole image to get a smooth description of the grayscale field:

Using Interpolation to get a description of the grayscale field
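A minimal sketch of this step, using ListInterpolation for brevity (the post uses Interpolation; the variable names and the rescaling details are my own assumptions):

    (* smooth grayscale field on [-1,1]^2: reverse the rows so y increases upward,
       transpose so that the first argument of the interpolation is x *)
    data = Reverse[ImageData[lenaDensityImg]];
    densityFunc = ListInterpolation[Transpose[data], {{-1, 1}, {-1, 1}}];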

To get a good initial point distribution, I sampled the points so that the local density is proportional to the local grayscale (though this doesn't need to be precise, as the weighted Voronoi process will smooth the distribution anyway). Taking advantage of ContourPlot, I generated a few regions according to the level sets of densityFunc:

Using ContourPlot to generate a few regions according to the level set of the densityFunc
Using ContourPlot to generate a few regions according to the level set of the densityFunc

For each region in levelRegions, I first sampled points inside it on a regular grid, with the area of the grid cell inversely proportional to the level counter of the region. I quickly noticed that a regular grid can be a steady state of my algorithm, so to ensure an isotropic result, some initial randomness was needed. For that purpose, I added a dithering effect to the points, with its strength specified by a parameter 0≤κ≤1 (0 gives no dithering, while 1 gives a globally random distribution):

Adding dithering effect on point, with its strength specified by a parameter 0<=κ<=1

The total number of points I had sampled as initPts was huge:

Number of points sampled as initPts

However, their quality was poor:

Quality of the number of points

The weighted Voronoi relaxation process, which is similar to the one in the uniform case, improves the distribution dramatically, though the computation was a bit slow due to the number of points. Notice that I forced an early stop after 30 iterations:

The weighted Voronoi relaxation process

In spite of only 30 iterations, the result was in my opinion fairly good (note that some visual artifacts in the following image are due to the rasterizing process during display):

Result of weighted Voronoi relaxation process
Result of weighted Voronoi relaxation process

I then applied a similar visual analysis to it and found that the connectivity graph forms a rather interesting pattern of self-similar, multi-resolution tiles:

Applying the connectivity graph to the image
Applying the connectivity graph to the image

Statistics on the interior angles of the corresponding DelaunayMesh also indicated that I had indeed achieved a well-spaced distribution:

Statistics on the interior angles of the corresponding DelaunayMesh

Now I felt quite satisfied!

However, as a person who likes to reinvent my own wheel, I would not stop here!

As I mentioned before, some artistic rendering effects can be simulated based on the stippling result. Encouraged by the result above, I decided to try replicating some of them.

One effect is a simple hatching.

Inspired by the example under the Properties & Relations section in the documentation of GradientOrientationFilter, I initialized a stroke at each stippling point, with its length controlled by the local grayscale and its orientation by the local gradient direction:

Producing the hatching effect using GradientOrientationFilter
Producing the hatching effect using GradientOrientationFilter
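The actual code is in the images above; below is only my own rough sketch of the idea, where stipplePts is assumed to hold the relaxed stippling points, lenaGray comes from the earlier sketch, and the coordinate mapping and stroke-length scaling are arbitrary choices:

    (* local gradient orientation and (negated) grayscale as plain arrays; rows run from the top *)
    orient = ImageData[GradientOrientationFilter[lenaGray, 3]];
    gray = ImageData[ColorNegate[lenaGray]];
    {h, w} = Dimensions[gray];

    (* map a point in [-1,1]^2 (y pointing up) to array indices *)
    toIndex[{x_, y_}] := {Clip[Round[(1 - y)/2 (h - 1) + 1], {1, h}],
      Clip[Round[(x + 1)/2 (w - 1) + 1], {1, w}]}

    (* a short stroke at each stippling point, oriented along the local edge direction *)
    stroke[pt_] := Module[{i, j, θ, len},
      {i, j} = toIndex[pt];
      θ = orient[[i, j]] + Pi/2;        (* perpendicular to the gradient, i.e. along the edge *)
      len = 0.004 + 0.02 gray[[i, j]];  (* darker areas get longer strokes; arbitrary scaling *)
      Line[{pt - len {Cos[θ], Sin[θ]}, pt + len {Cos[θ], Sin[θ]}}]]

    Graphics[{AbsoluteThickness[0.3], stroke /@ stipplePts}]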

There seemed to be some artifacts in the dark regions, possibly due to the naive approach, but the overall result was acceptable to me.

Another style I tried was Pointillism.

For this simulation, I had to take care of the colors:

Pointillism used on Lena image starting with the colors

Then I added random deviations to the colors of each point:

Adding random deviations to the colors of each point

I finalized the process with the morphological Opening operation and matched the histogram with the original color image using HistogramTransform:

Finalizing the process with Opening and matching the histogram with the original color image using HistogramTransform
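The code for these steps is shown in the images above; here is a compressed sketch of the whole rendering under my assumptions (stipplePts again holds the relaxed stippling points, and the disk radius, color jitter, and raster size are arbitrary):

    (* color at a stippling point, sampled from the original image and randomly perturbed *)
    pointColor[pt_] := Module[{v},
      v = ImageValue[lenaColor, (pt + 1)/2 ImageDimensions[lenaColor]]; (* assumes 3-channel RGB *)
      RGBColor @@ Clip[v + RandomReal[{-0.08, 0.08}, 3], {0, 1}]]

    (* paint a small disk at every point, rasterize, smooth with a morphological Opening,
       then match the histogram of the original color image *)
    dotsImage = Rasterize[
      Graphics[{pointColor[#], Disk[#, 0.006]} & /@ stipplePts,
       PlotRange -> {{-1, 1}, {-1, 1}}, Background -> White], "Image", ImageSize -> 600];
    HistogramTransform[Opening[dotsImage, 1], lenaColor]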

However, I’d like to try something more “Impressionist.”

The rendering in the last example used strokes of identical size, but in practice stroke size often varies with local properties of the subject being drawn, so I tried to include that characteristic.

For the local property to be reflected, I chose ImageSaliencyFilter (though it is, of course, totally fine to choose other filters):

ImageSaliencyFilter

Like the densityFunc, I interpolated the coarseMask and generated some level regions:

Interpolating the coarseMask and generating level regions

Now the stippling points could be grouped according to the coarseLevelRegions:

Stippling points grouped according to coarseLevelRegions

I then composed the stroke template as a square centered at the given point, with its color determined and randomized in a way similar to that in the last example:

Composing the stroke template

Now I was ready to paint the picture layer by layer (note that the following code can be quite time-consuming if you try to display coarseLayers!):

Painting the picture layer by layer

Composing the layers, I got the final Impressionist result:

Impressionist result

Note the vivid local color blocks, especially in the feather and hat regions, and how the rendering still preserves an accurate scene at the macro scale.

With such an accomplishment in hand, I couldn't wait to try the procedure on other pictures. One that I want to share here is a photo of a beautiful seacoast taken in Shenzhen, China.

The original photo was taken on my smartphone. I chose it mainly because of the rich texture of the stones and waves. With a bit of performance optimization, I got some quite gorgeous results in a reasonable time. (Precisely speaking, that's 35,581 stipples finished in 557 seconds! Much faster than my bare hands!)

Stippling of beach photo

The hatching and Pointillism versions were also quite nice:

Hatching of beach photo

Pointillism of beach photo

For now, my adventure stops here. However, there are still many things I'm planning to explore in the future. I may try more realistic stroke models. I may try simulating different kinds of ink infiltration on canvas. It would also surely be fun to control a robot arm with a pen to draw the graphics out in the real world on a real canvas. So many possibilities, so much stippling joy! If you love them as well, I'd love to hear your suggestions and hope to have some enjoyable discussions.

Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.
