An important vote on the future weights and measures used in science, technology, commerce and even daily life happened here today. This morning’s agreement is the culmination of at least 230 years of wishing and labor by some of the world’s most famous scientists. The preface to the story features Galileo and Kepler. Chapter one involves Laplace, Legendre and many other late-18th-century French scientists. Chapter two includes Arago and Gauss. Some of the main figures of chapter three (which I would call “The Rise of the Constants”) are Maxwell and Planck. And the final chapter (“Reign of the Constants”) begins today and builds on the work of contemporary Nobel laureates like Klaus von Klitzing, Bill Phillips and Brian Josephson.
I had the good fortune to witness today’s historic event in person.
Today’s session of the 26th meeting of the General Conference on Weights and Measures included a vote on Draft Resolution A, which bases the definitions of the units on fundamental constants. The vote passed, and the draft resolution is now a binding international agreement.
While the vote was the culmination of the day, this morning we heard four interesting talks (“SI” stands for Système international d’unités, or the International System of Units):
It was a very interesting morning. Here are some pictures from the event:
Yes, these are really tattoos of the new value of the Planck constant.
So why do I write about this? There are a few reasons why I care about fundamental constants, units and the new SI.
Although I am deeply involved with units and fundamental constants in connection with their use in WolframAlpha and the Wolfram Language, I was wearing a media badge today because I have been the science adviser—and sometimes the best boy grip—for the forthcoming documentary film The State of the Unit.
Lastly, this post is a natural continuation of my blog from two years ago, “An Exact Value for the Planck Constant: Why Reaching It Took 100 Years,” which discussed in more detail the efforts to define the kilogram through the Planck constant.
A lot could be written about the high-precision experiments that made the 2019 SI possible. The hardest (and most expensive) part was the determination of the Planck constant. It involved half a dozen so-called Kibble balances that employ two macroscopic quantum effects (the Josephson and quantum Hall effects) and the famous “roundest objects in the world”: silicon shapes of unprecedented purity that are nearly perfect spheres. The State of the Unit will show details of these experiments and interviews with the researchers.
Before discussing in some more detail the event today and what this redefinition of our units means, let me briefly recall the beginnings of the story.
Something very important for modern science, technology and more happened on June 22, 1799. At the time, this day in Paris was called 4 Messidor an 7 according to the French Revolutionary calendar.
A nine-year journey led by the top mathematicians, physicists, astronomers, chemists and philosophers of France (including Laplace, Legendre, Condorcet, Berthollet, Lavoisier, Haüy, de Borda, Fourcroy, Monge, Prony, Coulomb, Delambre and Méchain) came to a natural end. It was carried out in the middle of the French Revolution; some of the main figures of the story lost their lives in it. And in the end, the metric system was born.
The journey started in earnest when, on April 17, 1790, Charles Maurice de Talleyrand-Périgord (Bishop of Autun) presented a plan to the French National Assembly to build a new system of measures based on nature (the Earth) and using the decimal system, which was not generally in use at the time. At the end of April 1790, the main French daily newspaper Gazette Nationale ou le Moniteur Universel devoted a large article to Talleyrand’s presentation.
The differing weights and measures throughout France had become a serious economic obstacle to trade and a means for the aristocracy to exploit peasants by silently changing the measures under their control. Not surprisingly, measuring land mattered most of all. And so the Department of Agriculture and Trade was the first to join (in 1790) Talleyrand’s call for new standardized measures.
A few months later in August 1790, the project of building a new system of measures became law. France at this time was still under the reign of Louis XVI.
To make physical realizations of the new measures, the group employed Louis XVI’s goldsmith, Marc-Etienne Janety, to prepare the purest platinum possible at the time.
And to determine the absolute size of the new standards, the length of the meridian was measured with unprecedented precision through a network of triangles between Dunkirk and Barcelona. The so-called Paris meridian goes right through Paris, and actually through the main room (through the middle of the largest window) of the Paris Observatory, which was built in 1671.
And to disseminate the new measures, the public had to be convinced of and educated about the advantages of using base 10 for calculations and trade. The National Convention discussed this topic and requested special research about the use of the decimal system (the dissertation shown is from 1793).
The creation of the new measures was not a secret project of scientists. Many steps were publicly announced and discussed, including through posters displayed throughout Paris and the rest of France (the poster shown is from 1793).
The best scientists of the time were employed either part time or full time in the making of the new metric system (see Champagne’s The Role of Five Eighteenth-Century French Mathematicians in the Development of the Metric System and Gillispie’s Science and Polity in France for more detailed accounts). Adrien-Marie Legendre, today better known through Legendre polynomials and the Legendre transform, spent a large amount of time in the Temporary Bureau of Weights and Measures. Here is a letter signed by him on the official letterhead of the bureau:
René Just Haüy, a famous mineralogist, was employed by the government to write a textbook about the new length, area, volume and mass units. His Instruction abrégée sur les mesures déduites de la grandeur de la terre: uniformes pour toute la République: et sur les calculs relatifs à leur division décimale (Abridged Instruction on the Measures Derived from the Size of the Earth: Uniform throughout the Republic: and on the Calculations Relating to Their Decimal Division) was first published in 1793 and became, in its 150-page abridged version, a bestseller that was republished many times throughout France.
After nearly 10 years, these efforts culminated in a rectangular platinum bar 1 meter in length, and a platinum cylinder 39 millimeters in width and height with a weight of 1 kilogram. These two pieces would become the definitive standards for France and were built by the best instrument makers of the time, Étienne Lenoir and Nicolas Fortin. The two platinum objects were the first and defining realization of what we today call the metric system. A few copies of the platinum meter and kilogram cylinder were made; all have since remained in the possession of the French government. Cities, municipalities and private persons could buy brass copies of the new standards. Here is one brass meter from Lenoir. (The script text under “METRE” reads “Egal a la dix-millionieme partie du quart du Méridien terrestre,” which translates to “Equal to the ten-millionth part of the quarter of the Earth’s meridian.”)
While the platinum kilogram was a cylinder, the first weights for the public were brass parallelepipeds.
The determination of the length of the meridian was done with amazing effort and precision. But a small error crept in, and the resulting meter deviated by about 0.02% (roughly 0.2 millimeters) from its ideal value. (For the whole story of how this happened, see Ken Alder’s The Measure of All Things.)
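To get a feel for the size of this error, one can compare the modern value of the meridian quadrant with the ten million meters it was meant to contain. A minimal Python sketch; the quadrant length used below is an assumed modern geodetic value of roughly 10,001,966 meters:

```python
# Length of the Paris meridian quadrant (pole to equator), in meters.
# Approximate modern geodetic value (an assumption for illustration).
quadrant_m = 10_001_965.7

# The meter was defined as one ten-millionth of the quadrant.
intended_meter = quadrant_m / 1e7   # what the meter "should" have been, in modern meters

# The realized 1799 meter is 1 modern meter, so its relative deviation
# from the intended definition is:
relative_error = intended_meter - 1.0
print(f"relative error: {relative_error:.2e}")
```

The result is about 2 × 10⁻⁴, i.e. roughly 0.2 millimeters per meter.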
GeodesyData["ITRF00","MeridianQuadrant"] 
Finally came June 22, 1799. Louis Antoine de Bougainville, the famous navigator, had a cold and so could not actively execute his responsibilities at the National Institute. Pierre-Simon Laplace, the immortal mathematician whose name we still see everywhere in modern science through his transform, his operator and his demon, had to take his place. Laplace gave a long speech to the Council of Five Hundred (Conseil des Cinq-Cents) and the Council of Ancients (Conseil des Anciens).
After his speech, Laplace himself; Lefèvre-Gineau, Monge, Brisson, Coulomb, Delambre, Haüy, Lagrange, Méchain, Prony and Vandermonde; the foreign commissioners Bugge (from Denmark), van Swinden and Aeneae (from the Batavian Republic), Tralles (from Switzerland), Ciscar and Pedrayes (from Spain), Balbo, Mascheroni, Multedo, Franchini and Fabbroni (from Italy); and the two instrument makers Lenoir and Fortin took coaches to the National Archives and deposited the meter and the kilogram in a special safe with four locks. The group also had certified measurements; the certificates were deposited as well.
Something similar has happened today, once again in Paris. Over the last three days, the General Conference on Weights and Measures (CGPM) held its 26th quadrennial meeting. Their first meeting 129 years ago established the meter and kilogram artifacts of 1889 as international standards. The culmination of today’s meeting was a vote on whether the current definition of the kilogram as a material artifact will be replaced by an exact value of the Planck constant. Additionally, the electron charge, the Boltzmann constant and the Avogadro constant will also get exact values (the speed of light has had an exact value since 1983).
Every few years, new values (with uncertainties) for the fundamental constants of physics have been published by CODATA. Back in 1998 the value of the electron charge was 1.602 176 462(63) × 10⁻¹⁹ C. The latest published value is 1.602 176 6208(98) × 10⁻¹⁹ C. This morning, it was decided that soon it will be exactly 1.602 176 634 × 10⁻¹⁹ C, and it will always be this, forever.
But what exactly does it mean for a fundamental constant to have an exact value? It is a matter of the defining units. When a unit (like a coulomb) is exactly defined, then determining the value of the charge of an electron becomes a precision measurement task (a path followed for 100+ years since Millikan’s 1909 droplet experiments). When the value of the elementary charge is exactly defined, realizing 1 coulomb becomes a task of precision metrology.
The situation is similar for the other constants: give the constant an exact defined value, and use this exact value to define the unit. Most importantly, the Planck constant will get an exact value that will define the kilogram, the last unit that is still defined through a man-made artifact.
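To illustrate the unit-constant duality in the simplest case: once e is fixed exactly, a coulomb is, by definition, a specific number of elementary charges. A quick Python check using the exact new-SI value:

```python
# Exact new-SI value of the elementary charge, in coulombs
e_charge = 1.602176634e-19

# One coulomb then corresponds to exactly 1/e elementary charges
charges_per_coulomb = 1 / e_charge
print(f"{charges_per_coulomb:.6e} elementary charges per coulomb")
```

Realizing 1 coulomb thus means realizing about 6.24 × 10¹⁸ elementary charges, which is exactly the "precision metrology" task mentioned above.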
Over the past decades, scientists have measured the Planck constant, the electron mass, the Boltzmann constant and the Avogadro constant through devices that were calibrated with base units of kilogram, ampere, kelvin and mole. In the future, the values of the constants will be exact numbers that define the units. The resulting system is the natural revision of the SI, more simply called the metric system. To emphasize the new, enlarged dependence on the fundamental constants of physics, this revision has been called the “new SI” (or, sometimes, the “constantsbased SI”).
Today, a revolution in measurement happened. Here is a slide from Bill Phillips’ talk:
Today’s vote completes a process foreseen by James Clerk Maxwell in 1871. This process started in 1892 when Michelson (known for the famous Michelson–Morley experiment on the nonexistence of the aether) connected the length of a meter with the wavelength of a cadmium line. The process advanced more recently in 1983 when the speed of light changed from a measured value to an exact constant of 299,792,458 meters per second that today defines the meter.
Reading through Laplace’s speech from June 22, 1799, is interesting. Here are five paragraphs from his speech:
“We have always felt some of the advantages that the uniformity of weights and measures will have. But from one country to another and in the very interior of each country, habit, prejudices were opposed on this point to any agreement, any reform.
“It was therefore necessary to find the principle in Nature, that all nations have an equal interest in observing and choosing it, so far as its convenience could determine all minds.
“This unity, drawn from the greatest and most invariable of bodies that man can measure, has the advantage of not differing considerably from the half-height and several other measures used in different countries; it thus accords with common opinion.
“Overcoming a multitude of physical and moral obstacles, they have acquitted themselves with a degree of perfection of which we had no idea until now. And in securing the measure they were asked for, they have collected and demonstrated, in the figure of the Earth and the irregularity of its flattening, truths as curious as they are new.
“But if an earthquake engulfed, if it were possible that a frightful blow of lightning would melt the preservative metal of this measure, it would not result, Citizen Legislators, that the fruit of so many works, that the general type of measures could be lost for the national glory, or for the public utility.”
Many parallels could be drawn to today. International trade without a common system of units is unimaginable. As in the 1790s, dozens of scientists around the world have labored for decades to measure the Planck constant and other constants as precisely as current technology allows, a precision unimaginable even 50 years ago. And as 219 years ago, defining the new units has been an international effort. And although the platinum meter and kilogram have endured well, and fortunately no earthquake or lightning has hit them, the new definitions are truly resistant to any natural catastrophe, and are even suitable for sharing with aliens.
Laplace addressed the Councils one week after van Swinden (one of the foreign delegates) had published the scientific and technical summary of all operations that were involved in the creation of the metric system.
Once the new system was established, its use would be mandated by the French government. Here is a letter from the end of 1799, written from the interior minister François de Neufchâteau to the Northern Department of France ordering the use of the new measures. Despite the government’s efforts, the metric system would not displace old measures for 40 years in France (we can blame this largely on Napoléon).
And one of the last professions to adopt the new measures was medicine. Only in January of 1802 was it even considered.
In contrast, the proposed revised SI was accepted today, and will take effect in just 185 days on May 20, 2019, World Metrology Day.
The 2019 SI will come in much more quietly. Some newspapers have occasionally reported on the experiments. But just as with the original SI, today not everybody is 100% happy with the new system, e.g. some chemists do not like decoupling the mole from the kilogram.
The story that leads to today also covers the making, in the 1880s, of an exact replica of the kilogram from the late 1790s, as well as a slightly improved version of the platinum meter bar. This kilogram, also called the International Prototype of the Kilogram (IPK), is still today the standard of the unit of mass. As such, it is the last artifact used to define a unit.
The metric system in its modern form is de facto used everywhere in science, technology, commerce, trade and daily life. All US customary measures are defined and calibrated through the metric standards. As a universal measurement standard, it was instrumental in quantifying and quantitatively describing the world.
“À tous les temps, à tous les peuples” (“For all times, for all people”) were the words that were planned for a commemorative medal that was suggested on September 9, 1799 (23 fructidor an 7), to be minted to honor the creation of the metric system. (Similar to the metric system itself, the medal was delayed by 40 years.) Basing our units on some of the most important fundamental constants of physics bases them on the deepest quantifying properties of our universe, and at the same time defines them for all times and for all people.
So what exactly is the new SI? The metric system started with base units for time, length and mass. Today, SI has seven base units: the second, the meter, the kilogram, the ampere, the kelvin, the mole and the candela. The socalled SI Brochure is the standard document that defines the system. The currently active definitions are:
s: the second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom
m: the meter is the length of the path traveled by light in a vacuum during a time interval of 1/299 792 458 of a second
kg: the kilogram is the unit of mass; it is equal to the mass of the IPK
A: the ampere is that constant current that, if maintained in two straight parallel conductors of infinite length and of negligible circular cross-sections and placed one meter apart in a vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newtons per meter of length
K: the kelvin, the unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water
mol: the mole is the amount of substance in a system that contains as many elementary entities as there are atoms in 0.012 kilograms of carbon12
cd: the candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10¹² hertz and that has a radiant intensity in that direction of 1/683 watt per steradian
Some notes on these official definitions:
The proposed definitions of the new SI, based on fixed values of the fundamental constants, are available from the draft of the next edition of the SI Brochure. First the importance and values of the constants are postulated.
The SI is the system of units in which:

the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom ΔνCs is 9 192 631 770 Hz
the speed of light in vacuum c is 299 792 458 m/s
the Planck constant h is 6.626 070 15 × 10⁻³⁴ J s
the elementary charge e is 1.602 176 634 × 10⁻¹⁹ C
the Boltzmann constant k is 1.380 649 × 10⁻²³ J/K
the Avogadro constant NA is 6.022 140 76 × 10²³ mol⁻¹
the luminous efficacy Kcd of monochromatic radiation of frequency 540 × 10¹² Hz is 683 lm/W
The definitions now read as follows:
s: The second, symbol s, is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s⁻¹.
m: The meter, symbol m, is the SI unit of length. It is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m s⁻¹, where the second is defined in terms of the caesium frequency ΔνCs.
kg: The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.626 070 15 × 10⁻³⁴ when expressed in the unit J s, which is equal to kg m² s⁻¹, where the meter and the second are defined in terms of c and ΔνCs.
A: The ampere, symbol A, is the SI unit of electric current. It is defined by taking the fixed numerical value of the elementary charge e to be 1.602 176 634 × 10⁻¹⁹, when expressed in the unit C, which is equal to A s, where the second is defined in terms of ΔνCs.
K: The kelvin, symbol K, is the SI unit of thermodynamic temperature. It is defined by taking the fixed numerical value of the Boltzmann constant k to be 1.380 649 × 10⁻²³ when expressed in the unit J K⁻¹, which is equal to kg m² s⁻² K⁻¹, where the kilogram, meter and second are defined in terms of h, c and ΔνCs.
mol: The mole, symbol mol, is the SI unit of amount of substance. One mole contains exactly 6.022 140 76 × 10²³ elementary entities. This number is the fixed numerical value of the Avogadro constant, NA, when expressed in the unit mol⁻¹, and is called the Avogadro number.
cd: The candela, symbol cd, is the SI unit of luminous intensity in a given direction. It is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10¹² Hz, Kcd, to be 683 when expressed in the unit lm W⁻¹, which is equal to cd sr W⁻¹, or cd sr kg⁻¹ m⁻² s³, where the kilogram, meter and second are defined in terms of h, c and ΔνCs.
Compared with the previous SI definitions, we observe:
That the values of the constants could be determined to a precision that allows the old definitions to be superseded, ensuring that the associated changes in the values of the units have no disruptive influence on any measurement, is a remarkable success of modern science.
Two hundred twenty years ago, not everybody agreed on the new system of units. The base (10 or 12) and the naming of the units were frequent topics of public discussion. Here is a full-page newspaper article proposing a slightly different system from the classic metric system.
Now let’s come back to the fundamental constants of physics.
From a fundamental physics point of view, it is not a priori clear that the fundamental constants are constant over great lengths of time (billions of years) and distance in the universe. But from a practical point of view, they seem to be as stable as anything could be.
What is the relative popularity of the various fundamental constants? The arXiv preprint server, with its nearly one million physics preprints, is a good data source for answering this question. Here is a breakdown of the frequencies with which the various fundamental constants are explicitly mentioned in the preprints. (The cosmological constant only became this popular over the last three decades, and it is in any case a constant unsuitable for defining units.)
There is a lot of philosophical literature, theoretical physics and numerology literature about fundamental constants and their meaning, values and status within the universe (or multiverse) and so on. Why do the constants have the values they have? Is humankind lucky that the constants have the values they have (e.g., minute changes in the values of the constants would not allow stars to form)? Fundamental constants allow many back-of-the-envelope calculations, as they govern all physics around us. Here is a crude estimation for the height of a giraffe in terms of the electron and proton masses, the elementary charge, the Coulomb constant κ, the gravitational constant G and the Bohr radius a₀:
(Subscript[m, e]/Subscript[m, p])^(1/20) ((κ e^2)/(G Subscript[m, p]^2))^(3/10) Subscript[a, 0]//UnitConvert[#,"Feet"]& 
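The same back-of-the-envelope estimate can be reproduced outside the Wolfram Language; here is a Python sketch using approximate CODATA values (the exponents 1/20 and 3/10 are taken from the formula above):

```python
# Approximate CODATA values
m_e   = 9.1093837015e-31   # electron mass, kg
m_p   = 1.67262192369e-27  # proton mass, kg
e     = 1.602176634e-19    # elementary charge, C
kappa = 8.9875517923e9     # Coulomb constant, N m^2 / C^2
G     = 6.67430e-11        # gravitational constant, m^3 / (kg s^2)
a_0   = 5.29177210903e-11  # Bohr radius, m

# crude estimate for the height of a large animal, in meters
height_m = (m_e / m_p) ** (1 / 20) * (kappa * e**2 / (G * m_p**2)) ** (3 / 10) * a_0
height_ft = height_m / 0.3048
print(f"{height_m:.2f} m  (about {height_ft:.1f} ft)")
```

The estimate comes out in the low single-digit meters, i.e. the right order of magnitude for a giraffe.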
This is not the place to review this literature of the theory and uses of fundamental constants, or to contribute to it. Rather, let’s use the Wolfram Language to see how fundamental constants can be used in actual computations.
Fundamental constants are tightly integrated into the computational subsystem of the Wolfram Language that deals with units, measures and physical laws. The fundamental constants are in the upperleft yellow box of the network graphic:
These are the five constants from the new SI and their current values expressed in SI base units:
siConstants = {Quantity[1, "SpeedOfLight"], Quantity[1, "PlanckConstant"], Quantity[1, "ElementaryCharge"], Quantity[1, "BoltzmannConstant"], Quantity[1, "AvogadroConstant"]}; 
Grid[Transpose[{siConstants /. 1 -> None, UnitConvert[siConstants, "SIBase"]}], Dividers -> Center, Alignment -> Left] // TraditionalForm
Physical constants are everywhere in physics. We can use the function FormulaData to get some examples.
siConstantNames=Alternatives @@ (Last/@ siConstants) 
formulas=Select[{#,FormulaData[#] /. Quantity[a_,b_. "ReducedPlanckConstant"^exp_.] :> Quantity[a/(2Pi),b "PlanckConstant"^exp]}&/@ FormulaData[], MemberQ[#, siConstantNames, ∞]&]; 
The left column shows the standard names of the formulas, and the right column shows the actual formulas. The symbols (E, T, …) are all physical quantity variables of the form PhysicalQuantity[...]. Here are the shortest formulas that contain a physical constant:
makeGrid[data_] := Grid[{Column[Flatten[{#}]], #2} & @@@ Take[SortBy[data, LeafCount[Last[#]] &], UpTo[12]], Dividers -> Center, Alignment -> Left]
makeGrid[formulas] 
And here are the formulas that contain at least three different physical fundamental constants:
makeGrid[Select[formulas, Length[Union[Cases[#,siConstantNames, ∞]]]>2&]] 
One of the many entity types included in the Wolfram Knowledgebase is fundamental constants. There is some discussion in the literature of what exactly constitutes a constant. Are any dimensional constants really physical constants, or are they “just” artifacts of our units? Do only dimensionless coupling constants, typically around 26 in the standard model of particle physics, describe the fabric of our universe? We took a liberal approach and included derived constants in our data, as well as anthropologically relevant values, such as the Sun’s mass, that are often standardized by various international bodies. This gives a total of more than 210 constants.
EntityValue["PhysicalConstant","EntityCount"] 
Expressed in SI base units, the constants span about 160 orders of magnitude. Converting all constants to Planck units gives dimensionless values for the constants and allows for a more honest and faithful representation.
toPlanckUnits[u_?NumberQ] := Abs[u]
toPlanckUnits[Quantity[v_, u_]] := Normal[UnitConvert[Abs[v] u /. {"Meters" -> 1/Quantity[1, "PlanckLength"], "Seconds" -> 1/Quantity[1, "PlanckTime"], "Kilograms" -> 1/Quantity[1, "PlanckMass"], "Kelvins" -> 1/Quantity[1, "PlanckTemperature"], "Amperes" -> 1/Quantity[1, "PlanckElectricCurrent"]}, "SIBase"] /. ("Meters" | "Seconds" | "Kilograms" | "Kelvins" | "Amperes") :> 1]
constantsInPlanckUnits=SortBy[Cases[{#1, toPlanckUnits@UnitConvert[#2,"SIBase"]}&@@@ EntityValue["PhysicalConstant",{"Entity", "Value"}],{_,_?NumberQ}],Last]; 
ListLogPlot[MapIndexed[Callout[{#2[[1]], #1[[2]]}, #[[1]]] &, constantsInPlanckUnits], PlotStyle -> PointSize[0.004], AspectRatio -> 1, GridLines -> {{}, {1}}]
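As a single-number sanity check of such a conversion, the electron mass in Planck units is simply its ratio to the Planck mass. A quick Python sketch; both values below are approximate CODATA numbers:

```python
# Approximate CODATA values, in kilograms
m_electron = 9.1093837015e-31
m_planck   = 2.176434e-8     # Planck mass

# The electron mass expressed in Planck units is this dimensionless ratio
m_e_planck_units = m_electron / m_planck
print(f"{m_e_planck_units:.4e}")
```

The tiny result (about 4 × 10⁻²³) is one reason the plot above spans so many orders of magnitude.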
Because the values of the constants span many orders of magnitude, one expects the first digits to obey (approximately) Benford’s law. The yellow histogram shows the digit frequencies of the constants, and the blue shows the theoretical predictions of Benford’s law.
Show[{Histogram[{First[RealDigits[N@#2]][[1]] & @@@ constantsInPlanckUnits, WeightedData[Range[9], Table[Log10[1 + 1/d], {d, 9}]]}, {1}, "PDF", Ticks -> {None, Automatic}, AxesLabel -> {"first digit", "digit frequency"}], Graphics[Table[Text[Style[k, Bold], {k, 0.03}], {k, 9}]]}]
Because of the different magnitudes and dimensions, it is not straightforward to visualize all constants. The function FeatureSpacePlot allows us to visualize objects that lie on submanifolds of higherdimensional spaces. In the following, we take the magnitudes and the unit dimensions of the constants into account. As a result, dimensionally equal or similar constants cluster together.
constants = Select[{#1, UnitConvert[#2, "SIBase"]} & @@@ EntityValue["PhysicalConstant", {"Entity", "Value"}], Not[StringMatchQ[#[[1, 2]], (___ ~~ ("Jupiter" | "Sun") ~~ ___)]] &];
siBaseUnits={"Seconds","Meters","Kilograms","Amperes","Kelvins","Moles","Candelas"}; 
constantsData = Cases[{Log10[Abs[N@QuantityMagnitude[#2]]], N@Normalize[Unitize[Exponent[QuantityUnit[#2], siBaseUnits]]], #1} & @@@ (constants /. {"Steradians" -> 1}), {_?NumberQ, _, _}];
allSIConstants=(Entity["PhysicalConstant",#]&/@ {"AvogadroConstant","PlanckConstant","BoltzmannConstant","ElementaryCharge", "SpeedOfLight","Cesium133HyperfineSplittingFrequency"}); 
The poor Avogadro constant is so alone. :) The reason for this is that not many named fundamental constants contain the mole base unit.
(FeatureSpacePlot[Callout[(1 + #2/100) #2, Style[#3, Gray]] & @@@ constantsData, ImageSize -> 1600, Method -> "TSNE", ImageMargins -> 0, PlotRangePadding -> Scaled[0.02], AspectRatio -> 2, PlotStyle -> PointSize[0.008]] /. ((# -> Style[#, Darker[Red]]) & /@ allSIConstants)) // Show[#, ImageSize -> 800] &
They are (not mutually exclusively) organized in the following classes of constants:
EntityClassList["PhysicalConstant"] 
As much as possible, each constant has the following set of properties filled out:
EntityValue["PhysicalConstant","Properties"] 
Most properties are selfexplanatory; the Lévy‐Leblond class might not be. A classic paper from 1977 classified the constants into three types:
Type A: physical properties of a particular object
Type B: constants characterizing whole classes of physical phenomena
Type C: universal constants
Here are examples of constants from these three classes:
typedConstants=With[{d=EntityValue["PhysicalConstant",{"Entity", "LevyLeblondClass"}]}, Take[DeleteCases[First/@ Cases[d, {_,#}],Entity["PhysicalConstant","EarthMass"]],UpTo[10]]&/@ {"C","B","A"}]; 
TextGrid[Prepend[PadRight[typedConstants, Automatic, ""] // Transpose, Style[#, Gray] & /@ {"Type C", "Type B", "Type A"}], Dividers -> Center]
Physicists’ most beloved fundamental constant is the fine-structure constant (or its inverse, with an approximate value of 137). As it is a genuinely dimensionless constant, it is not useful for defining units.
EntityValue[Entity["PhysicalConstant","InverseFineStructureConstant"],{"Value","StandardUncertainty"}] //InputForm 
There are many ways to express the fine-structure constant through other constants. Here are some of them, including the von Klitzing constant RK, the impedance of the vacuum Z0, the electron mass me, the Bohr radius a0 and some others:
(Quantity[None,"FineStructureConstant"]==#/.Quantity[1,s_String]:> Quantity[None,s])&/@ Entity["PhysicalConstant", "FineStructureConstant"]["EquivalentForms"]//Column//TraditionalForm 
This number puzzled and continues to puzzle physicists more than any other. And over the last 100 years, many people have come up with conjectured exact values of the fine-structure constant. Here we retrieve some of them using the "ConjecturedValues" property and display their values and the relative differences from the measured value:
alphaValues=Entity["PhysicalConstant","FineStructureConstant"]["ConjecturedValues"]; 
TextGrid[{Row[Riffle[StringSplit[StringReplace[#1, (DigitCharacter ~~ __) :> ""], RegularExpression["(?=[$[:upper:]])"]], " "]], "Year" /. #2, "Value" /. #2, NumberForm[Quantity[100 (N[UnitConvert[("Value" /. #2)/α, "SIBase"]] - 1), "Percent"], 2]} & @@@ DeleteCases[alphaValues, "Code2011" -> _], Dividers -> All, Alignment -> Left]
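As a simple instance of such a comparison, the relative difference between the measured inverse fine-structure constant and the once-popular conjecture that it is exactly 137 can be computed directly. A Python sketch; the measured value below is the CODATA 2018 one, an assumption relative to this post's date:

```python
# Measured inverse fine-structure constant (CODATA 2018, an assumed value)
alpha_inv_measured = 137.035999084

# Eddington-style conjecture: exactly 137
alpha_inv_conjectured = 137

relative_difference = (alpha_inv_measured - alpha_inv_conjectured) / alpha_inv_measured
print(f"{relative_difference:.3e}")   # a few parts in 10^4
```

A deviation of a few parts in 10⁴ is enormous by metrological standards, which is why this conjecture was abandoned long ago.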
Something of great importance for the fundamental constants is the uncertainty of their values. With the exception of the fundamental constants that now have defined values, fundamental constants are measured, and every experiment has an inherent uncertainty. In the Wolfram Language, any number can be precision-tagged; e.g., here is π to 10 digits:
π10=3.1415926535`10 
The difference to π is zero within an uncertainty/error of the order 10⁻¹⁰:
Pi - π10
Alternatively, one can use an interval to encode an uncertainty:
π10Int = Interval[{3.141592653,3.141592654}] 
Pi - π10Int
When using precision-tagged, arbitrary-precision numbers, as well as intervals, in computations, the precision (or interval width) is propagated and represents the precision of the result.
In the forthcoming version of the Wolfram Language, there will be a more direct representation of numbers with uncertainty, called Around (see episode 182 of Stephen Wolfram’s “Live CEOing” livestream).
For a natural (one could say canonical) use of this function, we select five constants that have exact values in the new SI:
newSIConstants=ToEntity/@ {c,h,e,k,Subscript[N, A]} 
These five fundamental constants are (of course) dimensionally independent.
DimensionalCombinations[{}, IncludeQuantities -> {Quantity[1, "SpeedOfLight"], Quantity[1, "PlanckConstant"], Quantity[1, "ElementaryCharge"], Quantity[1, "BoltzmannConstant"], Quantity[1, "AvogadroConstant"]}]
If we add the magnetic constant μ₀ and the electric constant ε₀, then we can form a two-parameter family of dimensionless combinations.
DimensionalCombinations[{}, IncludeQuantities -> Join[{Quantity[1, "SpeedOfLight"], Quantity[1, "PlanckConstant"], Quantity[1, "ElementaryCharge"], Quantity[1, "BoltzmannConstant"], Quantity[1, "AvogadroConstant"]}, {Quantity[1, "MagneticConstant"], Quantity[1, "ElectricConstant"]}]]
Let’s take the Planck constant. CODATA is an international organization that every few years takes all measurements of all fundamental constants and calculates the best mutually compatible values and their uncertainties. (Through various physical relations, many fundamental constants are related to each other and are not independent.) For instance, the values from the last 10 years are:
hValues={#1, {"Value","StandardUncertainty"}/.#2}&@@@Take[Entity["PhysicalConstant", "PlanckConstant"]["Values"], 5] 
PS: The strange-looking value is just the reduced fraction of the previously stated new exact value for the Planck constant when the value is expressed in units of J·s.
662607015/100000000*10^-34 
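The same reduction can be reproduced in any language with exact rationals; for example, with Python's fractions module (shown purely as a cross-check, not part of the original computation):

```python
from fractions import Fraction

# h = 6.62607015×10^-34 J·s; the rational prefactor reduces like this:
f = Fraction(662607015, 100000000)
print(f)  # 132521403/20000000
```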
Here are the proposed values for the four constants h, e, k and N_A:
{hNew,eNew,kNew,NAnew}=("Value"/.("CODATA2017RecommendedRevisedSI"/.#["Values"]))&/@ Rest[newSIConstants] 
Take, for instance, the last reported CODATA value for the Planck constant. The value and uncertainty are:
hValues[[4,2]] 
We convert this expression to an Around.
toAround[{value : Quantity[v_, unit_], unc : Quantity[u_, unit_]}] := Quantity[Around[v, u], unit]
toAround[HoldPattern[Around[Quantity[v_, unit_], Quantity[u_, unit_]]]] := Quantity[Around[v, u], unit]
toAround[{v_?NumberQ, u_?NumberQ}] := Around[v, u]
toAround[pc : Entity["PhysicalConstant", _]] := EntityValue[pc, {"Value", "StandardUncertainty"}] 
toAround[hValues[[4,2]]] 
Now we can carry out arithmetic on it, e.g. when taking the square root, the uncertainty will be appropriately propagated.
Sqrt[%] 
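Under the hood, this kind of propagation is first-order (Gaussian) error propagation: for y = f(x), σ_y = |f′(x)| σ_x. A minimal Python sketch of the square-root case; the class name and the numeric value here are illustrative stand-ins, not the Wolfram Language implementation or an official CODATA value:

```python
import math

class Approx:
    """A value with a standard uncertainty, using first-order error propagation."""
    def __init__(self, value, unc):
        self.value, self.unc = value, unc
    def sqrt(self):
        v = math.sqrt(self.value)
        # d/dx sqrt(x) = 1/(2 sqrt(x)), so the uncertainty scales by that factor
        return Approx(v, self.unc / (2 * v))

h = Approx(6.62607004e-34, 8.1e-42)  # illustrative value ± uncertainty
r = h.sqrt()
print(r.value, r.unc)
```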
Now let’s look at a more practical example: what will happen with μ0, the permeability of free space, after the redefinition? Right now, it has an exact value.
Entity["PhysicalConstant", "MagneticConstant"]["Value"] 
Unfortunately, keeping this value after defining h and e is not a compatible solution. We can see this by having a look at the equivalent forms of μ0.
Entity["PhysicalConstant", "MagneticConstant"]["EquivalentForms"] 
The second one shows that keeping the current value would imply an exact value for the fine-structure constant. The value would be this number:
UnitConvert[(2 h α)/(c e^2), "SIBase"] 
With[{e = eNew, h = hNew}, π/2500000 N/(A)^2 (c e^2)/(2 h)] 
N[1/%, 10] 
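For readers without the Wolfram Language at hand, the same check can be sketched in plain Python with the exact new-SI values (variable names are mine; the point is only the arithmetic):

```python
import math

c = 299792458                 # m/s, exact
h = 6.62607015e-34            # J s, exact in the new SI
e = 1.602176634e-19           # C, exact in the new SI
mu0_old = 4 * math.pi * 1e-7  # the pre-redefinition exact value of μ0

# the fine-structure constant implied by keeping μ0 exact: α = μ0 c e² / (2h)
alpha = mu0_old * c * e**2 / (2 * h)
print(1 / alpha)  # ≈ 137.036, but α is a measured quantity, not an exact one
```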
What one must do instead is consider the equation μ0 = 2hα/(c e²) as the defining equation for μ0. This shows that μ0 in the new SI will have a relative uncertainty equal to the uncertainty of the fine-structure constant.
With[{e=eNew,h=hNew,α=toAround[Entity["PhysicalConstant", "FineStructureConstant"]]}, α UnitConvert[(2h)/(c e^2),"SIBase"]] //toAround 
Before ending this blog (once in Versailles, I should have some nice French pastry in a café rather than yammer on for another ten pages about fundamental constants), let me quickly mention some fun number-theoretic consequences of the exact values for the constants.
With the constants of the new SI having exact values (rational numbers in a mathematical sense) in SI base units, compound expressions that are rational functions of these constants will unavoidably also have rational values.
Using the "EquivalentForms" property, we can quickly select some such constants:
Grid[Select[Flatten[Function[{p, ps}, {Text[CommonName[p]], #} & /@ ps] @@@ DeleteMissing[EntityValue["PhysicalConstant", {"Entity", "EquivalentForms"}], 1, 2], 1], DeleteCases[Cases[#[[2]], _String, ∞], siConstantNames] === {} && FreeQ[#, Pi, {0, ∞}] &], Dividers -> All, Alignment -> Left] 
Let’s have a look at the two constants from the macroscopic quantum effects that were used in the process of determining the value of the Planck constant: the Josephson constant and the von Klitzing constant.
Here is a photo of von Klitzing’s talk. Note the many digits given for the von Klitzing constant.
We start with the Josephson constant:
With[{e=eNew,h=hNew},(2e)/h ] //UnitConvert[#,"SIBase"]& 
When the value is expressed in SI base units, the corresponding decimal number has a leading digit of 4 and then a repeating sequence of digits of length about 6.3 million.
Short[rdJ=RealDigits[21362355120000000000000/44173801], 3] 
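As a cross-check, the exact rational value of K_J = 2e/h in SI base units can be recomputed with Python's fractions module; the reduced denominator is the 44173801 appearing above:

```python
from fractions import Fraction

e = Fraction(1602176634, 10**28)  # exact elementary charge in C
h = Fraction(662607015, 10**42)   # exact Planck constant in J s
KJ = 2 * e / h                    # Josephson constant in Hz/V
print(KJ.denominator)  # 44173801
print(float(KJ))       # ≈ 4.8359784e14
```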
This long period even contains the first seven digits of the Planck constant and of the elementary charge (which is, of course, pure numerology).
{SequencePosition[rdJ[[1,2]],{6,6,2,6,0,7,0}], SequencePosition[rdJ[[1,2]],{1,6,0,2,1,7,6}]} 
Now let’s have a look at the von Klitzing constant with a value (well known from the quantum Hall effect) of about 25.8 kΩ:
FromEntity[Entity["PhysicalConstant", "VonKlitzingConstant"]] 
UnitConvert[%,"SIBase"]//UnitSimplify 
The digits of the magnitude in base 10 are trivial to obtain:
RealDigits[QuantityMagnitude[%]] 
With[{e=eNew,h=hNew},h/e^2 ] //UnitSimplify 
As a rational number, what properties does it have?
Before answering this question, as a reminder about decimal fractions, let’s digress for a moment and have a look at the period of a fraction p/q. Using the function RealDigits, we can easily find the period for rational numbers (with small denominators). Here we use the example 5/49:
RealDigits[5/49] 
periodLength[r_] := If[MatchQ[#,{_Integer ..}],0,Length[Last[#]]]&[RealDigits[r][[1]]] 
periodLength[5/49] 
Formatting the decimal digits with an appropriate length shows the period visually.
N[5/49,800] //Pane[#,{350,300}]& 
Generically, the period of p/q can have length up to q−1. Here is a plot of the period of all p/q for p, q ≤ 100:
Graphics3D[{EdgeForm[], Table[Cuboid[{p - 1/2, q - 1/2, 0}, {p + 1/2, q + 1/2, periodLength[p/q]}], {p, 100}, {q, 100}]}, Axes -> True, BoxRatios -> {1, 1, 1/2}, AxesLabel -> {"p", "q", "period"}] 
If the period length of 1/q is q−1, then q is prime.
Select[Table[{q, periodLength[1/q]}, {q, 2, 100}], #[[2]] >= #[[1]] - 1 &] 
Select[Table[{q, periodLength[2/q]}, {q, 2, 100}], #[[2]] >= #[[1]] - 1 &] 
Select[Table[{q, periodLength[3/q]}, {q, 2, 100}], #[[2]] >= #[[1]] - 1 &] 
So what is the period of the von Klitzing constant? Here is a small function that calculates the period of a proper fraction less than 1 in base 10. The function returns the numbers of non-repeating and repeating digits, respectively.
decimalDigitCount[b_Rational?(# < 1 &)] :=
 Module[{q = FactorInteger[Denominator[b]], r, m, c},
  If[Complement[First /@ q, {2, 5}] === {},
   {Max[Last /@ q], 0},
   {c = Max[Last /@ Cases[q, {2 | 5, _}]]; If[c == -∞, 0, c],
    r = Times @@ Power @@@ Cases[q, {Except[2 | 5], _}];
    m /. (Solve[Mod[10^m, r] == 1 ∧ m > 0, m, Integers] /. _C -> 1)[[1]]}]] 
The period is about 344.2 trillion. (This means don’t try to call RealDigits on the quantity magnitude of the von Klitzing constant in the new SI base units.)
decimalDigitCount[vK=55217251250000000000/213914085371316325812] 
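The same computation translates directly to other languages: strip the factors of 2 and 5 from the denominator (their maximal exponent gives the non-repeating part), then find the multiplicative order of 10 modulo what remains. A Python sketch, fine for small denominators; the brute-force order search below would of course be hopeless for the von Klitzing denominator:

```python
from math import gcd

def decimal_digit_count(p, q):
    """(# of non-repeating digits, period length) of p/q in base 10, 0 < p/q < 1."""
    q //= gcd(p, q)
    pre, r = 0, q
    for f in (2, 5):            # the prime factors of the base 10
        e = 0
        while r % f == 0:
            r //= f
            e += 1
        pre = max(pre, e)
    if r == 1:
        return (pre, 0)         # terminating decimal
    k, t = 1, 10 % r            # multiplicative order of 10 mod r
    while t != 1:
        t = t * 10 % r
        k += 1
    return (pre, k)

print(decimal_digit_count(5, 49))  # (0, 42)
print(decimal_digit_count(1, 12))  # (2, 1): 1/12 = 0.08(3)
```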
As a quick check, we implement a function that calculates the nth digit of a rational number in a given base.
NthDigit[r : Rational[p_Integer?Positive, q_Integer?Positive]?(0 < # < 1 &), n_Integer?Positive, base_Integer?Positive] := Floor[base Mod[p PowerMod[base, n - 1, q], q]/q] 
Indeed, the period of the decimal fraction of the fractional part of the von Klitzing constant in SI base units is that large:
periodvK=344229188825340; 
Table[NthDigit[vK,j, 10],{j, 20}] 
Table[NthDigit[vK,j+periodvK, 10],{j, 20}] 
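The modular-exponentiation trick behind NthDigit works in any language with fast modular powers; Python's built-in pow takes a modulus argument, so the same one-liner looks like this:

```python
def nth_digit(p, q, n, base=10):
    """The n-th digit (n >= 1) after the radix point of p/q, for 0 < p/q < 1."""
    return base * (p * pow(base, n - 1, q) % q) // q

# 5/49 = 0.1020408163265...
print([nth_digit(5, 49, j) for j in range(1, 11)])  # [1, 0, 2, 0, 4, 0, 8, 1, 6, 3]
```

Since the period of 5/49 is 42, digit 43 repeats digit 1, and so on.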
This concludes my “short” report from Versailles, where just hours ago the basis for the metric system was officially redefined (and, in turn, the US customary system of weights and measures, because it is tightly coupled to the SI). Exactly 80,135 days after two precious platinum artifacts were delivered to the French National Archives, the values of five fundamental constants (the speed of light, the Planck constant, the elementary charge, the Boltzmann constant and the Avogadro constant) were delivered in the form of five rational numbers to mankind. Millions of digital copies of these numbers will exist on the internet (and in a few human memories), and no earthquake or lightning could ever delete them all. The new definitions could last for a long, long time, as artifacts (which themselves were once predicted to last 10,000 years) are no longer involved. Today’s vote does not mean that nothing will ever change again. At the bottom of the chain of definitions, the unit of time—the second—is defined through about nine billion radiation cycles. Replacing this definition with some higher-frequency radiation, and thus a larger, more precisely countable number, might be the next change. But a new definition of the second will still be based on a fundamental constant of physics.
During NaNoWriMo, authors are typically categorized into two distinct types: pantsers, who “write by the seat of their pants,” and plotters, who are meticulous in their planning. While plotters are likely writing from preplanned outlines, pantsers may need some inspiration.
That’s where WolframAlpha comes in handy.
WolframAlpha can help you name your characters. By typing in “name” plus the name itself, you can find out all sorts of info: when the name was most popular, how common it is and more. If you place a comma between two names, you can compare the two.
For example, let’s say you’re writing a road-trip story featuring two women named “Sarah” and “Sara.” You type in “name sarah, sara” and see the following:
WolframAlpha shows that both names were common around the same time, but one is more likely for a woman who’s just slightly older. You can make Sara the older of the two by a hair, and her age can be a point of characterization. The extra year makes her extra wise—or extra bossy.
What if you want to write about a male character? Let’s explore two possibilities, Kevin and Alan.
By viewing the charts in WolframAlpha, we can see that one name is much more common, but both skew older. What if you try searching for another name, like Dominic?
Additionally, we can see that “Dominic” is a name with a history, with WolframAlpha showing tidbits such as the fact that it was often used for boys born on Sundays. If you’re a pantser, this information is something to file away for later.
Of course, you can always look for popular names if you’re setting your work in the modern day:
So, Sarah and Sara are on their trip. Let’s say that they’re small-town Southern girls who happened to meet because of their shared first name, but you’re not sure what town fits the bill. You can look for cities in North Carolina with a population of under 2,000 people:
From there, you can calculate the price of gas and other costs of living. The small details you uncover can help with worldbuilding, particularly if the story is set slightly in the past. You can also compare facts about different cities:
If spontaneous Sarah didn’t plan for her trip as well as staid Sara, then you can calculate just how off the mark she was—particularly with an international journey.
WolframAlpha provides currency conversions, so if the ladies’ trip somehow takes them to the UK, then you can determine just how much their trip savings are currently worth:
Even beyond finances or travel planning, WolframAlpha can help ground a plot in reality. Let’s say Sarah and Sara end up at a pub. How many bottles of hard cider can Sarah enjoy before things go pear-shaped?
The process of figuring out the physical details of your characters can help you visualize them better too!
Beyond providing reallife calculations that are useful in everyday situations, WolframAlpha can help to add a touch of realism to genre fiction. For example, going back to our friend Dominic… well, he’s a vampire. He was born in 1703, on a Sunday to tie in with his name. But on what date, exactly? We can view our 1703 calendar with a query of “January 1703”:
From this screen, we can also see his age relative to today, putting him at well over three centuries old. We can also see that there was a full Moon on January 3. Could you use this as a plot point? Perhaps he’s stronger against sunlight than the average vampire due to the full Moon reflecting more of the Sun’s rays.
If you’re a pantser, these sorts of searches can be extra helpful for inspiring new plot or character developments. While you may not have initially set out to create a full Moon–enhanced vampire, name searches and looking up past events lit that spark of inspiration.
Realistic physical properties can be especially helpful for sci-fi writers, particularly those writing hard sci-fi. While there are some example WolframAlpha searches for sci-fi “entertainment” on this page, many of which relate to preexisting genre media, you can also use astronomy searches to enhance your sci-fi setting.
In a previous search, “Emma” came up as a popular name. Maybe it’s still popular when, decades in the future, we’ve colonized Mars.
In this sci-fi future, we’ve normalized light-speed travel. To figure out Emma’s commute, you can use formulas to measure the amount of time it would take to travel from place to place. If Emma works at a Martian university, then you can see how long it would take for a light-speed bus to shuttle her to the office:
She would hardly have time to read through her newsfeed on her holo-headset before the bus dropped her off at work!
For science fiction plots set in a time period closer to today’s tech, you can calculate totals using WolframAlpha’s many included formulas. For example, you can figure out volts and amps for a maker using Ohm’s law, or even run through a linear regression or two for a fictional AI assistant.
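Both of the calculations just mentioned are simple enough to sketch without any engine at all; here are hypothetical Python stand-ins for the kind of arithmetic Wolfram|Alpha performs for these queries (the sample data is made up):

```python
def ohm_voltage(current_amps, resistance_ohms):
    """Ohm's law: V = I * R."""
    return current_amps * resistance_ohms

def linear_fit(xs, ys):
    """Least-squares slope and intercept for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

print(ohm_voltage(2, 5))                 # 10 volts through 5 ohms at 2 amps
print(linear_fit([0, 1, 2], [1, 3, 5]))  # (2.0, 1.0): y = 2x + 1
```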
Because WolframAlpha is a “computation engine,” it also provides general facts that can help you come up with ideas for characters—and monsters.
For horror writers, the bare facts can provide a perfect starting point for tweaking reality ever so slightly into the uncanny valley.
For example, let’s say you have werewolves in your story. These aren’t friendly werewolves, though: they’re the eldritch kind that give passersby the heebie-jeebies. Going by the “one small tweak” rule, you can compare the number of teeth in a dog’s mouth to the number in a human’s mouth:
What if your werewolves have too-toothy smiles because they have a few too many incisors, matching the number found in a wolf’s mouth? Are dentists hunted down if they discover the truth?
Mystery writers can also discover interesting things on WolframAlpha, from chemical compositions to ciphers. With the latter, there are several word-puzzle tools you can use to create clues for a crime scene. For example, by using underscores in your searches, you can build Hangman-like messages from blanks and letters:
WolframAlpha also has a text-to-Morse converter, allowing you to convert normal text to dots and dashes. Perhaps a sidekick is attempting to get in touch with a wily detective without kidnappers noticing what’s going on:
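For intuition, text-to-Morse conversion is just a table lookup; here is a toy Python version with a deliberately partial alphabet (the real converter of course covers the full character set):

```python
# partial International Morse table, just enough for the examples below
MORSE = {"E": ".", "H": "....", "L": ".-..", "O": "---", "P": ".--.", "S": "..."}

def to_morse(text):
    """Convert text to Morse, silently skipping characters not in the table."""
    return " ".join(MORSE[ch] for ch in text.upper() if ch in MORSE)

print(to_morse("SOS"))   # ... --- ...
print(to_morse("help"))  # .... . .-.. .--.
```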
For a mystery set in the past, you can use a date search to determine the sunrise, sunset and weather patterns of any given day. While this data is invaluable for historical writers—the books they write are all about historical accuracy, after all—it can also help you determine how an old-timey crime might have gone down. For example, the witness couldn’t have seen the Sun peeking through the blinds at 5:35am because sunrise hadn’t happened yet:
If you’re trying to come up with ideas on the fly, having an all-in-one spot to search for facts and figures can be invaluable. For more topic suggestions, check out this page to see other example search ideas separated into categories.
Hopefully these ideas have sparked your interest, whether for your own personal NaNo journey or for a library or classroom-based NaNoWriMo project. Feel free to share this post with other writers or educators if you’ve found it to be useful. And even after November draws to a close, continue mining WolframAlpha for story ideas. Write on!
Join Wolfram U for Wolfram Technology in Action: Applications & New Developments, a three-part web series showcasing innovative applications in the Wolfram Language.
Newcomers to Wolfram technology are welcome, as are longtime users wanting to see the latest functionality in the language.
The series is modeled after the three different tracks offered at our recent Wolfram Technology Conference, covering data science and AI (November 14), engineering and modeling (November 28) and mathematics and science (December 12). Each webinar will feature presentations shared at the Wolfram Technology Conference, so if you weren’t able to attend this year, you can still take part in some of the highlights.
Additional presentations will be given live during each webinar by Wolfram staff scientists, application developers, software engineers and Wolfram Language users who apply the technology every day to their business operations and research.
At the Data Science and AI webinar on November 14, learn how to build applications using models from the Wolfram Neural Net Repository, including an overview of some of the newest models available for classification, feature extraction, image processing, speech, audio and more. We will also show some applications built by students from the Wolfram Summer Programs, and we’ll perform real-time examples of model training with data.
The Data Science and AI webinar will conclude with a real-world example applying computer vision tasks to digital pathology for the purposes of cancer diagnosis. Get a preview of the webinar content and learn more about Summer School projects by visiting the Wolfram Community posts on Rooftop Recognition for Solar Energy Potential and Using Machine Learning to Diagnose Pneumonia from Chest X-Rays.
You can join any or all of the webinars to benefit from the series. You only need to sign up once to save your seat for this webinar and the sessions that follow. When you sign up, you’ll receive an email confirming your registration, as well as reminders for upcoming sessions.
Don’t miss this opportunity to engage with other users and experts of the Wolfram Language!
This year I had the honor of composing the competition questions, in addition to serving as live commentator alongside trusty co-commentator (and Wolfram’s lead communications strategist) Swede White. You can view the entire recorded livestream of the event here—popcorn not included.
Right: Swede White and the author commentating. Left: Stephen Wolfram and the author.
This year’s competition started with a brief introduction by competition room emcee (and Wolfram’s director of outreach and communications) Danielle Rommel and Wolfram Research founder and CEO Stephen Wolfram. Stephen discussed his own history with (noncompetitive) livecoding and opined on the Wolfram Language’s unique advantages for livecoding. He concluded with some advice for the contestants: “Read the question!”
After a short delay, the contestants started work on the first question, which happened to relate to Stephen Wolfram’s 2002 book A New Kind of Science. Stephen dropped by the commentators’ table to offer his input on the question and its interesting answer, an obscure tome from 1692 with a 334character title (Miscellaneous Discourses Concerning the Dissolution and Changes of the World Wherein the Primitive Chaos and Creation, the General Deluge, Fountains, Formed Stones, SeaShells found in the Earth, Subterraneous Trees, Mountains, Earthquakes, Vulcanoes, the Universal Conflagration and Future State, are Largely Discussed and Examined):
The second question turned out to be quite challenging for our contestants—knowledge of a certain function, DiscretizeGraphics, was essential to solving the question, and many contestants had to spend precious time tracking down this function in the Wolfram Language documentation.
The contestants stumbled a bit in interpreting the third question, but they figured it out relatively quickly. However, some technical issues led to some exciting drama as the judges deliberated on who to hand the third-place point to. Stephen made a surprise return to the commentators’ table to talk about astronaut Michael Foale’s unique connection to Wolfram Research and Mathematica. I highly recommend reading Michael’s fascinating keynote address, Navigating on Mir, given at the 10th-anniversary Mathematica Conference.
Wolfram Algorithms R&D department researcher José Martín-García joined the commentators’ table for the fourth question. José worked on the geographic computation functionality in the Wolfram Language, and helped explain to our audience some of the technical aspects of this question, such as the mathematical concept of a geometric centroid. Solving this question involved the same DiscretizeGraphics function that tripped up contestants on question 2, but it seems that this time they were prepared, and produced their solutions much more quickly.
The fifth question was, lengthwise, the most verbose in this year’s competition. For every question, the goal is to provide as much clarity as possible regarding the expected format of the answer (as well as its content), which this question demonstrates well. The last sentence is particularly important, as it specifies that the pie “slices” are expected to be in ascending order by size, which ensures that the pie chart looks the same as the expected result. This aspect took our contestants a few tries to pin down, but they eventually got it.
The sixth question makes use of not only the Wolfram Language’s unique symbolic support for neural networks, but also the recently launched Wolfram Neural Net Repository. You can read more about the repository in its introductory blog post.
This particular neural network, the Wolfram English Character-Level Language Model V1, is trained to generate English text by predicting the most probable next character in a string. The results here might be improbable to hear from President Lincoln’s mouth, but they do reflect the fact that part of this model’s training data consists of old news articles!
For the seventh and last question of the night, our judges decided to skip ahead to the tenth question on their list! We hadn’t expected to get to this question in the competition and so hadn’t lined up an expert commentator. But as it turns out, José Martín-García knows a fair bit about eclipses, and he kindly joined the commentators’ table on short notice to briefly explain eclipse cycles and the difference between partial and total solar eclipses. Check out Stephen’s blog post about the August 2017 solar eclipse for an in-depth explanation with excellent visualizations.
(The highlighted regions here show the “partial phase” of each eclipse, which is the region in which the eclipse is visible as a partial eclipse. The Wolfram Language does not yet have information on the total phases of these eclipses.)
At the end of the competition, the third-place contestant, going under the alias “AieAieAie,” was unmasked as Etienne Bernard, lead architect in Wolfram’s Advanced Research Group (which is the group responsible for the machine learning functionality of the Wolfram Language, among other wonderful features).
Etienne Bernard and Carlo Barbieri
The contestant going by the name “Total Annihilation” (Carlo Barbieri, consultant in the Advanced Research Group) and the 2015 Wolfram Innovator Award recipient Philip Maymin tied for second place, and both won limited-edition Tech CEO minifigures!
Left: Philip Maymin. Right: Tech CEO minifigure.
The first-place title of champion and the championship belt (as well as a Tech CEO minifigure) went to the contestant going as “RiemannXi,” Chip Hurst!
I wanted to specifically address potential confusion regarding question 3, Astronaut Timelines. This is the text of the question:
Highly skilled programmer Philip Maymin was one of our contestants this year, and he was dissatisfied with the outcome of this round. Here’s a solution to the question that produces the expected “correct” result:
counts = Counts@Flatten[EntityValue["MannedSpaceMission", "Crew"]]; TimelinePlot[ EntityValue[Keys@TakeLargest[counts, 6], EntityProperty["Person", "BirthDate"], "EntityAssociation"]] 
And here’s Philip’s solution:
TimelinePlot@ EntityValue[ Keys[Reverse[ SortBy[EntityValue[ Flatten@Keys@ Flatten[EntityClass["MannedSpaceMission", "MannedSpaceMission"][ EntityProperty["MannedSpaceMission", "PrimaryCrew"]]], "MannedSpaceMissions", "EntityAssociation"], Length]][[;; 6]]], "BirthDate", "EntityAssociation"] 
Note the slightly different approaches—the first solution gets the "Crew" property (a list) of every "MannedSpaceMission" entity, flattens the resulting list and counts the occurrences of each "Person" entity within that, while Philip’s solution takes the aforementioned list and checks the length of the "MannedSpaceMission" property for each "Person" entity in it. These are both perfectly valid techniques (although Philip’s didn’t even occur to me as a possibility when I wrote this question), and in theory should both produce the exact same result, as they’re both accessing the same conceptual information, just through slightly different means. But they don’t, and it turns out Philip’s result is actually the correct one! Why is this?
The primary reason for this discrepancy boils down to a bug in the Wolfram Knowledgebase representation of the STS-27 mission of Space Shuttle Atlantis. Let’s look at the "Crew" property for STS-27:
Entity["MannedSpaceMission", "STS27"]["Crew"] 
Well, that’s clearly wrong! There’s a "Person" entity for Jerry Lynn Ross in there, but it doesn’t match the entity that canonically represents him within the Knowledgebase. I’ve reported this inaccuracy, along with a few other issues, to our data curation team, and I expect it will be addressed soon. Thanks to Philip for bringing this to our attention!
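The logic of the two solutions, and why a non-canonical entity entry breaks their agreement, can be sketched with toy data in Python (the mission and crew names here are made up):

```python
from collections import Counter

# mission -> crew lists, standing in for the "Crew" property
missions = {"M1": ["A", "B"], "M2": ["A", "C"], "M3": ["A", "B"]}

# first approach: flatten all crews and count occurrences per person
flat = Counter(p for crew in missions.values() for p in crew)

# second approach: group missions per person and take the list lengths
per_person = {}
for name, crew in missions.items():
    for p in crew:
        per_person.setdefault(p, []).append(name)
lengths = {p: len(ms) for p, ms in per_person.items()}

assert flat == Counter(lengths)  # the two approaches agree...

# ...until one mission lists a person under a duplicate, non-canonical key
missions["M3"] = ["A", "B-duplicate"]
flat2 = Counter(p for crew in missions.values() for p in crew)
print(flat2["B"])  # 1 instead of 2: the counts now diverge
```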
The inaugural Wolfram Language Livecoding Competition took place at the 2016 Wolfram Technology Conference, and the following year’s competition in 2017 was the first to be livestreamed. We held something of a test-run for this year’s competition at the Wolfram Summer School in June, for which I also composed questions and piloted an experimental second, concurrent livestream for behind-the-scenes commentary. At this year’s Technology Conference we merged these two streams into one, physically separating the contestants from the commentators to avoid “contaminating” the contestant pool with our commentary. We also debuted a new semi-automated grading system, which eased the judges’ workload considerably. Each time we’ve made some mistakes, but we’re slowly improving, and I think we’ve finally hit upon a format that’s both technically feasible and entertaining for a live audience. We’re all looking forward to the next competition!
David’s submission takes first place in the category of creepiness—and was timely, given the upcoming Halloween holiday. The judges were impressed by its visual impact:
c=Flatten@DeleteCases[WebImageSearch["eye iris","Images",MaxItems->120],$Failed];ImageCollage[ConformImages[c[[1;;Length[c]]]]] 
David had a character to spare with this submission, so he had no reason to shorten it. But he could have saved 20 characters by eliminating code that was left over from his exploration process. I’ll leave it as an exercise for the interested reader to figure out which characters those are.
Abby’s submission recreates an image by assembling low-resolution flag images. In order to squeak in at the 128-character limit, she cleverly uses UN flags. Over half of the code is grabbing the flag and dress images; the heart of the rendering work is a compact 60-character application of ImagePartition, Nearest and ImageAssemble:
f=ImageResize[#,{4,4}]&/@CountryData["UN","Flag"];{i=Entity["Word", "dress"]["Image"],ImageAssemble@Map[Nearest[f,#][[1]]&,ImagePartition[i,4],{2}]} 
This OneLiner derives from an activity in Abby’s computational thinking group at Torrey Pines High School. You can download a notebook that describes the activity by clicking the Flag Macaw link on this page.
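The ImagePartition/Nearest/ImageAssemble pipeline is a classic photomosaic: split the target into cells, then swap each cell for the tile nearest in color. A toy grayscale Python sketch of the same idea; 2D lists of pixel values stand in for real images, and the matching metric (mean brightness) is a simplification of Nearest's color distance:

```python
def mean(img):
    """Mean brightness of a grayscale image given as a 2D list of pixel values."""
    return sum(sum(row) for row in img) / (len(img) * len(img[0]))

def mosaic(cells, tiles):
    """Replace each cell with the tile whose mean brightness is nearest."""
    return [[min(tiles, key=lambda t: abs(mean(t) - mean(cell)))
             for cell in row] for row in cells]

dark = [[0, 0], [0, 0]]
light = [[255, 255], [255, 255]]
# one row of two 2x2 cells: one dark-ish, one bright
cells = [[[[10, 12], [14, 10]], [[250, 240], [245, 255]]]]
result = mosaic(cells, [dark, light])
print(result)  # the dark tile, then the light tile
```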
Take a second to consider what this OneLiner does: gets the list of 164,599 city entities in the Wolfram Language, searches the web for an image of each one, applies the ResNet neural network to each image to guess where it was taken and compares that location with the geotagging information in the image to see how precise the neural network’s prediction is. This may well be an honorable mention… but we’d have to wait 14 hours for the code to evaluate in order to find out:
Mean[GeoDistance[NetModel["ResNet101 Trained on YFCC100m Geotagged Data"]@WebImageSearch[#[[1]]][1,1],#]&/@EntityList["City"]] 
I suspect David was fishing for a dishonorable mention with this submission that creates what one judge called “the cruelest game of Where’s Waldo ever invented.” Your task is to find the black disk among the randomly colored random polygons:
Graphics[{Table[{RandomColor[],Translate[RandomPolygon["Convex"],{i,j}+RandomReal[{E,E},2]]},{i,99},{j,99}],Disk[{9E,9E},1/E]}] 
What? You can’t find the disk?? Here’s the output again with the disk enlarged:
Graphics[{Table[{RandomColor[],Translate[RandomPolygon["Convex"],{i,j}+RandomReal[{E,E},2]]},{i,99},{j,99}],Disk[{9E,9E},5/E]}] 
Note David’s extensive use of the one-letter E to save characters in numeric quantities.
The uniqueness and creativity of this idea moved the judges to award third place to this OneLiner that makes a table of words that are pronounced like letters. It’s fun, and it opens the door to further explorations, such as finding words (like “season”) whose pronunciations begin with a letter name:
w = # -> WordData[#, "PhoneticForm"] &; a = w /@ Alphabet[]; p = w /@ WordList[]; Grid@ Table[{a[[n]], If[a[[n, 2]] === #[[2]], #, Nothing] & /@ p}, {n, 26}] 
Like Abby’s Flag Mosaic submission, this OneLiner also derives from an activity in Abby’s computational thinking group at Torrey Pines High School. You can download a notebook that describes the activity by clicking the Alpha Words link on this page.
This was one of the most timely and shortest OneLiners we’ve yet seen. It answers a question that arose just hours before the end of the competition.
Every Wolfram Technology Conference includes a conference dinner at which Stephen Wolfram hosts an “ask me anything” session. One of the questions at this year’s dinner was “What is the age and gender distribution of conference attendees?”
To answer the age part of that question, Isaac took photos of all of the tables at the dinner, used FacialFeatures to estimate the ages of the people in the photos and made a histogram of the result. We can’t vouch for the accuracy of the result, but it seems plausible:
FacialFeatures["Age"]/@Values[Databin@"ytgvoXyH"]//Flatten//Histogram 
Here are the first three photos in the Databin:
Take[Values[Databin@"ytgvoXyH"],3] 
Congratulations, Isaac, on a brilliant demonstration of computational thinking with Wolfram technology.
Our first-place winner encapsulated an homage to Joseph Weizenbaum’s natural language conversation program, ELIZA, in a single tweet. Philip’s Eliza often responds with off-the-wall phrases that make it seem either a few cards short of a full deck or deeply profound. But it was the judges’ first conversation with Eliza, which eerily references current world events, that clinched first place:
While[StringQ[x=InputString@HELP],Echo@NestWhile[#<>y&,x<>" ",StringFreeQ[",.\" ",y=(e=NetModel)[e[][[7]]][#,"RandomSample"]]&]] 
Weizenbaum was aghast that people suggested that ELIZA could substitute for human psychotherapists. The program could not and was never intended to heal patients with psychological illnesses. Philip’s Eliza, however, could well drive you crazy.
There were 14 submissions to this year’s competition, all of which you can see in this notebook. Thank you, participants, for showing us once again the power and economy of the Wolfram Language.
Last week, Wolfram hosted individuals from across the globe at our annual Wolfram Technology Conference. This year we had a packed program of talks, training, and networking and dining events, while attendees got to see firsthand what’s new and what’s coming in the Wolfram tech stack from developers, our power users and Stephen Wolfram himself.
The conference kicked off with Stephen’s keynote speech, which rang in at three and a half hours of live demonstrations of upcoming functions and features in Version 12 of the Wolfram Language. Before getting started, Stephen fired up Version 1 of Mathematica on a Macintosh SE/30—it’s remarkable that code written in Version 1 still runs in the newest versions of the Wolfram Language. Stephen also shared with us the latest developments in WolframAlpha Enterprise, introduced WolframAlpha Notebooks, new cloud functionalities, the Wolfram Notebook Archive and the local Wolfram Engine, a way for developers to easily access the Wolfram Language without barriers to entry.
Most exciting during Stephen’s keynote was the litany of new features coming in Version 12 of the Wolfram Language. Stephen ticked through them starting alphabetically—a is for anatomy, b is for blockchain, c is for compiler and so forth. A few of the many highlights included:
The upcoming release of Version 12 will bring with it not only new functions, but also improvements to interfaces, interoperability with other programming languages and core language efficiencies. If you’re interested in seeing Version 12 being designed and built firsthand, be sure to watch Stephen’s “Live CEOing” series of livestreams.
For the second year, Wolfram hosted and livestreamed a live-coding championship where our internal experts and conference guests competed to see who had the best Wolfram Language chops. Now an annual tradition, the competition is a fun way to unwind after a day full of talks and an evening of networking. Each contestant is given a coding challenge, and the first to accurately solve the problem is awarded points. Challenges utilize the full range of capabilities in the Wolfram Language, including built-in data, geometric computations and even data science. It was impressive to see how quickly a complicated problem could be solved.
This year, our winner was Chip Hurst, a Wolfram expert who is currently involved in cutting-edge developments in 3D printing in biotech applications. Congratulations, Chip!
Each year at the Technology Conference, Wolfram recognizes outstanding individuals whose work exemplifies excellence in their fields. Stephen recognized eight individuals this year, from educators to engineers to computational mathematicians. This year, Wolfram honored:
We’ll be back with a post about the winners of our annual one-liner competition. This year’s conference was another one for the books, and we look forward to seeing everyone back next year!
Join us Wednesday, October 17, 2018, from 9:30–11:30pm CT for an exciting adventure in live-coding! During our annual Wolfram Technology Conference, we put our internal experts and guests to the test. Coding questions ranging from physics to pop culture, image processing to visualizations, and all other manner of challenges will be posed to participants live.
Who will take home the trophy belt this year? A senior developer from our Machine Learning group? A high-school kid with serious coding chops? You? Now in its third year, the Wolfram Livecoding Championship promises to be bigger and better than ever. The event is concurrently livestreamed on Twitch and YouTube Live, so if you’re not able to be here in person, we’d love to see you on the stream. The livestream will also be available on Stephen Wolfram’s Twitch channel, with a special livestreamed introduction from Stephen himself. See last year’s competition and get a taste of what the event has to offer:
New this year will be running commentary on competitors’ progress as they each take their own unique approach to problem solving, highlighting the depth and breadth of possibilities in the Wolfram Language.
Stay tuned for more competitions, and we hope to see you there!
Between October 1787 and April 1788, a series of essays was published under the pseudonym of “Publius.” Altogether, 77 appeared in four New York City periodicals, and a collection containing these and eight more appeared in book form as The Federalist soon after. Since the twentieth century, these have been known collectively as The Federalist Papers. The aim of these essays, in brief, was to explain the proposed Constitution and influence the citizens of the day in favor of its ratification. The authors were Alexander Hamilton, James Madison and John Jay.
On July 11, 1804, Alexander Hamilton was mortally wounded by Aaron Burr, in a duel beneath the New Jersey Palisades in Weehawken (a town better known in modern times for its tunnels to Manhattan). Hamilton died the next day. Soon after, a list he had drafted became public, claiming authorship of more than sixty essays. James Madison publicized his claims to authorship only after his term as president had come to an end, many years after Hamilton’s death. Their lists overlapped, in that essays 49–58 and 62–63 were claimed by both men. Three essays were claimed by each to have been collaborative works, and essays 2–5 and 64 were written by Jay (intervening illness being the cause of the gap). Herein we refer to the 12 claimed by both men as “the disputed essays.”
Debate over this authorship, among historians and others, ensued for well over a century. In 1944 Douglass Adair published “The Authorship of the Disputed Federalist Papers,” wherein he proposed that Madison had been the author of all 12. It was not until 1963, however, that a statistical analysis was performed. In “Inference in an Authorship Problem,” Frederick Mosteller and David Wallace concurred that Madison had indeed been the author of all of them. An excellent account of their work, written much later, is Mosteller’s “Who Wrote the Disputed Federalist Papers, Hamilton or Madison?” His work on this had its beginnings also in the 1940s, but it was not until the era of “modern” computers that the statistical computations needed could realistically be carried out.
Since that time, numerous analyses have appeared, and most tend to corroborate this finding. Indeed, it has become something of a standard for testing authorship attribution methodologies. I recently had occasion to delve into it myself. Using this technology, developed in the Wolfram Language, I will show results for the disputed essays that are mostly in agreement with this consensus opinion. Not entirely so, however—there is always room for surprise. Brief background: in early 2017 I convinced Catalin Stoean, a coauthor from a different project, to work with me in developing an authorship attribution method based on the Frequency Chaos Game Representation (FCGR) and machine learning. Our paper “Text Documents Encoding through Images for Authorship Attribution” was recently published, and will be presented at SLSP 2018. The method outlined in this blog comes from this recent work.
The idea that rigorous, statistical analysis of text might be brought to bear on determination of authorship goes back at least to Thomas Mendenhall’s “The Characteristic Curves of Composition” in 1887 (earlier work along these lines had been done, but it tended to be less formal in nature). The methods originally used mostly involved comparisons of various statistics, such as frequencies for sentence or word length (the latter in both character and syllable counts), frequency of usage of certain words and the like. Such measures can be used because different authors tend to show distinct characteristics when assessed over many such statistics. The difficulty encountered with the disputed essays was that, by measures then in use, the authors were in agreement to a remarkable extent. More refined measures were needed.
Modern approaches to authorship attribution are collectively known as “stylometry.” Most approaches fall into one or more of the following categories: lexical characteristics (e.g. word frequencies, character attributes such as n-gram frequencies, usage of white space), syntax (e.g. structure of sentences, usage of punctuation) and semantic features (e.g. use of certain uncommon words, relative frequencies of members of synonym families).
Among advantages enjoyed by modern approaches, there is the ready availability on the internet of large corpora, and the increasing availability (and improvement) of powerful machine learning capabilities. In terms of corpora, one can find all manner of texts, newspaper and magazine articles, technical articles and more. As for machine learning, recent breakthroughs in image recognition, speech translation, virtual assistant technology and the like all showcase some of the capabilities in this realm. The past two decades have seen an explosion in the use of machine learning (dating to before that term came into vogue) in the area of authorship attribution.
A typical workflow will involve reading in a corpus, programmatically preprocessing to group by words or sentences, then gathering various statistics. These are converted into a format, such as numeric vectors, that can be used to train a machine learning classifier. One then takes text of known or unknown authorship (for purposes of validation or testing, respectively) and performs similar preprocessing. The resulting vectors are classified by the result of the training step.
We will return to this after a brief foray to describe a method for visualizing DNA sequences.
Nearly thirty years ago, H. J. Jeffrey introduced a method of visualizing long DNA sequences in “Chaos Game Representation of Gene Structure.” In brief, one labels the four corners of a square with the four DNA nucleotide bases. Given a sequence of nucleotides, one starts at the center of this square and places a dot halfway from the current spot to the corner labeled with the next nucleotide in the sequence. One continues placing dots in this manner until the end of a sequence of nucleotides is reached. This in effect makes nucleotide strings into instruction sets, akin to punched cards in mechanized looms.
One common computational approach is slightly different. It is convenient to select a level of pixelation, such that the final result is a rasterized image. The actual details go by the name of the Frequency Chaos Game Representation, or FCGR for short. In brief, a square image space is divided into discrete boxes. The gray level in the resulting image of each such pixelized box is based on how many points from chaos game representation (CGR) land in it.
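The chaos game and its frequency variant can be sketched in a few lines. The following Python translation is illustrative only (the post’s actual code is in the Wolfram Language), and the assignment of nucleotides to corners is an arbitrary choice made here for demonstration:

```python
import numpy as np

# Corner assignment is an arbitrary illustrative choice; any fixed
# labeling of the four corners works.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_points(seq):
    """Chaos game: from the center, step halfway toward each base's corner."""
    x, y = 0.5, 0.5
    pts = []
    for base in seq:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        pts.append((x, y))
    return pts

def fcgr(seq, k=3):
    """Bin the CGR points into a 2^k-by-2^k grid of counts (the FCGR)."""
    n = 2 ** k
    grid = np.zeros((n, n), dtype=int)
    for x, y in cgr_points(seq):
        grid[min(int(y * n), n - 1), min(int(x * n), n - 1)] += 1
    return grid

counts = fcgr("ACGTACGTGGCC", k=2)
print(counts.shape, counts.sum())  # (4, 4) 12
```

Rescaling the counts to gray levels then yields images like the ones shown below.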
Following are images thus created from nucleotide sequences of six different species (cribbed from the author’s “Linking Fourier and PCA Methods for Image Look‐Up”). This has also appeared on Wolfram Community.
It turns out that such images do not tend to vary much from others created from the same nucleotide sequence. For example, the previous images were created from the initial subsequences of length 150,000 from their respective chromosomes. Corresponding images from the final subsequences of corresponding chromosomes are shown here:
As is noted in the referenced article, dimension-reduction methods can now be used on such images, for the purpose of creating a “nearest image” lookup capability. This can be useful, say, for quick identification of the approximate biological family a given nucleotide sequence belongs to. More refined methods can then be brought to bear to obtain a full classification. (It is not known whether image lookup based on FCGR images is alone sufficient for full identification—to the best of my knowledge, it has not been attempted on large sets containing closer neighbor species than the six shown in this section.) It perhaps should go without saying (but I’ll note anyway) that even without any processing, the Wolfram Language function Nearest will readily determine which images from the second set correspond to similar images from the first.
A key aspect of CGR is that it uses an alphabet of length four. This is responsible for a certain fractal effect, in that blocks from each quadrant tend to be approximately repeated in nested subblocks of corresponding nested subquadrants. In order to stay with an alphabet of length four, it was convenient to make the text alphabet a power of four in size and encode each character as multiple base-4 digits. Some experiments indicated that an alphabet of length 16 would work well. Since there are 26 characters in the English version of the Latin alphabet, as well as punctuation, numeric characters, white space and more, some amount of merging was done, with the general idea that “similar” characters could go into the same overall class. For example, we have one class comprised of {c,k,q,x,z}, another of {b,d,p} and so on. This brought the modified alphabet to 16 character classes. Written in base 4, the 16 possibilities give all possible pairs of base-4 digits. The string of base-4 digits thus produced is then used to produce an image from text.
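A sketch of this encoding step follows. Only the two classes named above ({c,k,q,x,z} and {b,d,p}) come from the text; the remaining groupings below are hypothetical stand-ins, since the full merging table is in the paper, not this post. Each of the 16 class indices becomes a pair of base-4 digits, and the resulting digit string drives a four-corner chaos game exactly as for DNA:

```python
# Hypothetical 16-class merging of characters; only the first two groups
# are taken from the text, the rest are invented for illustration.
GROUPS = ["ckqxz", "bdp", "a", "e", "i", "o", "uy", "st",
          "lr", "mn", "fvw", "gjh", " ", ".,;:!?", "0123456789", "'\"-"]

CLASS_OF = {ch: idx for idx, chars in enumerate(GROUPS) for ch in chars}

def text_to_base4(text):
    """Map text to a list of base-4 digits, two per character."""
    digits = []
    for ch in text.lower():
        idx = CLASS_OF.get(ch, 15)     # unlisted characters: last class
        digits.extend(divmod(idx, 4))  # (idx // 4, idx % 4), each in 0-3
    return digits

print(text_to_base4("cat"))  # classes 0, 2, 7 -> [0, 0, 0, 2, 1, 3]
```

The digit list can then be fed to the same CGR/FCGR machinery used for nucleotide sequences.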
For relatively short texts, up to a few thousand characters, say, we simply create one image. Longer texts we break into chunks of some specified size (typically in the range of 2,000–10,000 characters) and make an image for each such chunk. Using ExampleData["Text"] from the Wolfram Language, we show the result for the first and last chunks from Alice in Wonderland and Pride and Prejudice, respectively:
While there is not so much for the human eye to discern between these pairs, machine learning does quite well in this area.
The paper with Stoean provides details for a methodology that has proven to be best from among variations we have tried. We use it to create one-dimensional vectors from the two-dimensional image arrays; use a common dimension reduction via the singular-value decomposition to make the sizes manageable; and feed the training data, thus vectorized, into a simple neural network. The result is a classifier that can then be applied to images from text of unknown authorship.
While there are several moving parts, so to speak, the breadth of the Wolfram Language makes this actually fairly straightforward. The main tools are indicated as follows:
1. Import to read in data.
2. StringDrop, StringReplace and similar string manipulation functions, used for removing initial sections (as they often contain identifying information) and to do other basic preprocessing.
3. Simple replacement rules to go from text to base 4 strings.
4. Simple code to implement FCGR, such as can be found in the Community forum.
5. Dimension reduction using SingularValueDecomposition. Code for this is straightforward, and one version can be found in “Linking Fourier and PCA Methods for Image Look‐Up.”
6. Machine learning functionality, at a fairly basic level (which is the limit of what I can handle). The functions I use are NetChain and NetTrain, and both work with a simple neural net.
7. Basic statistics functions such as Total, Sort and Tally are useful for assessing results.
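The dimension-reduction step (item 5 above) can be sketched in a few lines of numpy. This is an illustrative Python translation with random stand-in data, not the Wolfram Language code the post uses:

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((40, 16, 16))    # 40 random stand-ins for FCGR images
X = images.reshape(len(images), -1)  # flatten: one image per row

Xc = X - X.mean(axis=0)              # center the rows before the SVD
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 10                               # keep the 10 strongest directions
features = Xc @ Vt[:k].T             # reduced vectors for the classifier

print(features.shape)  # (40, 10)
```

The resulting low-dimensional vectors are what get fed to the neural network classifier in the training step.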
Common practice in this area is to show results of a methodology on one or more sets of standard benchmarks. We used three such sets in the referenced paper. Two come from Reuters articles in the realm of corporate/industrial news. One is known as Reuters_50_50 (also called CCAT50). It has fifty authors represented, each with 50 articles for training and 50 for testing. Another is a subset of this, comprised of 50 training and 50 testing articles from ten of the fifty authors. One might think that using both sets entails a certain level of redundancy, but, perhaps surprisingly, past methods that perform very well on either of these tend not to do quite so well on the other. We also used a more recent set of articles, this time in Portuguese, from Brazilian newspapers. The only change to the methodology that this necessitated involved character substitutions to handle e.g. the “c‐with‐cedilla” character ç.
Results of this approach were quite strong. As best we could find in prior literature, scores equaled or exceeded past top scores on all three datasets. Since that time, we have applied the method to two other commonly used examples. One is a corpus comprised of IMDb reviews from 62 prolific reviewers. This time we were not the top performer, but came in close behind two other methods. Each was actually a “hybrid” comprised of weighted scores from some submethods. (Anecdotally, our method seems to make different mistakes from others, at least in examples we have investigated closely. This makes it a sound candidate for adoption in hybridized approaches.) As for the other new test, well, that takes us to the next section.
We now return to The Federalist Papers. The first step, of course, is to convert the text to images. We show a few here, created from first and last chunks from two essays. The ones on the top are from Federalist No. 33 (Hamilton) while those on the bottom are from Federalist No. 44 (Madison). Not surprisingly, they are not different in the obvious ways that the genome‐based images were different:
Before attempting to classify the disputed essays, it is important to ascertain that the methodology is sound. This requires a validation step. We proceeded as follows: We begin with those essays known to have been written by either Hamilton or Madison (we discard the three they coauthored, because there is not sufficient data therein to use). We hold back three entire essays from those written by Madison, and eight from the set by Hamilton (this is in approximate proportion to the relative number each penned). These withheld essays will be our first validation set. We also withhold the final chunk from each of the 54 essays that remain, to be used as a second validation set. (This two‐pronged validation appears to be more than is used elsewhere in the literature. We like to think we have been diligent.)
The results for the first validation set are perfect. Every one of the 70 chunks from the withheld essays is ascribed to its correct author. For the second set, two were erroneously ascribed. The scores for most chunks have the winner around four to seven times higher than the loser. For the two that were mistaken, these ratios dropped considerably, in one case to a factor of three and in the other to around 1.5. Overall, even with the two misses, these are extremely good results as compared to methods reported in past literature. I will remark that all processing, from importing the essays through classifying all chunks, takes less than half a minute on my desktop machine (with the bulk of that occupied in multiple training runs of the neural network classifier).
In order to avail ourselves of the full corpus of training data, we next merge the validation chunks into the training set and retrain. When we run the classifier on chunks from the disputed essays, things are mostly in accordance with prior conclusions. Except…
The first ten essays go strongly to Madison. Indeed, every chunk therein is ascribed to him. The last two go to Hamilton, albeit far less convincingly. A typical aggregated score for one of the convincing outcomes might be approximately 35:5 favoring Madison, whereas for the last two that go to Hamilton the scores are 34:16 and 42:27, respectively. A look at the chunk level suggests a perhaps more interesting interpretation. Essay 62, the next-to-last, has the five-chunk score pairs shown here (first is Hamilton’s score, then Madison’s):
Three are fairly strongly in favor of Hamilton as author (one of which could be classified as overwhelmingly so). The second and fourth are quite close, suggesting that despite the ability to do solid validation, these might be too close to call (or might be written by one and edited by the other).
The results from the final disputed essay are even more stark:
The first four chunks go strongly to Hamilton. The next two go strongly to Madison. The last also favors Madison, albeit weakly. This would suggest again a collaborative effort, with Hamilton writing the first part, Madison roughly a third and perhaps both working on the final paragraphs.
The reader is reminded that this result comes from but one method. In its favor is that it performs extremely well on established benchmarks, and also in the validation step for the corpus at hand. On the counter side, many other approaches, over a span of decades, all point to a different outcome. That stated, we can mention that most (or perhaps all) prior work has not been at the level of chunks, and that granularity can give a better outcome in cases where different authors work on different sections. While these discrepancies with established consensus are of course not definitive, they might serve to prod new work on this very old topic. At the least, other methods might be deployed at the granularity of the chunk level we used (or similar, perhaps based on paragraphs), to see if parts of essays 62 and 63 then show indications of Hamilton authorship.
To two daughters of Weehawken. My wonderful mother‐in‐law, Marie Wynne, was a library clerk during her working years. My cousin Sharon Perlman (1953–2016) was a physician and advocate for children, highly regarded by peers and patients in her field of pediatric nephrology. Her memory is a blessing.
Take a look at some of the posts making Wolfram Community so popular. We’d love to see you posting your Wolfram technology–based projects too!
How does a neural network “see the world” if it has only been trained on beautiful images? Marco Thiel, a professor from the University of Aberdeen, UK, shows how easy it is to answer this not-so-easy question with the Wolfram Language. The diversity of models in the Wolfram Neural Net Repository and the elegant architecture of the Wolfram Language across various domains make this usually laborious project a breeze.
When processing natural language (as with automatic speech recognition), the generated text is often not punctuated. This can lead to problems during further analysis. Mengyi Shan, a Wolfram Summer School student, works with the Wolfram Language in training ten neural networks to recognize where commas and periods should appear. This post received attention from news outlets around the world.
In August 2018, an exceptionally strong storm caused a large suspension bridge in Genoa, Italy, to collapse, killing at least 43 people. Professor Marco Thiel comes back to explore a computational approach to understanding infrastructural issues, using Germany as an example. With just a few lines of Wolfram Language code, you can determine where unsafe bridges are grouped, the correlation between a bridge’s age and its safety level, and how much infrastructure spending has changed within a given period of time.
The ambiguous circle illusion left people with lots of questions. Erik Mahieu uses the Wolfram Language to create an educational analysis for 3D-printed models that produces the illusion in the physical world. His demonstration walks you through the steps from the initial Manipulate to the finished, printed product.
It’s inspiring to see Wolfram artificial intelligence technology empowering real-world research on stem cells, such as at the Developmental Biology Institute of Marseille. Doctoral student Ali Hashmi shares his research advances and neural network design, and expresses appreciation for the Wolfram development team for the efficiency of the Wolfram machine learning framework.
Recently, a paper was published that discusses a fascinating hashing algorithm based on fluid mechanics, and that mentions that all calculations were carried out using the Wolfram Language. As no notebook supplement was given, Wolfram’s Michael Trott reproduced some of the computations from the paper. This post is of particular interest to fans of stunning graphics and captivating computational storytelling.
During the Wolfram High School Summer Camp, Paolo Lammens developed a tool to identify chord sequences in music to create a corresponding graph. This represents all unique chords as vertices and connects all pairs of chronologically subsequent chords with a directed edge. Using MIDI files, Paolo shows every step of the visualization process.
If you haven’t yet signed up to be a member of Wolfram Community, please do so! You can join in on similar discussions, post your own work in groups of your interest and browse the complete list of Staff Picks.
In past blog posts, we’ve talked about the Wolfram Language’s built-in, high-level functionality for 3D printing. Today we’re excited to share an example of how some more general functionality in the language is being used to push the boundaries of this technology. Specifically, we’ll look at how computation enables 3D printing of very intricate sugar structures, which can be used to artificially create physiological channel networks like blood vessels.
Let’s think about how 3D printing takes a virtual design and brings it into the physical world. You start with some digital or analytical representation of a 3D volume. Then you slice it into discrete layers, and approximate the volume within each layer in a way that maps to a physical printing process. For example, some processes use a digital light projector to selectively polymerize material. Because the projector is a 2D array of pixels that are either on or off, each slice is represented by a binary bitmap. For other processes, each layer is drawn by a nozzle or a laser, so each slice is represented by a vector image, typically with a fixed line width.
In each case, the volume is represented as a stack of images, which, again, is usually an approximation of the desired design. Greater fidelity can be achieved by increasing the resolution of the printer—that is, the smallest pixel or thinnest line it can create. However, there is a practical limit, and sometimes a physical limit to the resolution. For example, in digital light projection a pixel cannot be made much smaller than the wavelength of the light used. Therefore, for some kinds of designs, it’s actually easier to achieve higher fidelity by modifying the process itself. Suppose, for example, you want to make a connected network of cylindrical rods with arbitrary orientation (there is a good reason to do this—we’ll get to that). Any process based on layers or pixels will produce some approximation of the cylinders. You might instead devise a process that is better suited to making this shape.
One type of 3D printing, termed fused deposition modeling, deposits material through a cylindrical nozzle. This is usually done layer by layer, but it doesn’t have to be. If the nozzle is translated in 3D, and the material can be made to stiffen very quickly upon exiting, then you have an elegant way of making arbitrarily oriented cylinders. If you can get new cylinders to stick to existing cylinders, then you can make very interesting things indeed. This non-planar deposition process is called direct-write assembly, wireframe printing or freeform 3D printing.
Things that you would make using freeform 3D printing are best represented not as solid volumes, but as structural frames. The data structure is actually a graph, where the nodes of the graph are the joints, and the edges of the graph are the beams in the frame. In the following image, you’ll see the conversion of a model to a graph object. Directed edges indicate the corresponding beam can only be drawn in one direction. An interesting computational question is, given such a frame, how do you print it? More precisely, given a machine that can “draw” 3D beams, what sequence of operations do you command the machine to perform?
First, we can distinguish between motions where we are drawing a beam and motions where we are moving the nozzle without drawing a beam. For most designs, it will be necessary to sometimes move the nozzle without drawing a beam. In this discussion, we won’t think too hard about these non-printing motions. They take time, but, at least in this example, the time it takes to print is not nearly as important as whether the print actually succeeds or fails catastrophically.
We can further define the problem as follows. We have a set of beams to be printed, and each beam is defined by its two endpoint joints. Give a sequence of beams and a printing direction for each beam (i.e. which of its two joints to start drawing from) that is consistent with the following constraints:
1) Directionality: for each beam, we need to choose a direction so that the nozzle doesn’t collide with that beam as it’s printed.
2) Collision: we have to make sure that as we print each beam, we don’t hit a previously printed beam with the nozzle.
3) Connection: we have to start each beam from a physical surface, whether that be the printing substrate or an existing joint.
Let’s pause there for a moment. If these are the only three constraints, and there are only three axes of motion, then finding a sequence that is consistent with the constraints is straightforward. To determine whether printing beam B would cause a collision with beam A, we first generate a volume by sweeping the nozzle shape along the path coincident with beam B to form the 3D region R. If RegionDisjoint[R, A] is False, then printing beam B would cause a collision with beam A. This means that beam A has to be printed first.
Here’s an example from the RegionDisjoint reference page to help illustrate this. Red walls collide with the cow and green walls do not:
cow = ExampleData[{"Geometry3D", "Cow"}, "MeshRegion"];
w1=Hyperplane[{1,0,0},0.39]; w2=Hyperplane[{1,0,0},0.45]; 
wallColor[reg_,wall_]:=If[RegionDisjoint[reg,wall],Green,Red] 
Show[cow, Graphics3D[{{wallColor[cow, w1], w1}, {wallColor[cow, w2], w2}}], PlotRangePadding -> .04]
Mimicking the logic from this example, we can make a function that takes a swept nozzle and finds the beams that it collides with. Following is a Wolfram Language command that visualizes nozzle-beam collisions. The red beams must be drawn after the green one to avoid contact with the blue nozzle as it draws the green beam:
HighlightNozzleCollisions[,{{28,0,10},{23,0,10}}] 
For a printer with three axes of motion, it isn’t particularly difficult to compute collision constraints between all the pairs of beams. We can actually represent the constraints as a directed graph, with the nodes representing the beams, or as an adjacency matrix, where a 1 in element (i, j) indicates that beam i must precede beam j. Here’s the collision matrix for the bridge:
A feasible sequence exists, provided this precedence graph is acyclic. At first glance, it may seem that a topological sort will give such a feasible sequence; however, this does not take the connection constraint into consideration, and therefore non-anchored beams might be sequenced. Somewhat surprisingly, TopologicalSort can often yield a sequence with very few connection violations. For example, in the topological sort, only the 12th and 13th beams violate the connection constraint:
ordering = TopologicalSort[AdjacencyGraph[collisionMatrix]] (* collisionMatrix: the 135×135 SparseArray of collision constraints (2832 specified elements) computed above *)
Instead, to consider all three aforementioned constraints, you can build a sequence in the following greedy manner. At each step, print any beam such that: (a) the beam can be printed starting from either the substrate or an existing joint; and (b) all of the beam’s predecessors have already been printed. There’s actually a clever way to speed this up: go backward. Instead of starting at the beginning, with no beams printed, figure out the last beam you’d print. Remove that last beam, then repeat the process. You don’t have to compute collision constraints for a beam that’s been removed. Keep going until all the beams are gone, then just print in the reverse removal order. This can save a lot of time, because this way you never have to worry about whether printing one beam will make it impossible to print a later beam due to collision. For a three-axis printer this isn’t a big deal, but for a four- or five-axis robot arm it is.
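The forward greedy procedure described above can be sketched as follows. This is an illustrative Python sketch, not the post’s implementation: the beam names, joints and precedence sets are invented, and the backward-removal speedup is omitted for brevity.

```python
def greedy_sequence(beams, preceders, substrate):
    """beams: name -> (joint_a, joint_b); preceders: name -> set of beams
    that must be printed first (collision); substrate: build-plate joints."""
    printed, anchored, order = set(), set(substrate), []
    while len(printed) < len(beams):
        progress = False
        for name, (a, b) in beams.items():
            if name in printed or not preceders[name] <= printed:
                continue                    # collision constraint unmet
            if a in anchored:
                order.append((name, a, b))  # draw from joint a toward b
            elif b in anchored:
                order.append((name, b, a))  # draw from joint b toward a
            else:
                continue                    # connection constraint unmet
            printed.add(name)
            anchored.update((a, b))
            progress = True
        if not progress:
            raise ValueError("no feasible sequence under these constraints")
    return order

# Toy example: two beams rising from the substrate meet at J1, then a
# third beam (which collides with the nozzle path of the first two,
# hence the precedence sets) extends to J2.
beams = {"b1": ("S1", "J1"), "b2": ("S2", "J1"), "b3": ("J1", "J2")}
preceders = {"b1": set(), "b2": set(), "b3": {"b1", "b2"}}
print([name for name, *_ in greedy_sequence(beams, preceders, {"S1", "S2"})])
# ['b1', 'b2', 'b3']
```

Each entry of the returned sequence records the beam and the direction in which to draw it, satisfying the directionality, collision and connection constraints together.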
So the assembly problem under collision, connection and directionality constraints isn’t that hard. However, for printing processes where the material is melted and solidifies by cooling, there is an additional constraint. This is shown in the following video:
See what happened? The nozzle is hot, and it melts the existing joint. Some degree of melting is unfortunately necessary to fuse new beams to existing joints. We could add scaffolding or try to find some physical solution, but we can circumvent it in many cases by computation alone. Specifically, we can find a sequence that is not only consistent with collision, connection and directionality constraints, but that also never requires a joint to simultaneously support two cantilevered beams. Obviously some things, like the tree we tried to print previously, are impossible to print under this constraint. However, it turns out that some very intimidating-looking designs are in fact feasible.
We approach the problem by considering the assembly states. A state is just the set of beams that has been assembled, and contains no information about the order in which they were assembled. Our goal is to find a path from the start state to the end state. Because adjacent states differ by the presence of a single beam, each path corresponds to a unique assembly sequence. For small designs, we can actually generate the whole graph. However, for large designs, exhaustively enumerating the states would take forever. For illustrative purposes, here’s a structure where the full assembly state graph is small enough to enumerate. Note that some states are unreachable or are a dead end:
Note that, whether you start at the beginning and go forward or start at the end and work backward, you can find yourself in a dead end. These dead ends are labeled G and H in the figure. There might be any number of dead ends, and you may have to visit all of them before you find a sequence that works. You might never find a sequence that works! This problem is actually NP-complete—that is, you can’t know if there is a feasible sequence without potentially trying all of them. The addition of the cantilever constraint is what makes the problem hard. You can’t say for sure if printing a beam is going to make it impossible to assemble another beam later. What’s more, going backward doesn’t solve that problem: you can’t say for sure if removing a beam is going to make it impossible to remove a beam later due to the cantilever constraint.
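The state-graph search itself can be sketched as a breadth-first search over sets of printed beams. This is a hypothetical Python sketch (the real implementation uses the Wolfram Language), with a generic `feasible` predicate standing in for the collision, connection, directionality and cantilever checks; dead ends are simply states from which the search never reaches the goal:

```python
from collections import deque

def find_assembly_path(beams, feasible):
    """BFS from the empty state to the fully assembled state.

    beams    -- iterable of beam IDs
    feasible -- function(state, beam) -> True if `beam` can be printed
                when the beams in `state` are already in place
    """
    full = frozenset(beams)
    start = frozenset()
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == full:
            # Walk back through parents to recover the beam sequence.
            sequence = []
            while parent[state] is not None:
                previous = parent[state]
                sequence.append(next(iter(state - previous)))
                state = previous
            return list(reversed(sequence))
        for beam in full - state:
            nxt = state | {beam}
            if nxt not in parent and feasible(state, beam):
                parent[nxt] = state
                queue.append(nxt)
    return None  # every route hits a dead end
```

Enumerating states this way is exactly what blows up for large designs: a structure with n beams has up to 2^n states, which is why the checkpoint trick below matters.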
The key word there is “potentially.” Usually you can find a sequence without trying everything. The algorithm we developed searches the assembly graph for states that don’t contain cantilevers. If you get to one of these states, it doesn’t mean a full sequence exists. However, it does mean that if a sequence exists, you can find one without backtracking past this particular cantilever-free state. This essentially divides the problem into a series of much smaller NP-complete graph search problems. Except in contrived cases, these can be solved quickly, enabling construction of very intricate models:
FindFreeformPath[…, Monitor -> Full]

So that mostly solves the problem. However, further complicating matters is that these slender beams are about as strong as you might expect. Gravity can deform the construct, but there is actually a much larger force attributable to the flow of material out of the nozzle. This force can produce catastrophic failure, such as the instability shown here:
However, it turns out that intelligent sequencing can solve this problem as well. Using models developed for civil engineering, it is possible to compute at every potential step the probability that you’re going to break your design. The problem then becomes not one of finding the shortest path to the goal, but of finding the safest path to the goal. This step requires inversion of large matrices and is computationally intensive, but with the Wolfram Language’s fast builtin solvers, it becomes feasible to perform this process hundreds of thousands of times in order to find an optimal sequence.
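The safest-path idea can be made concrete with a small sketch. Assuming the per-step survival probabilities have already been computed from the structural model (the graph and probabilities below are hypothetical, and this is illustrative Python rather than the project’s Wolfram Language code), maximizing the product of survival probabilities is the same as finding a shortest path under edge weights −log p, which Dijkstra’s algorithm handles directly:

```python
import heapq
import math

def safest_path(edges, start, goal):
    """edges -- dict: state -> list of (next_state, survival_probability)
    Returns (best overall survival probability, path), or (0.0, None)."""
    # Priority queue of (accumulated -log survival probability, state, path).
    pq = [(0.0, start, [start])]
    settled = {}
    while pq:
        cost, state, path = heapq.heappop(pq)
        if state == goal:
            return math.exp(-cost), path
        if settled.get(state, float("inf")) <= cost:
            continue  # already reached this state more safely
        settled[state] = cost
        for nxt, p in edges.get(state, []):
            if p > 0:
                heapq.heappush(pq, (cost - math.log(p), nxt, path + [nxt]))
    return 0.0, None
```

Note how a route with one risky step can lose to a longer but uniformly safe route: the product of probabilities, not the step count, is what is optimized.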
So that’s the how. The next question is, “Why?” Well, the problem is simple enough. Multicellular organisms require a lot of energy. This energy can only be supplied by aerobic respiration, a fancy term for a cascade of chemical reactions. These reactions use oxygen to produce the energy required to power all higher forms of life. Nature has devised an ingenious solution: a complex plumbing system and an indefatigable pump delivering oxygen-rich blood to all of your body’s cells, 24/7. If your heart doesn’t beat at least once every couple of seconds, your brain doesn’t receive enough oxygen-rich blood to maintain consciousness.
We don’t really understand super-high-level biological phenomena like consciousness. We can’t, as far as we can tell, engineer a conscious array of cells, or even of transistors. But we understand pretty well the plumbing that supports consciousness. And it may be that if we can make the plumbing and deliver oxygen to a sufficiently thick slab of cells, we will see some emergent phenomena. A conscious brain is a long shot, a functional piece of liver or kidney decidedly less so. Even a small piece of vascularized breast or prostate tissue would be enormously useful for understanding how tumors metastasize.
The problem is, making the plumbing is hard. Cells in a dish do self-organize to an extent, but we don’t understand such systems well enough to tell a bunch of cells to grow into a brain. Plus, as noted, growing a brain sort of requires attaching it to a heart. Perhaps if we understand the rules that govern the generation of biological forms, we can generate them at will. We know that with some simple mathematical rules, one can generate very complex, interesting structures—the stripes on a zebra, the venation of a leaf. But going backward, reverse-engineering the rule from the form, is hard, to say the least. We have mastered the genome and can program single cells, but we are novices at best when it comes to predicting or programming the behavior of cellular ensembles.
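The “simple rules, complex structure” point has a classic illustration: Wolfram’s Rule 30 elementary cellular automaton, where each cell’s next state depends only on itself and its two neighbors, yet the emergent pattern is highly complex. A minimal Python sketch (parameters chosen arbitrarily for display):

```python
def rule30_rows(width=31, steps=15):
    """Evolve Rule 30 from a single seed cell, on a wrapped row of cells."""
    row = [0] * width
    row[width // 2] = 1  # single seed cell in the middle
    rows = [row]
    for _ in range(steps):
        # Rule 30: new cell = left XOR (center OR right)
        row = [row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]
        rows.append(row)
    return rows

for r in rule30_rows():
    print("".join("#" if cell else "." for cell in r))
```

Running the forward rule is trivial; recovering “Rule 30” from a printout of the pattern is the hard inverse problem the paragraph above describes.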
An alternative means of generating biological forms like vasculature is a bit cruder—just draw the form you want, then physically place all the cells and the plumbing according to your blueprint. This is bioprinting. Bioprinting is exciting because it reduces the generation of biological forms into a set of engineering problems. How do we make a robot put all these cells in the right place? These days, any sentence that starts with “How do we make a robot...” probably has an answer. In this case, however, the problem is complicated by the fact that, while the robot or printer is working, the cells that have already been assembled are slowly dying. For really big, complex tissues, either you need to supply oxygen to the tissue as you assemble it or you need to assemble it really fast.
One approach of the really fast variety was demonstrated in 2009. Researchers at Cornell used a cotton candy machine to melt-spin a pile of sugar fibers. They cast the sugar fibers in a polymer, dissolved them out with water and made a vascular network in minutes, albeit with little control over the geometry. A few years later, researchers at the University of Pennsylvania used a hacked desktop 3D printer to draw molten sugar fibers into a lattice and show that the vascular casting approach was compatible with a variety of cell-laden gels. This was more precise, but not quite freeform. The next step, undertaken in a collaboration between researchers at the University of Illinois at Urbana–Champaign and Wolfram Research, was to overcome the physical and computational barriers to making really complex designs—in other words, to take sugar printing and make it truly freeform.
We’ve described the computational aspects of freeform 3D printing in the first half of this post. The physical side is important too.
First, you need to make a choice of material. Prior work has used glucose or sucrose—things that are known to be compatible with cells. The problem with these materials is twofold: One, they tend to burn. Two, they tend to crystallize while you’re trying to print. If you’ve ever left a jar of honey or maple syrup out for a long time, you can see crystallization in action. Crystals will clog your nozzle, and your print will fail. Instead of conventional sugars, this printer uses isomalt, a low-calorie sugar substitute. Isomalt is less prone to burning or crystallizing than other sugar-like materials, and it turns out that cells are just as OK with isomalt as they are with real sugar.
Next, you need to heat the isomalt and push it out of a tiny nozzle under high pressure. You have to draw pretty slowly—the nozzle moves about half a millimeter per second—but the filament that is formed coincides almost exactly with the path taken by the nozzle. Right now it’s possible to print filaments anywhere from 50 to 500 micrometers, a very nice range for blood vessels.
So the problems of turning a design into a set of printer instructions, and of having a printer that is sufficiently precise to execute them, are more or less solved. This doesn’t mean that 3D-printed organs are just around the corner. There are still problems to be solved in introducing cells in and around these vascular molds. Depending on the ability of the cells to self-organize, dumping them around the mold or flowing them through the finished channels might not be good enough. In order to guide development of the cellular ensemble into a functional tissue, more precise patterning may be required from the outset; direct cell printing would be one way to do this. However, our understanding of self-organizing systems increases every day. For example, last year researchers reproduced the first week of mouse embryonic development in a petri dish. This shows that in the right environment, with the right mix of chemical signals, cells will do a lot of the work for us. Vascular networks deliver oxygen, but they can also deliver things like drugs and hormones, which can be used to poke and prod the development of cells. In this way, bioprinting might enable not just spatial but also temporal control of the cells’ environment. It may be that we use the vascular network itself to guide the development of the tissue deposited around it. Cardiologists shouldn’t expect a 3D-printed heart for their next patients, but scientists might reasonably ask for a 3D-printed sugar scaffold for their next experiments.
So to summarize, isomalt printing offers a route to making interesting physiological structures. Making it work requires a certain amount of mechanical and materials engineering, as one might expect, but also a surprising amount of computational engineering. The Wolfram Language provides a powerful tool for working with geometry and physical models, making it possible to extend freeform bioprinting to arbitrarily large and complex designs.
To learn more about our work, check out our papers: a preprint regarding the algorithm (to appear in IEEE Transactions on Automation Science and Engineering), and another preprint regarding the printer itself (published in Additive Manufacturing).
This work was performed in the Chemical Imaging and Structures Laboratory under the principal investigator Rohit Bhargava at the University of Illinois at Urbana–Champaign.
Matt Gelber was supported by fellowships from the Roy J. Carver Charitable Trust and the Arnold and Mabel Beckman Foundation. We gratefully acknowledge the gift of isomalt and advice on its processing provided by Oliver Luhn of Südzucker AG/BENEO-Palatinit GmbH. The development of the printer was supported by the Beckman Institute for Advanced Science and Technology via its seed grant program.
We would also like to acknowledge Travis Ross of the Beckman Institute Visualization Laboratory for help with macrophotography of the printed constructs. We also thank the contributors of the CAD files on which we based our designs: GrabCAD user M. G. Fouché, 3D Warehouse user Damo and Bibliocas user limazkan (Javier Mdz). Finally, we acknowledge Seth Kenkel for valuable feedback throughout this project.