Derivatives of functions play a fundamental role in calculus and its applications. In particular, they can be used to study the geometry of curves, solve optimization problems and formulate differential equations that provide mathematical models in areas such as physics, chemistry, biology and finance. The function `D` computes derivatives of various types in the Wolfram Language and is one of the most-used functions in the system. My aim in writing this post is to introduce you to the exciting new features for `D` in Version 11.1, starting with a brief history of derivatives.

The idea of a derivative was first used by Pierre de Fermat (1601–1665) and other seventeenth-century mathematicians to solve problems such as finding the tangent to a curve at a point. Given a curve *y*=*f*(*x*), such as the one pictured below, they regarded the tangent line at a point {*x*,*f*(*x*)} as the limiting position of the secant drawn through that point and a nearby point {*x*+*h*,*f*(*x*+*h*)}, as the “infinitesimal” quantity *h* tends to 0.

Their technique can be illustrated as follows.

The slope of a secant line joining {*x*,*f*(*x*)} and {*x*+*h*,*f*(*x*+*h*)} is given by `DifferenceQuotient`.

Now suppose that the function *f*(*x*) is defined as follows.

Then the slope of a secant line joining {*x*,*f*(*x*)} and {*x*+*h*,*f*(*x*+*h*)} is given by the corresponding difference quotient.
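Since the definition of *f* is not reproduced here, the idea can be sketched with a hypothetical choice of function:

```wolfram
(* hypothetical example function, chosen only for illustration *)
f[x_] := x^2 + 1

(* slope of the secant line through {x, f[x]} and {x + h, f[x + h]} *)
DifferenceQuotient[f[x], {x, h}]
(* → h + 2 x *)
```

Setting *h* to 0 in this result then gives the tangent slope 2 *x*.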

The mathematicians of the time then proceeded to find the slope of the tangent by setting *h* equal to 0.

The following animation shows the tangent lines along the curve that are obtained by using the formula for the slope derived above.

The direct replacement of the infinitesimal quantity *h* by 0 works well for simple examples, but it requires considerable ingenuity to compute the limiting value of the difference quotient in more difficult examples. Indeed, Isaac Barrow (1630–1677) and others used geometrical methods to compute this limiting value for a variety of curves. On the other hand, the built-in `Limit` function in the Wolfram Language incorporates methods based on infinite series expansions and can be used for evaluating the required limits. For example, suppose that we wish to find the derivative of `Sin`. We first compute the difference quotient of the function.

Next, we note that setting *h* equal to 0 directly leads to an `Indeterminate` expression, as shown below. The `Quiet` function is used to suppress messages that warn about the indeterminacy.

Although the direct substitution method has failed, we can use `Limit` to arrive at the result that the derivative of `Sin[x]` is `Cos[x]`.
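The whole computation for `Sin` can be sketched in a few lines; the output comments reflect standard Wolfram Language behavior.

```wolfram
(* difference quotient of Sin *)
dq = (Sin[x + h] - Sin[x])/h;

(* direct substitution of h -> 0 fails; Quiet suppresses the warning messages *)
Quiet[dq /. h -> 0]
(* → Indeterminate *)

(* the limit succeeds *)
Limit[dq, h -> 0]
(* → Cos[x] *)
```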

Continuing with the historical development, around 1670, Isaac Newton and Gottfried Wilhelm Leibniz “discovered” calculus in the sense that they introduced the general notions of derivative and integral, developed convenient notations for these two operations and established that they are inverses of each other. However, an air of mystery still surrounded the use of infinitesimal quantities in the works of these pioneers. In his 1734 essay *The Analyst*, Bishop Berkeley called infinitesimals the “ghosts of departed quantities”, and ridiculed the mathematicians of his time by saying that they were “men accustomed rather to compute, than to think.” Meanwhile, calculus continued to provide spectacularly successful models in physics, such as the wave equation for oscillatory motion. These successes spurred mathematicians on to search for a rigorous definition of derivatives using limits, which was finally achieved by Augustin-Louis Cauchy in 1823.

The work of Cauchy and later mathematicians, particularly Karl Weierstrass (1815–1897), laid to rest the controversy about the foundations of calculus. Mathematicians could now treat derivatives in a purely algebraic way without feeling concerned about the treacherous computation of limits. To be more precise, the calculus of derivatives could now be reduced to two sets of rules—one for computing derivatives of individual functions such as `Sin`, and another for finding derivatives of sums, products, compositions, etc. of these functions. It is this algebraic approach to derivatives that is implemented in `D` and allows us to directly calculate the derivative of `Sin` with a single line of input, as shown here.
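In the Wolfram Language, that single line of input is simply:

```wolfram
D[Sin[x], x]
(* → Cos[x] *)
```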

Starting from the derivative of a function, one can compute derivatives of higher orders to gain further insight into the physical phenomenon described by the function. For example, suppose that the position *s*(*t*) of a particle moving along a straight line at time *t* is defined as follows.

Then, the velocity and the acceleration of the particle are given by its first and second derivatives, respectively. The higher derivatives too can be computed easily using `D`; they also have special names, which can be seen in the following computation.

Let us now return to our original example and compute the first four derivatives of `Sin`.
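A short table suffices for this:

```wolfram
(* first four derivatives of Sin *)
Table[D[Sin[x], {x, k}], {k, 1, 4}]
(* → {Cos[x], -Sin[x], -Cos[x], Sin[x]} *)
```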

There is a clear pattern in the table, namely that each derivative may be obtained by adding a multiple of 𝜋/2 to *x*, as shown here.

In Version 11.1, `D` returns exactly this formula for the *n*^{th} derivative of `Sin`.

An immediate application of the above closed form would be to compute higher-order derivatives of functions with blinding speed. `D` itself uses this method to compute the billionth derivative of `Sin` in a flash, using Version 11.1.
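Both computations can be sketched as follows; the displayed forms are what recent versions typically return, up to argument ordering.

```wolfram
(* symbolic n-th derivative of Sin (Version 11.1 and later) *)
D[Sin[x], {x, n}]
(* → Sin[x + (n π)/2] *)

(* billionth derivative: 10^9 π/2 = 5*10^8 π, an even multiple of π,
   so the shift disappears *)
D[Sin[x], {x, 10^9}]
(* → Sin[x] *)
```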

The Wolfram Language has a rich variety of mathematical functions, starting from elementary functions such as `Power` to advanced special functions such as `EllipticE`. The *n*^{th} derivatives for many of these functions can be computed in closed form using `D` in Version 11.1. The following table captures the beauty and complexity of these formulas, each of which encodes all the information required to compute higher derivatives of a given function.

Some of the entries in the table are rather simple. For example, the first entry states that all the derivatives of the exponential function are equal to the function itself, which generalizes the following result from basic calculus.

In sharp contrast to that, the *n*^{th} derivative for `ArcTan` is given by a formidable expression involving `HypergeometricPFQRegularized`.

If we now give specific values to *n* in that formula, we obtain elementary answers from the first few derivatives.

These answers agree, after simplification, with the ones obtained by using `D` separately for each derivative computation.

The familiar sum, product and chain rules of calculus generalize very nicely to the case of *n*^{th} derivatives. The sum rule is the easiest, and simply states that the *n*^{th} derivative of a sum is the sum of the *n*^{th} derivatives.
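A sketch of the symbolic sum rule, for arbitrary functions `f` and `g`:

```wolfram
D[f[x] + g[x], {x, n}]
(* the result is equivalent to Derivative[n][f][x] + Derivative[n][g][x] *)
```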

The product rule, or the so-called Leibniz rule, gives an answer that is essentially a binomial expansion, expressed as a sum wrapped in `Inactive` to prevent evaluation.

We can recover the product rule from a first course on derivatives simply by setting *n*=1 and applying `Activate` to evaluate the resulting inert expression.
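The two steps can be sketched like this (the exact inert form of the output may vary between versions):

```wolfram
(* symbolic Leibniz rule; the result is an Inactive sum of binomial terms *)
leibniz = D[f[x] g[x], {x, n}];

(* recover the first-order product rule *)
Activate[leibniz /. n -> 1]
(* equivalent to g[x] f'[x] + f[x] g'[x] *)
```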

Finally, there is a form of the chain rule due to the pious Italian priest Francesco Faà di Bruno (1825–1888). This is given by a rather messy expression in terms of `BellY`, and states that:

Once again, it is easy to recover the chain rule for first derivatives by setting *n=*1 as we did earlier.
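A sketch of the same procedure for the chain rule:

```wolfram
(* symbolic chain rule; the result involves BellY inside an Inactive sum *)
chain = D[f[g[x]], {x, n}];

(* the first-order case reduces to the familiar chain rule *)
Activate[chain /. n -> 1]
(* equivalent to f'[g[x]] g'[x] *)
```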

The special functions in the Wolfram Language typically occur in families, with different members of each family labeled by integers or other parameters. For example, there is one function `BesselJ[n,z]` for each integer *n*. The first four members of this family are pictured below (the sinusoidal character of Bessel functions helps in the modeling of circular membranes).

It turns out that the derivatives of `BesselJ[n,z]` can be expressed in terms of other Bessel functions from the same family. While earlier versions did make some use of these relationships, Version 11.1 exploits them more fully to return compact answers for examples such as the following, which generated 2^{10}=1024 instances of `BesselJ` in earlier releases!
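The first derivative is given by a standard Bessel recurrence, and higher derivatives now stay compact:

```wolfram
D[BesselJ[n, z], z]
(* → 1/2 (BesselJ[n - 1, z] - BesselJ[n + 1, z]) *)

(* in earlier releases this expanded into ~2^10 BesselJ instances *)
D[BesselJ[n, z], {z, 10}]
```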

The functions considered so far are differentiable in the sense that they have derivatives for all values of the variable. The absolute value function provides a standard example of a non-differentiable function, since it does not have a derivative at the origin. Unfortunately, the built-in `Abs` function is defined for complex values, and hence does not have a derivative at any point. Version 11.1 overcomes this limitation by introducing `RealAbs`, which agrees with `Abs` for real values, as seen in the following plot.

This function has a derivative at all values except at the origin, which is given by:
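In code (the result is shown up to equivalent forms):

```wolfram
D[RealAbs[x], x]
(* → x/RealAbs[x], i.e. Sign[x] for x != 0; no value at the origin *)
```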

The introduction of `RealAbs` is sure to be welcomed by users who have long requested such a function for use in differential equations and other applications.

This real absolute value function is continuous and only mildly non-differentiable, but in 1872, Karl Weierstrass stunned the mathematical world by introducing a fractal function that is continuous at every point but differentiable nowhere. Version 11.1 introduces several fractal curves of this type, which are named after their discoverers. Approximations for a few of these curves are pictured here.

Albert Einstein’s 1916 paper announcing the general theory of relativity provided a great impetus to the development of calculus. In this landmark paper, he made systematic use of the tensor calculus developed by Gregorio Ricci (1853–1925) and his student Tullio Levi-Civita (1873–1941) to formulate a theory of the gravitational field, which has now received superb confirmation through the detection of gravitational waves. The `KroneckerDelta` tensor, which derives its name from the Greek delta character δ that is used to represent it, plays a key role in tensor calculus.

The importance of `KroneckerDelta` lies in the fact that it allows us to “sift” a tensor and isolate individual terms from it with ease. In order to understand this idea, let us obtain the definition of this tensor by applying `PiecewiseExpand` to it.

From the above, we see that `KroneckerDelta[i, j]` is 1 if its components *i* and *j* are equal, and is equal to 0 otherwise. As a result, it allows us to sift through all the terms in the following sum and select, say, the third term *f*(3) from it.
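For example:

```wolfram
(* sift out the third term of a five-term sum *)
Sum[KroneckerDelta[i, 3] f[i], {i, 1, 5}]
(* → f[3] *)
```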

In Version 11.1, `D` makes use of this property of `KroneckerDelta` to differentiate finite sums with symbolic upper limits with respect to an indexed variable *x*(*j*), as illustrated here.

The last result expresses the fact that only the *j*^{th} term in the derivative is nonzero, since none of the other terms depend on *x*(*j*), and hence their derivatives with respect to this variable are 0. For example, if we set *n*=5 and *j*=2, then the sum reduces to the single term *f ^{′}*(*x*(2)).
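The symbolic and concrete cases can be sketched as follows:

```wolfram
(* symbolic upper limit; the result is equivalent to f'[x[j]],
   under the assumption 1 <= j <= n *)
D[Sum[f[x[i]], {i, 1, n}], x[j]]

(* concrete check with n = 5, j = 2 *)
D[Sum[f[x[i]], {i, 1, 5}], x[2]]
(* → Derivative[1][f][x[2]] *)
```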

Along with the improvements for the functionality of `D`, Version 11.1 also includes a major documentation update for this important function. In particular, the reference page now includes many application examples of the types encountered in a typical college calculus course. These examples are based on a large collection of more than 5,000 textbook exercises that were solved by a group of talented interns using the Wolfram Language during the summer of 2016. Some of the graphics from these examples are shown here. You can click anywhere inside each of the three following graphics to view their corresponding examples in the online documentation.

`D` is a venerable function that has been available since Version 1.0 (1988). We hope that the enhancements for this function in Version 11.1 will make it even more appealing to users at all levels. Any comments or feedback about the new features are very welcome.

Exoplanets are currently an active area of research in astronomy. In the past few years, the number of exoplanet discoveries has exploded, mainly as the result of the Kepler mission to survey eclipsing exoplanet systems. But Kepler isn’t the only exoplanet study mission going on. For example, the TRAnsiting Planets and PlanetesImals Small Telescope (TRAPPIST) studies its own set of targets. In fact, the media recently focused on an exoplanet system orbiting an obscure star known as TRAPPIST-1. As an introduction to exoplanet systems, we’ll explore TRAPPIST-1 and its system of exoplanets using the Wolfram Language.

To familiarize yourself with the TRAPPIST-1 system, it helps to start with the host star itself, TRAPPIST-1. Imagine placing the Sun, TRAPPIST-1 and Jupiter alongside one another on a table. How would their sizes compare? The following provides a nice piece of eye candy that lets you see how small TRAPPIST-1 is compared to our Sun. It’s actually only a bit bigger than Jupiter.

Although the diameter looks to be about the same as Jupiter’s, its mass is quite different—actually about 80 times the mass of Jupiter.

And it has only about 8% of the Sun’s mass.
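These comparisons can be sketched with the curated astronomy functions; the entity names below are assumptions about how the Knowledgebase labels these objects, so adjust them if your version uses different names.

```wolfram
(* entity names assumed *)
trappist1 = Entity["Star", "TRAPPIST1"];
jupiter   = Entity["Planet", "Jupiter"];

StarData[trappist1, "Radius"]
StarData[trappist1, "Mass"]
PlanetData[jupiter, "Mass"]

(* fraction of the Sun's mass *)
StarData[trappist1, "Mass"]/StarData[Entity["Star", "Sun"], "Mass"]
```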

TRAPPIST-1 is thus a very low-mass star, at the very edge of the main sequence, but one that still sustains the usual hydrogen fusion in its core.

The exoplanets in this system are what actually gained all of the media attention. All of the exoplanets (blue orbits) found in the TRAPPIST-1 system so far orbit the star at distances that would be far inside the orbit of Mercury (in green), if they were in our solar system.

As a more quantitative approach to study the planets in this system, it is useful to take a look at the orbital periods of these planets, which lie very close together. Planets in such close proximity can often perturb one another, which can result in planets being ejected out of the system, unless some orbital resonances ensure that the planets are never in the wrong place at the wrong time. It’s easy to look up the orbital period of the TRAPPIST-1 planets.

Divide them all by the orbital period of the first exoplanet to look for orbital resonances, as indicated by ratios close to rational fractions.
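A sketch of the lookup and the ratio computation; the exoplanet entity names ("TRAPPIST1b" and so on) are assumptions.

```wolfram
(* entity names assumed *)
planets = Entity["Exoplanet", "TRAPPIST1" <> #] & /@
  {"b", "c", "d", "e", "f", "g", "h"};
periods = ExoplanetData[#, "OrbitalPeriod"] & /@ planets;

(* period ratios relative to the innermost planet *)
N[periods/First[periods]]
```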

These show near resonances with the following ratios.

TRAPPIST-1 h has an inaccurately known orbital period, so it's not clear whether it participates in any resonances.

Similarly, nearest-neighbor orbital period ratios show resonances.

These are close to:

An orbital resonance of 3/2 means that one of the planets orbits 3 times for every 2 of the other. Pluto and Neptune in our solar system exhibit a near 3:2 orbital resonance.
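`Rationalize` with a tolerance is one way to read off such near resonances; for instance, a neighbor period ratio of about 1.51 rounds to 3/2:

```wolfram
Rationalize[1.51, 0.03]
(* → 3/2 *)
```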

This can help explain how so many planets can be packed into such a tight space without experiencing disruptive perturbations.

What about the distances of the exoplanets from their host star? If you placed TRAPPIST-1 and its planets alongside Jupiter and its four Galilean moons, how would they compare? The star and Jupiter are similar in size. The exoplanets are a bit larger than the moons (which are hard to see here) and they orbit a bit farther away, but the overall scales are of similar magnitude. In the following graphic, all distances are to scale, but we magnified the actual diameters of the planets and moons to make them easier to see.

The sizes of the planets can be compared to Jupiter’s four biggest moons for additional scale comparisons.

At the time of this writing, we have curated over 3,400 confirmed exoplanets:

Most of the confirmed exoplanets have been discovered since 2014, during missions such as Kepler.

You can query for data on individual exoplanets.

In addition, there are various classifications of exoplanets that can be queried.

You can perform an analysis to see when the exoplanets were discovered.

You can also do a systematic comparison of exoplanet parameters, which we limit here to the radius and density. We are only considering the entity class of super-Earths here. The red circle marks the approximate location of the TRAPPIST-1 system in this plot.

Here is another example of systematic comparison of exoplanet parameters by discovery method, indicated by color coding. Once again, the TRAPPIST-1 system is shown, with red dots at its mean values.

In addition to data specific to exoplanets, the Wolfram Language also includes data on chemical compounds present in planetary atmospheres.

You can use this data to show, for example, how the density of various atmospheric components changes with temperature.

As a more concrete application, the Wolfram Language also provides the tools needed to explore collections of raw data. For example, here we can import irregularly sampled stellar light-curve data directly from the NASA Exoplanet Archive for the HAT-P-7 exoplanet system.

Then we can remove data points that are not computable.

This raw data can be easily visualized, as shown here.

The following subsamples the data and does some additional post processing to both the times and magnitudes.

Plotting the data shows evidence of eclipses, appearing as a smattering of points below the dense band of data.

A Fourier-like algorithm can identify periodicities in this data.

Zoom into the fundamental frequency, at higher resolution.

We find that the fundamental peak frequency occurs at 0.453571 cycles/day; its reciprocal gives an estimate of the corresponding orbital period in days.
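The conversion is a one-liner:

```wolfram
peakFrequency = 0.453571;  (* cycles/day, read off the periodogram *)
1/peakFrequency
(* ≈ 2.2047 days *)
```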

With some additional processing, we can apply a minimum string length (MSL) algorithm to the raw data to look for periodicities.

We can apply the MSL algorithm to a range of potential periods to try to find a value that minimizes the distance between neighboring points when the data is phase folded.
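A minimal sketch of the minimum string length idea, assuming `data` holds {time, magnitude} pairs (the helper name is hypothetical):

```wolfram
(* string length of the phase-folded data for a trial period:
   fold times modulo the period, sort by phase, and total the
   distances between neighboring points *)
stringLength[data_, period_] := Module[{folded},
  folded = SortBy[{Mod[First[#], period]/period, Last[#]} & /@ data, First];
  Total[Norm /@ Differences[folded]]];

(* scan candidate periods and keep the minimizer, e.g.: *)
(* First[MinimalBy[Range[2.0, 2.4, 0.001], stringLength[data, #] &]] *)
```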

Clearly, the minimum string length occurs at about 2.20474 days, in close agreement with the Fourier-based estimate above.

We can also validate this derived value with the value stored in the header of the original data.

This orbital period corresponds with that of exoplanet HAT-P-7b, as can be seen in the Wolfram curated data collection (complete with units and precision).

From the known orbital period, we can phase fold the original dataset, overlapping the separate eclipses, to obtain a more complete picture of the exoplanet eclipse.

Noise can be reduced by carrying out a phase-binning technique. All data points are placed into bins of width 0.0005 days, and the mean of the values in each bin is determined.
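One way to sketch the binning, assuming `foldedData` holds {phase, magnitude} pairs:

```wolfram
binWidth = 0.0005;
(* group points by bin index, then average each bin elementwise,
   giving one {mean phase, mean magnitude} point per bin *)
binned = Mean /@ GatherBy[SortBy[foldedData, First],
   Floor[First[#]/binWidth] &];
```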

This graphic, mainly for purposes of visualization, shows the host star, HAT-P-7, with its exoplanet HAT-P-7b orbiting it. All parameters, including diameter and orbital radius, are to scale. The idea is to try to reproduce the brightness variations seen in the observed light curve. For this graphic, the `GrayLevel` of the exoplanet is set to `GrayLevel[1]`, which enables you to more clearly see the exoplanet go through phases as it orbits the host star.

Now we can do an analogous thing, generating a list of frames instead of a static graphic. In this case, the `GrayLevel` of the exoplanet is much reduced, as compared to the animation above. For purposes of illustration and to reduce computation time, a small set of values has been chosen around the primary eclipse.

Now, to measure how the brightness of the scene changes, we can use image processing to total all of the pixel values. The scene is rasterized at a large image size so that edge artifacts, which can otherwise have measurable effects on the resulting light curve, are minimized. As a result, this code takes a minute or so to run.

Next, we rescale all of the pixel counts to fit in the same vertical range as the observed light curve.

Now compare the model data to the actual data. The red points show the model data computed at a few orbital phases around the primary eclipse.

A more detailed model light curve can be constructed if you increase the computation time. The version above was done for speed. Of course, additional secondary effects can be included, such as the possibility of gravity darkening and other effects that cause brightness variations across the face of the star. Such secondary effects are beyond the scope of this blog.

Other star systems can be far more complicated and provide their own unique challenges to understanding their dynamical behavior. The Wolfram Language provides a powerful tool that allows you to explore the subtleties of stellar light curve analysis as well as the periodicities in irregularly sampled data. It would be interesting to see some of these more complicated systems tackled in similar ways to what we’ve done in this blog.

Calling all command-line junkies: the new WolframScript is here!

Now you can evaluate Wolfram Language code, call deployed APIs and execute standalone scripts directly from your favorite command-line interface. WolframScript works like any other command-line utility, enabling flexible connections between the Wolfram System and other programs and I/O.

WolframScript comes packaged with Version 11.1 of Mathematica; on Mac, you must run the Extras installer bundled with the Wolfram System. You can also download and install a standalone version from the WolframScript home page.

Once installed, the `wolframscript` executable can be found in the same folder as your desktop application, and it is added to the PATH so you can call it directly from any command-line interface.

When executed with no options, `wolframscript` opens a Wolfram Language console interpreter. This interactive shell (sometimes referred to as a REPL or read–eval–print loop) is a convenient way to write and run Wolfram Language code without launching the desktop front end. It also provides an alternative interface for headless servers or embedded computers (for example, a Raspberry Pi).

When running `wolframscript` in this way, you can simply enter a line of code and press Enter to see the result. Once you’re finished, use `Quit` to terminate the interactive session.

To run a single line of code without launching the interactive shell, use the `-code` option. Commands entered this way are evaluated immediately by the Wolfram Engine, with the result sent to standard output. When evaluation is complete, the Wolfram kernel is terminated. This is convenient for single-use applications, like viewing the contents of a text file using `Import`. (In some cases you’ll need to escape inner double quotes with the \ character.)
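For example (a sketch; `notes.txt` is a hypothetical file):

```
wolframscript -code "2 + 2"

wolframscript -code "Import[\"notes.txt\"]"
```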

You can also use redirection to supply a file as input through standard input. This incoming data is represented within a script by `$ScriptInputString`. Adding `-linewise` uses the standard `NewLine` character as a delimiter, treating each line of text as a separate input value.

For more structured scripting, you can indicate a pure function using the `-function` option and pass in arguments with `-args`. By default, arguments are interpreted as strings.

With the `-signature` option, you can specify how arguments should be parsed in each function slot, including any format available to `Interpreter`—from basic numeric and string types to entities, quantities and many import/export formats. (Keep in mind that some high-level interpreter types require a connection to the Wolfram Cloud.)
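A sketch of both options; treat the exact argument forms as assumptions to be checked against the WolframScript documentation.

```
wolframscript -function "StringReverse[#1]" -args "hello"

wolframscript -function "#1 + #2" -signature Number Number -args 3 4
```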

If you don’t have a local installation of Mathematica, you can run `wolframscript` in the cloud. Adding the `-cloud` option to the end of your command sends the computation to an available cloud kernel. You’ll be asked to authenticate the first time you run something in the cloud.

The `-cloud` option uses a public kernel on the Wolfram Cloud by default. If you’re connected to Wolfram Enterprise Private Cloud, you can specify a different cloud base by passing its URL (e.g. https://privatecloud.mycompany.com) as an argument directly after `-cloud`.

You can open and close these connections manually using `-auth` and `-disconnect`. Each cloud requires separate authentication, and connection data is stored for use during your session. Cloud authentication is only necessary for sending dedicated computations; it doesn’t affect Wolfram Knowledgebase access.

Code from Wolfram Language packages (.wl, .m) can be executed through `wolframscript` using the `-file` option. This evaluates each successive line of code in the file, terminating the kernel when finished.

Unlike with interactive scripting, results from `-file` are not displayed by default unless enclosed in `Print`, `Write` or some similar output function. Using the `-print` option sends the result of the final computation to standard output, and `-print all` shows intermediate results as well.

You can also call deployed APIs with the `-api` option. The following API (generated using `APIFunction` and `CloudDeploy`) returns a forecast of high temperatures for the next week in a given city. To call the API with `wolframscript`, you can reference it by URL or by UUID (the last part of the URL). Parameters are passed in by name; in this case, `-args` is optional.

By default, `wolframscript` gives a low-level text representation of the result. You can select the type of output you want, including any format understood by `Export`, using the `-format` option. For instance, some output may be easier to read in a table format.

When working with non-textual formats (e.g. spreadsheets, audio, video, graphics), it’s often best to write output directly to a file; you can do this using redirection.

Wolfram Language scripts (.wls) are standalone files containing Wolfram Language code that can be executed like any other application. Structurally, scripts are just packages that are launched as programs rather than notebooks by default. You can create a script from Mathematica with the **File** > **New** > **Script** menu item and execute it by typing its name in the command line (prepending ./ on Linux and Mac systems) or by double-clicking its icon in a file explorer.

The shebang line (starting with `#!`) tells the Unix environment to check the PATH for the `wolframscript` executable. On Unix-based systems, you can add launch options to this line by opening the script in a text editor. For instance, if you wanted to implement the travel distance function above as a standalone script, you would include the `-function` and `-signature` options in this line. (As of this writing, these options are bypassed when running scripts in Windows, but the goal is to eventually have all platforms work the same.)

To access command-line arguments, use `$ScriptCommandLine` within your script. Arguments are stored as a list of strings, starting with the full path of the script. In most cases, you’ll want to discard that initial value using `Rest`.

You may need to convert arguments to the correct data type for computations; this can be done using `ToExpression`. This script also checks for arguments first, printing a message if none are found.
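Putting these pieces together, a hypothetical `sum.wls` script that adds up its numeric arguments might look like this:

```wolfram
#!/usr/bin/env wolframscript
(* hypothetical sum.wls: total the numbers passed on the command line *)
args = Rest[$ScriptCommandLine];  (* drop the script path itself *)
If[args === {},
  Print["usage: sum.wls n1 n2 ..."],
  Print[Total[ToExpression /@ args]]
]
```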

Redirection works both ways when executing scripts, allowing for advanced applications such as the following image processing example. To maintain formatting for non-textual output, use `-format` when writing to a file.

You can even launch external programs directly from your script. The following will take a fundamental frequency, generate a bell sound using harmonics, export it to a temporary file and play it in your system’s default audio player.

WolframScript makes it easy to access Wolfram kernels from familiar, low-level interfaces for more flexible and universal computations. And with its cloud connectivity, you can access the Wolfram Language even from machines with no Wolfram System installed.

All the scripts demonstrated here are available for direct download as .wls files. You can execute them directly, change code and launch options in a text editor, or open them in Mathematica for standard notebook features like interactive execution, code highlighting and function completion.

For even more ideas, take a look at the WolframScript documentation and our tutorial on writing scripts. These examples barely scratch the surface—with the full functionality of the Wolfram Language available, the possibilities are endless.

So what are you waiting for? Let’s get scripting!

This year, we’re bringing the European Wolfram Technology Conference to Amsterdam! Join us June 19–20 for two days of expert talks showcasing the latest releases in Wolfram technologies, in-depth explorations of key features and practical use cases for integrating Wolfram technologies in your ecosystem.

Catering to both new and existing users, the conference provides an overview of the entire Wolfram technology stack while also exploring some of our new products and features, including Wolfram|One, the Wolfram Data Repository and the latest capabilities released in Mathematica 11.1!

With a conference dinner rounding out the first day, this is a great opportunity for attendees not only to meet those who develop Wolfram technologies but also to connect with our thriving community of like-minded users.

Session highlights will include keynotes from Conrad Wolfram, Jon McLoone, and a range of Wolfram experts and users from around the world, giving you the inside track on the future direction of computational technology.

Key topics will include:

- Machine learning and neural networks
- Enterprise computation strategies
- Deployment in the Wolfram Cloud
- Signal and image processing

To join us in Amsterdam, register now!


*Differential Equations with Mathematica, Fourth Edition*

The fourth edition of *Differential Equations with Mathematica* is a supplemental reference that uses the fundamental concepts of Mathematica to solve (analytically, numerically and/or graphically) differential equations of interest to students, instructors and scientists. Authors Martha L. Abell and James P. Braselton include instruction on basic methods and algorithms. They cover the Mathematica functions relevant to differential equations and dependent concepts from calculus and linear algebra. This book contains many helpful illustrations that make use of Mathematica’s visualization capabilities.

*Solution Techniques for Elementary Partial Differential Equations, Third Edition*

Christian Constanda teaches students to solve partial differential equations through concise, easily understood explanations and worked examples that allow students to see the techniques in action. The third edition includes new sections on series expansions of more general functions, other problems of general second-order linear equations, vibrating string with other types of boundary conditions and equilibrium temperature in an infinite strip. It also includes new and improved exercises with a brief Mathematica program for nearly all of the worked examples, teaching students how to verify their results with a computer.

*Differential Equations & Linear Algebra, Fourth Edition*

Authors C. Henry Edwards, David E. Penney and David Calvis provide updated and improved figures, examples, problems and applications. With real-world applications and a blend of algebraic and geometric approaches, *Differential Equations & Linear Algebra* introduces students to mathematical modeling of real-world phenomena and offers an array of problem sets. Alongside this fourth edition, an expanded applications website is now available that includes programming tools from Mathematica and Wolfram|Alpha.

*Exploring Calculus: Labs and Projects with Mathematica*

Authors Crista Arangala and Karen A. Yokley created a hands-on lab manual that can be used in class every day to guide the exploration of the theory and applications of differential and integral calculus. Each lab consists of an explanation of material with integrated exercises. The exercise sections combine problems, technology, Mathematica visualization and the Computable Document Format (CDF) to help students discover the theory and applications of differential and integral calculus in a meaningful and memorable way.

*Calculus and Differential Equations with Mathematica*

In this book, Pramote Dechaumphai offers a clear and easy-to-understand presentation of how to use Mathematica to solve calculus and differential equation problems. It contains essential topics that are taught in calculus and differential equation courses, including differentiation, integration, ordinary differential equations and Laplace and Fourier transforms, as well as special functions normally encountered in solving science and engineering problems. Numerical methods are employed when the exact solutions are not available. Additionally, the finite element method in Mathematica is used to analyze partial differential equations for problems with complex geometry. These partial differential equations could be in elliptic, parabolic and hyperbolic forms. Many examples are presented with detailed derivation for their solutions before using Mathematica to confirm the results.

*Geometry, Language and Strategy Vol. 2: The Dynamics of Decision Processes*

The first volume, *Geometry, Language and Strategy*, extended the concepts of game theory, replacing static equilibrium with a deterministic dynamic theory. It also opened up many applications that were only briefly touched on. To study the consequences of the deterministic approach and the extent of these applications in contrast to standard Bayesian approaches requires an engineering foundation and discipline, which this volume supplies. It provides a richer list of applications, such as the prisoner’s dilemma, expanding the relevance of volume 1 to more general time-dependent and transient behaviors.

*Mathematica for Mathematics, Physics and Engineers*

Mehrzad Ghorbani expands on an earlier work, *Applied Mathematical Softwares: Mathematica*, developed over the course of more than 10 years of teaching mathematics software and Mathematica code in Iranian universities. This new title includes more elegant and basic mathematical problems from a range of specializations including calculus, number theory, numerical analysis, vector and matrix algebra, complex variables, graph theory, engineering mathematics and mathematical physics. Although applicable to undergraduate and graduate studies in math and science, Ghorbani’s book is additionally relevant to those who use Mathematica in computational scientific branches that need symbolic or numerical code.

Ever since the partnership between the Raspberry Pi Foundation and Wolfram Research began, people have been excited to discover—and are often surprised by—the power and ease of using the Wolfram Language on a Raspberry Pi. The Wolfram Language’s utility is expanded even more with the addition of the Sense HAT, a module that gives the Raspberry Pi access to an LED array and a collection of environmental and movement sensors. This gives users the ability to read in data from the physical world and display or manipulate it in the Wolfram Language with simple, one-line functions. With the release of Mathematica 11, I’ve been working hard to refine functions that connect to the Sense HAT, allowing Mathematica to communicate directly with the device.

The Sense HAT functionality is built on Wolfram’s Device Driver Framework, so connecting to the device is incredibly simple. To start, use the `DeviceOpen` function to establish a connection. This will return a `DeviceObject`, which we will use later to tell Mathematica which device we want to read from or write to.
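A minimal sketch of that first step (assuming a Raspberry Pi with the Sense HAT attached, and that `"SenseHAT"` is the device name registered with the driver framework):

```wolfram
(* Connect to the Sense HAT; returns a DeviceObject *)
ht = DeviceOpen["SenseHAT"]
```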

In the case of the Sense HAT, there are three onboard sensors that Mathematica can read from. Accessing the data from these sensors is as easy as calling `DeviceRead` with the name of the desired measurement. For instance:
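A temperature reading might look like this (assuming `ht` is the `DeviceObject` returned by `DeviceOpen`, and that `"Temperature"` is the measurement name used by the driver):

```wolfram
(* Read the current temperature; the result comes back with units *)
DeviceRead[ht, "Temperature"]
```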

There are a total of seven measurements that can be read from the Sense HAT: temperature, humidity, air pressure, acceleration, rotation, magnetic field and orientation. All readings are returned with appropriate units, making it easy to convert the values to other formats if necessary.

The other physical component of the Sense HAT is the 8-by-8 LED array. Similar to reading data with `DeviceRead`, it is only a matter of calling the `DeviceWrite` function to send either an image or a string to the array. For strings, the text scrolls across the device sideways. You can manipulate the speed and color of the scrolling text with relevant options as well.
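A sketch of scrolling a string (again assuming `ht` is the open `DeviceObject`; see the Sense HAT documentation page for the exact options controlling scroll speed and color):

```wolfram
(* Scroll a message across the 8-by-8 LED array *)
DeviceWrite[ht, "Hello, Wolfram!"]
```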

Alternatively, the Sense HAT can receive an 8-by-8 list of RGB values to be displayed on the LED array. Using this method, it’s possible to display small images on the screen of the Sense HAT.
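For example, a random 8-by-8 grid of RGB triples might be written like this (whether the driver expects raw `{r, g, b}` triples or `RGBColor` objects is an assumption to check against the documentation):

```wolfram
(* An 8-by-8 array of random RGB values for the LED array *)
pattern = Table[{RandomReal[], RandomReal[], RandomReal[]}, {8}, {8}];
DeviceWrite[ht, pattern]
```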

Here is a picture of what this looks like when written to a Sense HAT:

Using these functions, you can write Mathematica programs that process the data received from the sensors on the Sense HAT. For example, here is a demo I ran at the Wolfram Technology Conference in October 2016. It reads the temperature, humidity and air pressure around the Pi every five minutes and pushes that data to the Wolfram Data Drop.
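The demo can be sketched along these lines (the measurement names follow the description above; `ht` and the newly created databin are assumptions):

```wolfram
bin = CreateDatabin[];  (* a new databin in the Wolfram Data Drop *)
RunScheduledTask[
 DatabinAdd[bin, <|
   "Temperature" -> DeviceRead[ht, "Temperature"],
   "Humidity" -> DeviceRead[ht, "Humidity"],
   "Pressure" -> DeviceRead[ht, "Pressure"]|>],
 300  (* every five minutes *)
]
```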

The above function generates a new databin to record data to, but what does that data look like once it’s been recorded? Let’s look at the recordings I made at the aforementioned Wolfram Technology Conference.

That data can be downloaded into Mathematica by anyone anytime after the conference to show the changes in atmospheric conditions over the course of the conference using `DateListPlot`. Below, you can see the rise in air pressure inside the conference center as more people gathered to see the many demos Wolfram employees had set up, followed by a drop as the conference ended.
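The plot can be reproduced along these lines (the databin ID is a placeholder, and the `"Pressure"` key is an assumption matching the recording sketch above):

```wolfram
(* Plot the recorded air-pressure time series from the databin *)
bin = Databin["<your databin ID>"];
DateListPlot[bin["TimeSeries"]["Pressure"]]
```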

Another demo I ran at the Wolfram Tech Conference made use of `DeviceWrite`. Using the Wolfram Language’s financial database, I turned the Sense HAT into a miniature stock ticker. This demo downloads the current stock market data from Wolfram’s servers, then displays it by picking a random stock from the list and showing the stock’s name and price on the Sense HAT’s LED array.
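A minimal version of the ticker idea (the stock list is illustrative, and `ht` is assumed to be the open `DeviceObject`):

```wolfram
stocks = {"AAPL", "MSFT", "GOOG"};
s = RandomChoice[stocks];
price = FinancialData[s];  (* latest price from Wolfram's servers *)
DeviceWrite[ht, s <> " " <> ToString[price]]  (* scroll name and price *)
```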

The final demo that was run at the Wolfram Tech Conference this year used the Sense HAT’s LED array to run Conway’s Game of Life, a famous cellular automaton. For those unfamiliar with the “Game,” imagine each lit LED is a cell in a Petri dish. If a cell has too few or too many neighbors, it dies out. If an empty space has exactly three living neighbors, a new cell appears there. When these rules have been applied to all of the spaces on the grid, a new “generation” begins and the rules are reapplied. This pattern can continue indefinitely, given the right conditions. In my demo, a random array of lit and unlit LEDs constitutes the starting pattern; then the automaton runs for a given number of iterations.
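A sketch of that demo, using the built-in `"GameOfLife"` rule for `CellularAutomaton` (evolution on the 8-by-8 grid with cyclic boundaries; the color mapping and iteration count are illustrative):

```wolfram
state = RandomInteger[1, {8, 8}];  (* random starting pattern *)
Do[
 DeviceWrite[ht, state /. {1 -> Green, 0 -> Black}];
 state = CellularAutomaton["GameOfLife", state];  (* one generation *)
 Pause[0.5],
 {50}]
```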

The rounds, pause and color parameters can all be modified to change how the automaton is displayed and how long Mathematica waits before displaying the next iteration.

These demos give a taste of what is possible when Mathematica connects with the Sense HAT. Have a look yourself at the Sense HAT documentation page, and send a note to Wolfram Community if you come up with something interesting!

With the world of data science developing at a rapid pace and companies increasingly aware of its importance, Wolfram is pleased to bring together a range of data science experts at the Computation Meets Data Science Conference on 11 May, in partnership with the Satellite Applications Catapult and Digital Catapult.

Over recent years, Wolfram has been a leading force in revolutionising the field of data science, whether through the development of advanced computation technologies or through the creation of bespoke customer solutions. Wolfram developers have a wealth of knowledge for building your data science strategy, implementing the appropriate infrastructure and transforming traditional methodologies with improved automated solutions. Drawing on this expertise, we have brought together a variety of data science and technology specialists to share their knowledge and experience with you, and to provide an opportunity to quiz the experts on your data science challenges.

We’ve organised a jam-packed schedule, starting with opening remarks from Conrad Wolfram, followed by guest talks, interactive sessions and networking breaks. The conference has a little bit of something for everyone, with panel sessions covering the creation of a data science infrastructure, using data science to promote cultural change, and practical use cases of how computation has changed different organisations’ outlook on data.

To give a flavour of some of the specialist sessions we have lined up, speakers will include:

- Keith Harrison, Head of Knowledge Management at the Offshore Renewable Energy Catapult
- Alison Lowndes, Artificial Intelligence Developer Relations at NVIDIA
- Marco Thiel, Professor of Mathematics and Physics at the University of Aberdeen
- Fredrik Döberl, Owner and Founder of Ablona AB

Please visit the conference website for more information or to register.

I’m pleased to announce that as of today, the Wolfram Data Repository is officially launched! It’s been a long road. I actually initiated the project a decade ago—but it’s only now, with all sorts of innovations in the Wolfram Language and its symbolic ways of representing data, as well as with the arrival of the Wolfram Cloud, that all the pieces are finally in place to make a true computable data repository that works the way I think it should.

It’s happened to me a zillion times: I’m reading a paper or something, and I come across an interesting table or plot. And I think to myself: “I’d really like to get the data behind that, to try some things out”. But how can I get the data?

If I’m lucky there’ll be a link somewhere in the paper. But it’s usually a frustrating experience to follow it. Because even if there’s data there (and often there actually isn’t), it’s almost never in a form where one can readily use it. It’s usually quite raw—and often hard to decode, and perhaps even intertwined with text. And even if I can see the data I want, I almost always find myself threading my way through footnotes to figure out what’s going on with it. And in the end I usually just decide it’s too much trouble to actually pull out the data I want.

And I suppose one might think that this is just par for the course in working with data. But in modern times, we have a great counterexample: the Wolfram Language. It’s been one of my goals with the Wolfram Language to build into it as much data as possible—and make all of that data immediately usable and computable. And I have to say that it’s worked out great. Whether you need the mass of Jupiter, or the masses of all known exoplanets, or Alan Turing’s date of birth—or a trillion much more obscure things—you just ask for them in the language, and you’ll get them in a form where you can immediately compute with them.

Here’s the mass of Jupiter (and, yes, one can use “Wolfram|Alpha-style” natural language to ask for it):
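One way to ask for it directly:

```wolfram
(* the mass of Jupiter, as a Quantity with units *)
Entity["Planet", "Jupiter"]["Mass"]
```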

Dividing it by the mass of the Earth immediately works:
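For example:

```wolfram
(* dimensionless ratio of the two masses *)
Entity["Planet", "Jupiter"]["Mass"]/Entity["Planet", "Earth"]["Mass"] // UnitConvert
```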

Here’s a histogram of the masses of known exoplanets, divided by the mass of Jupiter:
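One way to make such a histogram (missing masses are dropped first; `"JupiterMass"` is assumed to be an available unit name):

```wolfram
masses = DeleteMissing[EntityValue["Exoplanet", "Mass"]];
Histogram[QuantityMagnitude[UnitConvert[#, "JupiterMass"]] & /@ masses]
```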

And here, for good measure, is Alan Turing’s date of birth, in an immediately computable form:
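One way to get it, assuming the `"Person"` interpreter type resolves the name to an entity:

```wolfram
(* a computable DateObject for Turing's date of birth *)
Interpreter["Person"]["Alan Turing"]["BirthDate"]
```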

Of course, it’s taken many years and lots of work to make everything this smooth, and to get to the point where all those thousands of different kinds of data are fully integrated into the Wolfram Language—and Wolfram|Alpha.

But what about other data—say data from some new study or experiment? It’s easy to upload it someplace in some raw form. But the challenge is to make the data actually useful.

And that’s where the new Wolfram Data Repository comes in. Its idea is to leverage everything we’ve done with the Wolfram Language—and Wolfram|Alpha, and the Wolfram Cloud—to make it as easy as possible to make data as broadly usable and computable as possible.

There are many parts to this. But let me state our basic goal. I want it to be the case that if someone is dealing with data they understand well, then they should be able to prepare that data for the Wolfram Data Repository in as little as 30 minutes—and then have that data be something that other people can readily use and compute with.

It’s important to set expectations. Making data fully computable—to the standard of what’s built into the Wolfram Language—is extremely hard. But there’s a lower standard that still makes data extremely useful for many purposes. And what’s important about the Wolfram Data Repository (and the technology around it) is that it now makes that standard easy to achieve—with the result that it’s now practical to publish data in a form that can really be used by many people.

Each item published in the Wolfram Data Repository gets its own webpage. Here, for example, is the page for a public dataset about meteorite landings:

At the top is some general information about the dataset. But then there’s a piece of a Wolfram Notebook illustrating how to use the dataset in the Wolfram Language. And by looking at this notebook, one can start to see some of the real power of the Wolfram Data Repository.

One thing to notice is that it’s very easy to get the data. All you do is ask for `ResourceData["Meteorite Landings"]`. And whether you’re using the Wolfram Language on a desktop or in the cloud, this will give you a nice symbolic representation of data about 45,716 meteorite landings (and, yes, the data is carefully cached so this is as fast as possible, etc.):
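In the Wolfram Language:

```wolfram
(* a symbolic Dataset of meteorite landings *)
meteorites = ResourceData["Meteorite Landings"]
```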

And then the important thing is that you can immediately start to do whatever computation you want on that dataset. As an example, this takes the `"Coordinates"` element from all rows, then takes a random sample of 1000 results, and geo plots them:
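That computation looks like this (the `Normal` is a sketch-level precaution to hand `GeoListPlot` a plain list of coordinates):

```wolfram
GeoListPlot[
 RandomSample[Normal@ResourceData["Meteorite Landings"][All, "Coordinates"], 1000]]
```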

Many things have to come together for this to work. First, the data has to be reliably accessible—as it is in the Wolfram Cloud. Second, one has to be able to tell where the coordinates are—which is easy if one can see the dataset in a Wolfram Notebook. And finally, the coordinates have to be in a form in which they can immediately be computed with.

This last point is critical. Just storing the textual form of a coordinate—as one might in something like a spreadsheet—isn’t good enough. One has to have it in a computable form. And needless to say, the Wolfram Language has such a form for geo coordinates: the symbolic construct `GeoPosition[{`*lat*`,`*lon*`}]`.

There are other things one can immediately see from the meteorites dataset too. Like notice there’s a `"Mass"` column. And because we’re using the Wolfram Language, masses don’t have to just be numbers; they can be symbolic `Quantity` objects that correctly include their units. There’s also a `"Year"` column in the data, and again, each year is represented by an actual, computable, symbolic `DateObject` construct.

There are lots of different kinds of possible data, and one needs a sophisticated data ontology to handle them. But that’s exactly what we’ve built for the Wolfram Language, and for Wolfram|Alpha, and it’s now been very thoroughly tested. It involves 10,000 kinds of units, and tens of millions of “core entities”, like cities and chemicals and so on. We call it the Wolfram Data Framework (WDF)—and it’s one of the things that makes the Wolfram Data Repository possible.

Today is the initial launch of the Wolfram Data Repository, and to get ready for this launch we’ve been adding sample content to the repository for several months. Some of what we’ve added are “obvious” famous datasets. Some are datasets that we found for some reason interesting, or curious. And some are datasets that we created ourselves—and in some cases that I created myself, for example, in the course of writing my book *A New Kind of Science*.

There’s plenty already in the Wolfram Data Repository that’ll immediately be useful in a variety of applications. But in a sense what’s there now is just an example of what can be there—and the kinds of things we hope and expect will be contributed by many other people and organizations.

The fact that the Wolfram Data Repository is built on top of our Wolfram Language technology stack immediately gives it great generality—and means that it can handle data of any kind. It’s not just tables of numerical data as one might have in a spreadsheet or simple database. It’s data of any type and structure, in any possible combination or arrangement.

There are time series:

There are training sets for machine learning:

There’s gridded data:

There’s the text of many books:

There’s geospatial data:

Many of the data resources currently in the Wolfram Data Repository are quite tabular in nature. But unlike traditional spreadsheets or tables in databases, they’re not restricted to having just one level of rows and columns—because they’re represented using symbolic Wolfram Language `Dataset` constructs, which can handle arbitrarily ragged structures, of any depth.

But what about data that normally lives in relational or graph databases? Well, there’s a construct called `EntityStore` that was recently added to the Wolfram Language. We’ve actually been using something like it for years inside Wolfram|Alpha. But what `EntityStore` now does is to let you set up arbitrary networks of entities, properties and values, right in the Wolfram Language. It typically takes more curation than setting up something like a `Dataset`—but the result is a very convenient representation of knowledge, on which all the same functions can be used as with built-in Wolfram Language knowledge.

Here’s a data resource that’s an entity store:

This adds the entity stores to the list of entity stores to be used automatically:

Now here are 5 random entities of type `"MoMAArtist"` from the entity store:

For each artist, one can extract a dataset of values:

This queries the entity store to find artists with the most recent birth dates:
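Taken together, the steps above might be sketched like this (the resource name and the `"Dataset"` property are assumptions based on the description; `"MoMAArtist"` is the entity type named above):

```wolfram
store = ResourceData["MoMA Artists"];     (* a data resource that is an EntityStore *)
PrependTo[$EntityStores, store];          (* make its entities available automatically *)
artists = RandomEntity["MoMAArtist", 5];  (* 5 random artists *)
artists[[1]]["Dataset"]                   (* a dataset of property values for one artist *)
```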

The Wolfram Data Repository is built on top of a new, very general thing in the Wolfram Language called the “resource system”. (Yes, expect all sorts of other repository and marketplace-like things to be rolling out shortly.)

The resource system has “resource objects”, which are stored in the cloud (using `CloudObject`), then automatically downloaded and cached on the desktop if necessary (using `LocalObject`). Each `ResourceObject` contains both primary content and metadata. For the Wolfram Data Repository, the primary content is data, which you can access using `ResourceData`.

The Wolfram Data Repository that we’re launching today is a public resource that lives in the public Wolfram Cloud. But we’re also going to be rolling out private Wolfram Data Repositories that can be run in Enterprise Private Clouds—and indeed inside our own company we’ve already set up several private data repositories containing internal data for our company.

There’s no limit in principle on the size of the data that can be stored in the Wolfram Data Repository. But for now, the “plumbing” is optimized for data that’s at most about a few gigabytes in size—and indeed the existing examples in the Wolfram Data Repository make it clear that an awful lot of useful data never even gets bigger than a few megabytes in size.

The Wolfram Data Repository is primarily intended for definitive data that’s not continually changing. For data that’s constantly flowing in—say from IoT devices—last year we released the Wolfram Data Drop. Both Data Repository and Data Drop are deeply integrated into the Wolfram Language, and through our resource system, there’ll be some variants and combinations coming in the future.

Our goal with the Wolfram Data Repository is to provide a central place for data from all sorts of organizations to live—in such a way that it can readily be found and used.

Each entry in the Wolfram Data Repository has an associated webpage, which describes the data it contains, and gives examples that can immediately be run in the Wolfram Cloud (or downloaded to the desktop).

On the webpage for each repository entry (and in the `ResourceObject` that represents it), there’s also metadata, for indexing and searching—including standard Dublin Core bibliographic data. To make it easier to refer to the Wolfram Data Repository entries, every entry also has a unique DOI.

The way we’re managing the Wolfram Data Repository, every entry also has a unique readable registered name that’s used both for the URL of its webpage, and for the specification of the `ResourceObject` that represents the entry.

It’s extremely easy to use data from the Wolfram Data Repository inside a Wolfram Notebook, or indeed in any Wolfram Language program. The data is ultimately stored in the Wolfram Cloud. But you can always download it—for example right from the webpage for any repository entry.

The richest and most useful form in which to get the data is the Wolfram Language or the Wolfram Data Framework (WDF)—either in ASCII or in binary. But we’re also setting it up so you can download in other formats, like JSON (and in suitable cases CSV, TXT, PNG, etc.) just by pressing a button.

Of course, even formats like JSON don’t have native ways to represent entities, or quantities with units, or dates, or geo positions—or all those other things that WDF and the Wolfram Data Repository deal with. So if you really want to handle data in its full form, it’s much better to work directly in the Wolfram Language. But then with the Wolfram Language you can always process some slice of the data into some simpler form that does make sense to export in a lower-level format.

The Wolfram Data Repository as we’re releasing it today is a platform for publishing data to the world. And to get it started, we’ve put in about 500 sample entries. But starting today we’re accepting contributions from anyone. We’re going to review and vet contributions much like we’ve done for the past decade for the Wolfram Demonstrations Project. And we’re going to emphasize contributions and data that we feel are of general interest.

But the technology of the Wolfram Data Repository—and the resource system that underlies it—is quite general, and allows people not just to publish data freely to the world, but also to share data in a more controlled fashion. The way it works is that people prepare their data just like they would for submission to the public Wolfram Data Repository. But then instead of actually submitting it, they just deploy it to their own Wolfram Cloud accounts, giving access to whomever they want.

And in fact, the general workflow is that even when people are submitting to the public Wolfram Data Repository, we’re going to expect them to have first deployed their data to their own Wolfram Cloud accounts. And as soon as they do that, they’ll get webpages and everything—just like in the public Wolfram Data Repository.

OK, so how does one create a repository entry? You can either do it programmatically using Wolfram Language code, or do it more interactively using Wolfram Notebooks. Let’s talk about the notebook way first.

You start by getting a template notebook. You can either do this through the menu item `File > New > Data Resource`, or you can use `CreateNotebook["DataResource"]`. Either way, you’ll get something that looks like this:

Basically it’s then a question of “filling out the form”. A very important section is the one that actually provides the content for the resource:

Yes, it’s Wolfram Language code—and what’s nice is that it’s flexible enough to allow for basically any content you want. You can either just enter the content directly in the notebook, or you can have the notebook refer to a local file, or to a cloud object you have.

An important part of the Construction Notebook (at least if you want to have a nice webpage for your data) is the section that lets you give examples. When the examples are actually put up on the webpage, they’ll reference the data resource you’re creating. But when you’re filling in the Construction Notebook the resource hasn’t been created yet. The symbolic character of the Wolfram Language comes to the rescue, though. Because it lets you reference the content of the data resource symbolically as `$$Data` in the inputs that’ll be displayed, but lets you set `$$Data` to actual data when you’re working in the Construction Notebook to build up the examples.

Alright, so once you’ve filled out the Construction Notebook, what do you do? There are two initial choices: set up the resource locally on your computer, or set it up in the cloud:

And then, if you’re ready, you can actually submit your resource for publication in the public Wolfram Data Repository (yes, you need to get a Publisher ID, so your resource can be associated with your organization rather than just with your personal account):

It’s often convenient to set up resources in notebooks. But like everything else in our technology stack, there’s a programmatic Wolfram Language way to do it too—and sometimes this is what will be best.

Remember that everything that is going to be in the Wolfram Data Repository is ultimately a `ResourceObject`. And a `ResourceObject`—like everything else in the Wolfram Language—is just a symbolic expression, which happens to contain an association that gives the content and metadata of the resource object.

Well, once you’ve created an appropriate `ResourceObject`, you can just deploy it to the cloud using `CloudDeploy`. And when you do this, a private webpage associated with your cloud account will automatically be created. That webpage will in turn correspond to a `CloudObject`. And by setting the permissions of that cloud object, you can determine who will be able to look at the webpage, and who will be able to get the data that’s associated with it.

When you’ve got a `ResourceObject`, you can submit it to the public Wolfram Data Repository just by using `ResourceSubmit`.
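Programmatically, the whole flow might look something like this (the association keys and content are illustrative; consult the `ResourceObject` documentation for the full set of supported fields):

```wolfram
ro = ResourceObject[<|
   "Name" -> "My Example Data",
   "ResourceType" -> "DataResource",
   "Description" -> "A tiny illustrative dataset",
   "Content" -> Dataset[{<|"x" -> 1, "y" -> 2|>, <|"x" -> 3, "y" -> 4|>}]|>];
CloudDeploy[ro]     (* creates a private webpage in your cloud account *)
ResourceSubmit[ro]  (* submits it for publication in the public repository *)
```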

By the way, all this stuff works not just for the main Wolfram Data Repository in the public Wolfram Cloud, but also for data repositories in private clouds. The administrator of an Enterprise Private Cloud can decide how they want to vet data resources that are submitted (and how they want to manage things like name collisions)—though often they may choose just to publish any resource that’s submitted.

The procedure we’ve designed for vetting and editing resources for the public Wolfram Data Repository is quite elaborate—though in any given case we expect it to run quickly. It involves doing automated tests on the incoming data and examples—and then ensuring that these continue working as changes are made, for example in subsequent versions of the Wolfram Language. Administrators of private clouds definitely don’t have to use this procedure—but we’ll be making our tools available if they want to.

OK, so let’s say there’s a data resource in the Wolfram Data Repository. How can it actually be used to create a data-backed publication? The most obvious answer is just for the publication to include a link to the webpage for the data resource in the Wolfram Data Repository. And once people go to the page, it immediately shows them how to access the data in the Wolfram Language, use it in the Wolfram Open Cloud, download it, or whatever.

But what about an actual visualization or whatever that appears in the paper? How can people know how to make it? One possibility is that the visualization can just be included among the examples on the webpage for the data resource. But there’s also a more direct way, which uses Source Links in the Wolfram Cloud.

Here’s how it works. You create a Wolfram Notebook that takes data from the Wolfram Data Repository and creates the visualization:

Then you deploy this visualization to the Wolfram Cloud—either using Wolfram Language functions like `CloudDeploy` and `EmbedCode`, or using menu items. But when you do the deployment, you say to include a source link (`SourceLink->Automatic` in the Wolfram Language). And this means that when you get an embeddable graphic, it comes with a source link that takes you back to the notebook that made the graphic:

So if someone is reading along and they get to that graphic, they can just follow its source link to see how it was made, and to see how it accesses data from the Wolfram Data Repository. With the Wolfram Data Repository you can do data-backed publishing; with source links you can also do full notebook-backed publishing.

Now that we’ve talked a bit about how the Wolfram Data Repository works, let’s talk again about why it’s important—and why having data in it is so valuable.

The #1 reason is simple: it makes data immediately useful, and computable.

There’s nice, easy access to the data (just use `ResourceData["..."]`). But the really important—and unique—thing is that data in the Wolfram Data Repository is stored in a uniform, symbolic way, as WDF, leveraging everything we’ve done with data over the course of so many years in the Wolfram Language and Wolfram|Alpha.

Why is it good to have data in WDF? First, because in WDF the meaning of everything is explicit: whether it’s an entity, or quantity, or geo position, or whatever, it’s a symbolic element that’s been carefully designed and documented. (And it’s not just a disembodied collection of numbers or strings.) And there’s another important thing: data in WDF is already in precisely the form it’s needed for one to be able to immediately visualize, analyze or otherwise compute with it using any of the many thousands of functions that are built into the Wolfram Language.

Wolfram Notebooks are also an important part of the picture—because they make it easy to show how to work with the data, and give immediately runnable examples. Also critical is the fact that the Wolfram Language is so succinct and easy to read—because that’s what makes it possible to give standalone examples that people can readily understand, modify and incorporate into their own work.

In many cases using the Wolfram Data Repository will consist of identifying some data resource (say through a link from a document), then using the Wolfram Language in Wolfram Notebooks to explore the data in it. But the Wolfram Data Repository is fully integrated into the Wolfram Language, so it can be used wherever the language is used. Which means the data from the Wolfram Data Repository can be used not just in the cloud or on the desktop, but also in servers and so on. And, for example, it can also be used in APIs or scheduled tasks, using the exact same `ResourceData` functions as ever.

The most common way the Wolfram Data Repository will be used is one resource at a time. But what’s really great about the uniformity and standardization that WDF provides is that it allows different data resources to be used together: those dates or geo positions mean the same thing even in different data resources, so they can immediately be put together in the same analysis, visualization, or whatever.

The Wolfram Data Repository builds on the whole technology stack that we’ve been assembling for the past three decades. In some ways it’s just a sophisticated piece of infrastructure that makes a lot of things easier to do. But I can already tell that its implications go far beyond that—and that it’s going to have a qualitative effect on the extent to which people can successfully share and reuse a wide range of kinds of data.

It’s a big win to have data in the Wolfram Data Repository. But what’s involved in getting it there? There’s almost always a certain amount of data curation required.

Let’s take a look again at the meteorite landings dataset I showed earlier in this post. It started from a collection of data made available in a nicely organized way by NASA. (Quite often one has to scrape webpages or PDFs; this is a case where the data happens to be set up to be downloadable in a variety of convenient formats.)

As is fairly typical, the basic elements of the data here are numbers and strings. So the first thing to do is to figure out how to map these to meaningful symbolic constructs in WDF. For example, the “mass” column is labeled as being “(g)”, i.e. in grams—so each element in it should get converted to `Quantity[`*value*`,"Grams"]`. It’s a little trickier, though, because for some rows—corresponding to some meteorites—the value is just blank, presumably because it isn’t known.

So how should that be represented? Well, because the Wolfram Language is symbolic it’s pretty easy. And in fact there’s a standard symbolic construct `Missing[...]` for indicating missing data, which is handled consistently in analysis and visualization functions.
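A conversion function for the mass column might look like this (a sketch; it assumes a blank entry comes through as an empty string, and that non-blank entries are numeric strings):

```wolfram
(* map a raw "(g)" column value to a computable WDF form *)
toMass[""] := Missing["NotAvailable"]
toMass[g_String] := Quantity[ToExpression[g], "Grams"]
```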

As we start to look further into the dataset, we see all sorts of other things. There’s a column labeled “year”. OK, we can convert that into `DateObject[{`*value*`}]`—though we need to be careful about any BC dates (how would they appear in the raw data?).

Next there are columns “reclat” and “reclong”, as well as a column called “GeoLocation” that seems to combine these, but with the numbers quoted at a different precision. A little bit of searching suggests that we should just take reclat and reclong as the latitude and longitude of the meteorite—then convert these into the symbolic form `GeoPosition[{`*lat*`,`*lon*`}]`.

To do this in practice, we’d start by just importing all the data:
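For instance (the filename here is a placeholder for whatever NASA file was downloaded):

```wl
(* Import the downloaded file as rows of fields; filename is illustrative *)
raw = Import["meteorite-landings.csv", "CSV"];
headers = First[raw];
rows = Rest[raw];
```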

OK, let’s extract a sample row:

Already there’s something unexpected: the date isn’t just the year, but instead it’s a precise time. So this needs to be converted:

Now we’ve got to reset this to correspond only to a date at a granularity of a year:

Here is the geo position:
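Sketches of these two conversions (the particular date and coordinate values are illustrative):

```wl
(* Truncate a precise timestamp to year granularity *)
d = DateObject[{1880, 1, 1, 0, 0, 0}];
DateObject[d, "Year"]

(* Build a symbolic position from the reclat and reclong fields *)
GeoPosition[{32.1, -101.2}]
```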

And we can keep going, gradually building up code that can be applied to each row of the imported data. In practice there are often little things that go wrong. There’s something missing in some row. There’s an extra piece of text (a “footnote”) somewhere. There’s something in the data that got misinterpreted as a delimiter when the data was provided for download. Each one of these needs to be handled—preferably with as much automation as possible.

But in the end we have a big list of rows, each of which needs to be assembled into an association, then all combined to make a `Dataset` object that can be checked to see if it’s good to go into the Wolfram Data Repository.

The example above is fairly typical of basic curation that can be done in less than 30 minutes by any decently skilled user of the Wolfram Language. (A person who’s absorbed my book *An Elementary Introduction to the Wolfram Language* should, for example, be able to do it.)

It’s a fairly simple example—where notably the original form of the data was fairly clean. But even in this case it’s worth understanding what hasn’t been done. For example, look at the column labeled `"Classification"` in the final dataset. It’s got a bunch of strings in it. And, yes, we can do something like make a word cloud of these strings:
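A minimal sketch, assuming the curated rows have been assembled into a `Dataset` named `meteorites` (a hypothetical variable name):

```wl
(* Word cloud of the raw classification strings; "meteorites" is a
   hypothetical Dataset built from the curated rows *)
WordCloud[Normal[meteorites[All, "Classification"]]]
```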

But to really make these values computable, we’d have to do more work. We’d have to figure out some kind of symbolic representation for meteorite classification, then we’d have to do curation (and undoubtedly ask some meteorite experts) to fit everything nicely into that representation. The advantage of doing this is that we could then ask questions about those values (“what meteorites are above L3?”), and expect to compute answers. But there’s plenty we can already do with this data resource without that.

My experience in general has been that there’s a definite hierarchy of effort and payoff in getting data to be computable at different levels—starting with the data just existing in digital form, and ending with the data being cleanly computable enough that it can be fully integrated in the core Wolfram Language, and used for repeated, systematic computations.

Let’s talk about this hierarchy a bit.

The zeroth thing, of course, is that the data has to exist. And the next thing is that it has to be in digital form. If it started on handwritten index cards, for example, it had better have been entered into a document or spreadsheet or something.

But then the next issue is: how are people supposed to get access to that document or spreadsheet? Well, a good answer is that it should be in some kind of accessible cloud—perhaps referenced with a definite URI. And for a lot of data repositories that exist out there, just making the data accessible like this is the end of the story.

But one has to go a lot further to make the data actually useful. The next step is typically to make sure that the data is arranged in some definite structure. It might be a set of rows and columns, or it might be something more elaborate, and, say, hierarchical. But the point is to have a definite, known structure.

In the Wolfram Language, it’s typically trivial to take data that’s stored in any reasonable format, and use `Import` to get it into the Wolfram Language, arranged in some appropriate way. (As I’ll talk about later, it might be a `Dataset`, it might be an `EntityStore`, it might just be a list of `Image` objects, or it might be all sorts of other things.)

But, OK, now things start getting more difficult. We need to be able to recognize, say, that such-and-such a column has entries representing countries, or pairs of dates, or animal species, or whatever. `SemanticImport` uses machine learning and does a decent job of automatically importing many kinds of data. But there are often things that have to be fixed. How exactly is missing data represented? Are there extra annotations that get in the way of automatic interpretation? This is where one starts needing experts, who really understand the data.
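For example (filename again a placeholder):

```wl
(* SemanticImport tries to interpret each column automatically,
   e.g. as quantities, dates or geo positions *)
ds = SemanticImport["meteorite-landings.csv"];
```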

But let’s say one’s got through this stage. Well, then in my experience, the best thing to do is to start visualizing the data. And very often one will immediately see things that are horribly wrong. Some particular quantity was represented in several inconsistent ways in the data. Maybe there was some obvious transcription or other error. And so on. But with luck it’s fairly easy to transform the data to handle the obvious issues—though to actually get it right almost always requires someone who is an expert on the data.

What comes out of this process is typically very useful for many purposes—and it’s the level of curation that we’re expecting for things submitted to the Wolfram Data Repository.

It’ll be possible to do all sorts of analysis and visualization and other things with data in this form.

But if one wants, for example, to actually integrate the data into Wolfram|Alpha, there’s considerably more that has to be done. For a start, everything that can realistically be represented symbolically has to be represented symbolically. It’s not good enough to have random strings giving values of things—because one can’t ask systematic questions about those. And this typically requires inventing systematic ways to represent new kinds of concepts in the world—like the `"Classification"` for meteorites.

Wolfram|Alpha works by taking natural language input. So the next issue is: when there’s something in the data that can be referred to, how do people refer to it in natural language? Often there’ll be a whole collection of names for something, with all sorts of variations. One has to algorithmically capture all of the possibilities.

Next, one has to think about what kinds of questions will be asked about the data. In Wolfram|Alpha, the fact that the questions get asked in natural language forces a certain kind of simplicity on them. But it makes one also need to figure out just what the linguistics of the questions can be (and typically this is much more complicated than the linguistics for entities or other definite things). And then—and this is often a very difficult part—one has to figure out what people want to compute, and how they want to compute it.

At least in the world of Wolfram|Alpha, it turns out to be quite rare for people just to ask for raw pieces of data. They want answers to questions—that have to be computed with models, or methods, or algorithms, from the underlying data. For meteorites, they might want to know not the raw information about when a meteorite fell, but compute the weathering of the meteorite, based on when it fell, what climate it’s in, what it’s made of, and so on. And to have data successfully be integrated into Wolfram|Alpha, those kinds of computations all need to be there.

For full Wolfram|Alpha there’s even more. Not only does one have to be able to give a single answer, one has to be able to generate a whole report that includes related answers and presents them in a well-organized way.

It’s ultimately a lot of work. There are very few domains that have been added to Wolfram|Alpha with less than a few skilled person-months of work. And there are plenty of domains that have taken person-years or tens of person-years. And to get the right answers, there always has to be a domain expert involved.

Getting data integrated into Wolfram|Alpha is a significant achievement. But there’s further one can go—and indeed to integrate data into the Wolfram Language one has to go further. In Wolfram|Alpha people are asking one-off questions—and the goal is to do as well as possible on individual questions. But if there’s data in the Wolfram Language, people won’t just ask one-off questions with it: they’ll also do large-scale systematic computations. And this demands a much greater level of consistency and completeness—which in my experience rarely takes less than person-years per domain to achieve.

But OK. So where does this leave the Wolfram Data Repository? Well, the good news is that all that work we’ve put into Wolfram|Alpha and the Wolfram Language can be leveraged for the Wolfram Data Repository. It would take huge amounts of work to achieve what’s needed to actually integrate data into Wolfram|Alpha or the Wolfram Language. But given all the technology we have, it takes very modest amounts of work to make data already very useful. And that’s what the Wolfram Data Repository is about.

With the Wolfram Data Repository (and Wolfram Notebooks) there’s finally a great way to do true data-backed publishing—and to ensure that data can be made available in an immediately useful and computable way.

For at least a decade there’s been lots of interest in sharing data in areas like research and government. And there’ve been all sorts of data repositories created—often with good software engineering—with the result that instead of data just sitting on someone’s local computer, it’s now pretty common for it to be uploaded to a central server or cloud location.

But the problem has been that the data in these repositories is almost always in a quite raw form—and not set up to be generally meaningful and computable. And in the past—except in very specific domains—there’s been no really good way to do this, at least in any generality. But the point of the Wolfram Data Repository is to use all the development we’ve done on the Wolfram Language and WDF to finally be able to provide a framework for having data in an immediately computable form.

The effect is dramatic. One goes from a situation where people are routinely getting frustrated trying to make use of data to one in which data is immediately and readily usable. Often there’s been lots of investment and years of painstaking work put into accumulating some particular set of data. And it’s often sad to see how little the data actually gets used—even though it’s in principle accessible to anyone. But I’m hoping that the Wolfram Data Repository will provide a way to change this—by allowing data not just to be accessible, but also computable, and easy for anyone to immediately and routinely use as part of their work.

There’s great value to having data be computable—but there’s also some cost to making it so. Of course, if one’s just collecting the data now, and particularly if it’s coming from automated sources, like networks of sensors, then one can just set it up to be in nice, computable WDF right from the start (say by using the data semantics layer of the Wolfram Data Drop). But at least for a while there’s going to still be a lot of data that’s in the form of things like spreadsheets and traditional databases—that don’t even have the technology to support the kinds of structures one would need to directly represent WDF and computable data.

So that means that there’ll inevitably have to be some effort put into curating the data to make it computable. Of course, with everything that’s now in the Wolfram Language, the level of tools available for curation has become extremely high. But to do curation properly, there’s always some level of human effort—and some expert input—that’s required. And a key question in understanding the post-Wolfram-Data-Repository data publishing ecosystem is who is actually going to do this work.

In a first approximation, it could be the original producers of the data—or it could be professional or other “curation specialists”—or some combination. There are advantages and disadvantages to all of these possibilities. But I suspect that at least for things like research data it’ll be most efficient to start with the original producers of the data.

The situation now with data curation is a little similar to the historical situation with document production. Back when I was first doing science (yes, in the 1970s) people handwrote papers, then gave them to professional typists to type. Once typed, papers would be submitted to publishers, who would then get professional copyeditors to copyedit them, and typesetters to typeset them for printing. It was all quite time consuming and expensive. But over the course of the 1980s, authors began to learn to type their own papers on a computer—and then started just uploading them directly to servers, in effect putting them immediately in publishable form.

It’s not a perfect analogy, but in both data curation and document editing there are issues of structure and formatting—and then there are issues that require actual understanding of the content. (Sometimes there are also more global “policy” issues too.) And for producing computable data, as for producing documents, almost always the most efficient thing will be to start with authors “typing their own papers”—or in the case of data, putting their data into WDF themselves.

Of course, to do this requires learning at least a little about computable data, and about how to do curation. And to assist with this we’re working with various groups to develop materials and provide training about such things. Part of what has to be communicated is about mechanics: how to move data, convert formats, and so on. But part of it is also about principles—and about how to make the best judgement calls in setting up data that’s computable.

We’re planning to organize “curateathons” where people who know the Wolfram Language and have experience with WDF data curation can pair up with people who understand particular datasets—and hopefully quickly get all sorts of data that they may have accumulated over decades into computable form—and into the Wolfram Data Repository.

In the end I’m confident that a very wide range of people (not just techies, but also humanities people and so on) will be able to become proficient at data curation with the Wolfram Language. But I expect there’ll always be a certain mixture of “type it yourself” and “have someone type it for you” approaches to data curation. Some people will make their data computable themselves—or will have someone right there in their lab or whatever who does. And some people will instead rely on outside providers to do it.

Who will these providers be? There’ll be individuals or companies set up much like the ones who provide editing and publishing services today. And to support this we’re planning a “Certified Data Curator” program to help define consistent standards for people who will work with the originators of a wide range of different kinds of data to put it into computable form.

But in addition to individuals or specific “curation companies”, there are at least two other kinds of entities that have the potential to be major facilitators of making data computable.

The first is research libraries. The role of libraries at many universities is somewhat in flux these days. But something potentially very important for them to do is to provide a central place for organizing—and making computable—data from the university and beyond. And in many ways this is just a modern analog of traditional library activities like archiving and cataloging.

It might involve the library actually having a private cloud version of the Wolfram Data Repository—and it might involve the library having its own staff to do curation. Or it might just involve the library providing advice. But I’ve found there’s quite a bit of enthusiasm in the library community for this kind of direction (and it’s perhaps an interesting sign that at our company people involved in data curation have often originally been trained in library science).

In addition to libraries, another type of organization that should be involved in making data computable is publishing companies. Some might say that publishing companies have had it a bit easy in the last couple of decades. Back in the day, every paper they published involved all sorts of production work, taking it from manuscript to final typeset version. But for years now, authors have been delivering their papers in digital forms that publishers don’t have to do much work on.

With data, though, there’s again something for publishers to do, and again a place for them to potentially add great value. Authors can pretty much put raw data into public repositories for themselves. But what would make publishers visibly add value is for them to process (or “edit”) the data—putting in the work to make it computable. The investment and processes will be quite similar to what was involved on the text side in the past—it’s just that now instead of learning about phototypesetting systems, publishers should be learning about WDF and the Wolfram Language.

It’s worth saying that as of today all data that we accept into the Wolfram Data Repository is being made freely available. But we’re anticipating in the near future we’ll also incorporate a marketplace in which data can be bought and sold (and even potentially have meaningful DRM, at least if it’s restricted to being used in the Wolfram Language). It’ll also be possible to have a private cloud version of the Wolfram Data Repository—in which whatever organization that runs it can set up whatever rules it wants about contributions, subscriptions and access.

One feature of traditional paper publishing is the sense of permanence it provides: once even just a few hundred printed copies of a paper are on shelves in university libraries around the world, it’s reasonable to assume that the paper is going to be preserved forever. With digital material, preservation is more complicated.

If someone just deploys a data resource to their Wolfram Cloud account, then it can be available to the world—but only so long as the account is maintained. The Wolfram Data Repository, though, is intended to be something much more permanent. Once we’ve accepted a piece of data for the repository, our goal is to ensure that it’ll continue to be available, come what may. It’s an interesting question how best to achieve that, given all sorts of possible future scenarios in the world. But now that the Wolfram Data Repository is finally launched, we’re going to be working with several well-known organizations to make sure that its content is as securely maintained as possible.

The Wolfram Data Repository—and private versions of it—is basically a powerful, enabling technology for making data available in computable form. And sometimes all one wants to do is to make the data available.

But at least in academic publishing, the main point usually isn’t the data. There’s usually a “story to be told”—and the data is just backup for that story. Of course, having that data backing is really important—and potentially quite transformative. Because when one has the data, in computable form, it’s realistic for people to work with it themselves, reproducing or checking the research, and directly building on it themselves.

But, OK, how does the Wolfram Data Repository relate to traditional academic publishing? For our official Wolfram Data Repository we’re going to have definite standards for what we accept—and we’re going to concentrate on data that we think is of general interest or use. We have a whole process for checking the structure of data, and applying software quality assurance methods, as well as expert review, to it.

And, yes, each entry in the Wolfram Data Repository gets a DOI, just like a journal article. But for our official Wolfram Data Repository we’re focused on data—and not the story around it. We don’t see it as our role to check the methods by which the data was obtained, or to decide whether conclusions drawn from it are valid or not.

But given the Wolfram Data Repository, there are lots of new opportunities for data-backed academic journals that do in effect “tell stories”, but now have the infrastructure to back them up with data that can readily be used.

I’m looking forward, for example, to finally making the journal *Complex Systems* that I founded 30 years ago a true data-backed journal. And there are many existing journals where it makes sense to use versions of the Wolfram Data Repository (often in a private cloud) to deliver computable data associated with journal articles.

But what’s also interesting is that now that one can take computable data for granted, there’s a whole new generation of “Journal of Data-Backed ____” journals that become possible—that not only use data from the Wolfram Data Repository, but also actually present their results as Wolfram Notebooks that can immediately be rerun and extended (and can also, for example, contain interactive elements).

I’ve been talking about the Wolfram Data Repository in the context of things like academic journals. But it’s also important in corporate settings. Because it gives a very clean way to have data shared across an organization (or shared with customers, etc.).

Typically in a corporate setting one’s talking about private cloud versions. And of course these can have their own rules about how contributions work, and who can access what. And the data can not only be immediately used in Wolfram Notebooks, but also in automatically generated reports, or instant APIs.

It’s been interesting to see—during the time we’ve been testing the Wolfram Data Repository—just how many applications we’ve found for it within our own company.

There’s information that used to be on webpages, but is now in our private Wolfram Data Repository, and is now immediately usable for computation. There’s information that used to be in databases, and which required serious programming to access, but is now immediately accessible through the Wolfram Language. And there are all sorts of even quite small lists and so on that used to exist only in textual form, but are now computable data in our data repository.

It’s always been my goal to have a truly “computable company”—and putting in place our private Wolfram Data Repository is an important step in achieving this.

In addition to public and corporate uses, there are also great uses of Wolfram Data Repository technology for individuals—and particularly for individual researchers. In my own case, I’ve got huge amounts of data that I’ve collected or generated over the course of my life. I happen to be pretty organized at keeping things—but it’s still usually something of an adventure to remember enough to “bring back to life” data I haven’t dealt with in a decade or more. And in practice I make much less use of older data than I should—even though in many cases it took me immense effort to collect or generate the data in the first place.

But now it’s a different story. Because all I have to do is to upload data once and for all to the Wolfram Data Repository, and then it’s easy for me to get and use the data whenever I want to. Some data (like medical or financial records) I want just for myself, so I use a private cloud version of the Wolfram Data Repository. But other data I’ve been getting uploaded into the public Wolfram Data Repository.

Here’s an example. It comes from a page in my book *A New Kind of Science*:

The page says that by searching about 8 trillion possible systems in the computational universe I found 199 that satisfy some particular criterion. And in the book I show examples of some of these. But where’s the data?

Well, because I’m fairly organized about such things, I can go into my file system, and find the actual Wolfram Notebook from 2001 that generated the picture in the book. And that leads me to a file that contains the raw data—which then takes a very short time to turn into a data resource for the Wolfram Data Repository:

We’ve been systematically mining data from my research going back into the 1980s—even from Mathematica Version 1 notebooks from 1988 (which, yes, still work today). Sometimes the experience is a little less inspiring. Like to find a list of people referenced in the index of *A New Kind of Science*, together with their countries and dates, the best approach seemed to be to scrape the online book website:

And to get a list of the books I used while working on *A New Kind of Science* required going into an ancient FileMaker database. But now all the data—nicely merged with Open Library information deduced from ISBNs—is in a clean WDF form in the Wolfram Data Repository. So I can do such things as immediately make a word cloud of the titles of the books:

Many things have had to come together to make today’s launch of the Wolfram Data Repository possible. In the modern software world it’s easy to build something that takes blobs of data and puts them someplace in the cloud for people to access. But what’s vastly more difficult is to have the data actually be immediately useful—and making that possible is what’s required the whole development of our Wolfram Language and Wolfram Cloud technology stack, which are now the basis for the Wolfram Data Repository.

But now that the Wolfram Data Repository exists—and private versions of it can be set up—there are lots of new opportunities. For the research community, the most obvious is finally being able to do genuine data-backed publication, where one can routinely make underlying data from pieces of research available in a way that people can actually use. There are variants of this in education—making data easy to access and use for educational exercises and projects.

In the corporate world, it’s about making data conveniently available across an organization. And for individuals, it’s about maintaining data in such a way that it can be readily used for computation, and built on.

But in the end, I see the Wolfram Data Repository as a key enabling technology for defining how one can work with data in the future—and I’m excited that after all this time it’s finally now launched and available to everyone.

It’s National Pet Day on April 11, the day we celebrate furry, feathered or otherwise nonhuman companions. To commemorate the date, we thought we’d use some new features in the Wolfram Language to map a dog walk using pictures taken with a smartphone along the way. After that, we’ll use some neural net functions to identify the content in the photos. One of the great things about Wolfram Language 11.1 is pre-trained neural nets, including Inception V3 trained on ImageNet Competition data and Inception V1 trained on Places365 data, among others, making it super easy for a novice programmer to implement them. These two pre-trained neural nets make it easy to: 1) identify objects in images; and 2) tell a user what sort of landscape an image represents.

The Wolfram Documentation Center also makes this especially easy.

First, we need to talk a little bit about metadata stored in digital photographs. When you snap a photo on your smartphone or digital camera, all sorts of data is saved with the image, including the location where the picture was taken. The exchangeable image file format (EXIF) is a standard, first published in 1995, that organizes the types of metadata stored. For our purposes, we’re interested in geolocation so we can make a map of our dog walk.

To demonstrate how image metadata works, let’s start with a picture of my cats, Chairman Meow and Detective Biscuits, and see how the Wolfram Language can extract where the picture was taken using `GeoPosition`.

Fantastic—we have some coordinates. Now let’s see where on Earth this is on a map using `GeoGraphics`. We’ve defined those coordinates as `"catlocation"` above. Now, using coordinates from the image of my cats dutifully keeping the bed warm, we define our map as `"catmap"`.
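A sketch of the two steps (the filename is a placeholder; `catlocation` and `catmap` are the names used above):

```wl
(* Read the GeoPosition stored in the photo's EXIF metadata *)
catlocation = Import["cats.jpg", "GeoPosition"];

(* Draw a map centered on that position *)
catmap = GeoGraphics[GeoMarker[catlocation], GeoRange -> Quantity[1, "Miles"]]
```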

Excellent. Now let’s use the zoom tool to show where these coordinates are.

So, yes, this picture was taken in my old neighborhood in Baton Rouge, Louisiana, where I was a grad student before starting work here at Wolfram Research (Geaux Tigers!). Very cool, and good to know that data is stored in my iPhone pictures.

Just for fun, let’s see if Wolfram’s built-in knowledge has any data on my old neighborhood, known as the Garden District, using Ctrl + = followed by input, which allows us to use the Wolfram Language’s free-form input capability.

Fantastic. Let’s get a quick map of Baton Rouge.

And how about a map showing the rough outline of the Garden District?

This provides us with a rough outline of the Garden District in Baton Rouge. There is a ton of built-in socioeconomic data in the Wolfram Language we could look at, but we’ll save that for another blog post.

Since I’m not a dog owner, I asked a coworker if I could join her on a dog walk to snap some pictures to first map the walk using nothing but photos from a smartphone, then use a neural net to identify the content of the photos.

So I can just drag and drop the photos into a notebook and define them, and then their locations, from their metadata.

OK, that’s pretty good, but let’s add some points, change up some colors and add tooltips to show the images at each stop.
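One possible version, assuming the dropped photos and their extracted positions live in hypothetical lists `photos` and `locations`:

```wl
(* Styled walk map; each Tooltip shows the photo taken at that stop *)
GeoGraphics[{Red, PointSize[Large],
  MapThread[Tooltip[Point[#1], #2] &, {locations, photos}]}]
```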

In the Wolfram Notebook (or if you download the CDF), when you hover the mouse over each point, it shows the image that was taken at that location. Very cool.

Next, let’s move on to a new feature in Wolfram Language 11.1, pre-trained neural nets. First, we need to define our net, and we’re going to use Inception V3 trained on ImageNet Competition data, implemented with one impressively simple line of code. Inception V3 is a convolutional neural network for image recognition, trained here on the ImageNet dataset. Sounds complicated, but we can implement it easily and figure out what objects are in the images.
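That one line is simply a `NetModel` call (the variable name `net` is our choice):

```wl
(* Load the pre-trained image-recognition network *)
net = NetModel["Inception V3 Trained on ImageNet Competition Data"]
```

Applying `net` to an image then returns its best guess for the object pictured.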

Now all we have to do is put in an image, and we’ll get an accurate result of what our image is.

Fantastic. Let’s try another picture and see how sure the neural net is of its determination by using `"TopProbabilities"` as an option.
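Assuming the loaded network is named `net` and the photo is stored in a hypothetical variable `gooseimage`, the call looks like:

```wl
(* Return the most likely classes together with their probabilities *)
net[gooseimage, "TopProbabilities"]
```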

So its best guess is a goose at a 0.901 probability. Pretty good. Let’s try another image.

The net is less sure in this case what kind of dog this is—which is exactly the right answer: this dog, Maxie, is a mixed border collie/corgi.

Just for fun, let’s see what it thinks my cats are.

Wow. That’s pretty impressive, since they are indeed tabby cats. And I guess it’s reasonable there’s a 0.0446 probability my cats look like platypuses.

Along our dog walk, we took a picture of a pond. Let’s use a different pre-trained neural net to see if it can tell us what kind of landscape it is. For this, we’ll use Inception V1 trained on Places365 data, again implemented with one line of amazingly simple code. This particular neural net identifies landscapes based on a training set of images taken at various locations.
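Again one `NetModel` line does the loading (`placesNet` and `pondimage` are our illustrative names):

```wl
(* Load the scene-recognition net and classify the pond photo *)
placesNet = NetModel["Inception V1 Trained on Places365 Data"];
placesNet[pondimage, "TopProbabilities"]
```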

Very neat. Let’s try something else.

OK, sure, a pasture rather than a park. But you can see that it had other things in mind. This kind of reminds me of Stephen Wolfram’s blog post on overcoming artificial stupidity. Since it was written, neural nets (and Wolfram|Alpha) have certainly come a long way.

And let’s see if we can confuse it with a picture of a sculpture.

Not bad.

As you can see, the pre-trained neural nets work really well. If you want to train your own net, Wolfram Language 11.1 makes it painless to do so. So next time you’re out for a walk taking pictures of random objects and want to recreate your walk from images, you can use these new features in the Wolfram Cloud or Wolfram Desktop.

Happy coding, and happy National Pet Day!

Vibration measurement is an important tool for fault detection in rotating machinery. In a previous post, “How to Use Your Smartphone for Vibration Analysis, Part 1: The Wolfram Language,” I described how you can perform a vibration analysis with a smartphone and Mathematica. Here, I will show how this technique can be improved upon using the Wolfram Cloud. One advantage of this is that I don’t need to bring my laptop.

The configuration of files may vary depending on whether you use an iPhone or an Android. I used an iPhone, with Dropbox for storing the sound file. At the moment, the iPhone’s default recording app, Voice Memos, saves sound files only in M4A format, which can’t yet be imported with the Wolfram Language. Therefore, I used the app Awesome Voice Recorder (AVR), which can save recordings as MP3 and store them in Dropbox.

- First, create a Wolfram Cloud Account.
- Install the Wolfram Cloud app on your smartphone.
- If you don’t already have a Dropbox account, create one.
- Download an app that can save the sound file in MP3 format to your Dropbox. I use AVR.

You are now ready to create a vibration analysis tool.

With my iPhone, I recorded 10 seconds of the sound of my finger rubbing around the rim of a wine glass.

I named the file “wineglass.mp3” and stored it on Dropbox. Some simple code for the Fourier transform of that sound looks like the following:
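A sketch of that code is below; the Dropbox path is illustrative, and I’m assuming a locally synced Dropbox folder so that `Import` can read the file directly:

```wolfram
(* Import the recording; "Data" gives the sample lists per channel,
   "SampleRate" the sampling frequency *)
data = First[Import["~/Dropbox/wineglass.mp3", "Data"]]; (* first channel *)
sr = Import["~/Dropbox/wineglass.mp3", "SampleRate"];
n = Length[data];
fft = Abs[Fourier[data]];
(* Plot the single-sided amplitude spectrum against frequency in Hz *)
ListLinePlot[
 Transpose[{Range[0, Floor[n/2]] sr/n, fft[[;; Floor[n/2] + 1]]}],
 PlotRange -> All, AxesLabel -> {"Frequency (Hz)", "Amplitude"}]
```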

It is now easy to deploy the same code to the Wolfram Cloud. Name the file “FFT Basic”. With modifications to obtain the sound data from a form instead of the file system, the FFT code looks like this:
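The deployed version can be sketched like this; the form field name and result format are illustrative choices, not necessarily those of the original post:

```wolfram
(* Deploy a web form that accepts an uploaded sound and returns the
   spectrum as an image; #sound is a Sound object, whose first part
   holds the sample data and sample rate *)
CloudDeploy[
 FormFunction[{"sound" -> "Sound"},
  Module[{data = #sound[[1, 1, 1]], (* first channel *)
    sr = #sound[[1, 2]], n, fft},
   n = Length[data];
   fft = Abs[Fourier[data]];
   ListLinePlot[
    Transpose[{Range[0, Floor[n/2]] sr/n, fft[[;; Floor[n/2] + 1]]}],
    PlotRange -> All,
    AxesLabel -> {"Frequency (Hz)", "Amplitude"}]] &,
  "PNG"],
 "FFT Basic", Permissions -> "Public"]
```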

Now go to the top-left menu in the mobile cloud app. Select Deployments and then Instant Web Forms, and you’ll see the deployed form. Tap it, select the sound file, and the fast Fourier transform (FFT) results will be shown.

The tool can be generalized by allowing the user to change the scale and draw a line to show known frequencies. Typical frequencies that you may want to follow in vibration analysis are gear mesh frequencies, imbalances, blade-passing frequencies and known resonances. For the wine glass, we don’t currently have a known disturbance or resonance frequency, but I included a line anyway:

Deploy the code to the Wolfram Cloud, naming the file “FFT Analysis”:
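A sketch of the generalized form follows. The extra fields let the user set the frequency-axis limit and place a marker line at a known frequency; the field names are illustrative:

```wolfram
(* Same FFT form as before, plus a user-selected frequency range and a
   vertical grid line marking a known frequency of interest *)
CloudDeploy[
 FormFunction[{"sound" -> "Sound", "fmax" -> "Number",
   "marker" -> "Number"},
  Module[{data = #sound[[1, 1, 1]], (* first channel *)
    sr = #sound[[1, 2]], n, fft},
   n = Length[data];
   fft = Abs[Fourier[data]];
   ListLinePlot[
    Transpose[{Range[0, Floor[n/2]] sr/n, fft[[;; Floor[n/2] + 1]]}],
    PlotRange -> {{0, #fmax}, All},
    GridLines -> {{#marker}, None}, (* line at the known frequency *)
    AxesLabel -> {"Frequency (Hz)", "Amplitude"}]] &,
  "PNG"],
 "FFT Analysis", Permissions -> "Public"]
```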

With the improved code, I can also work with the FFT plot in the app:

This was my first experience with the Wolfram Cloud, and within an hour I had an application ready to go. The resonant frequency of the glass is 771 Hz, a frequency the human voice is capable of producing, so the opera trick of shattering a glass by singing is plausible. We tried using a loudspeaker playing a tone at 771 Hz, but had no success. When we finally manage it, there will be YouTube videos.
