When I first started driving in high school, I had to pay for my own gas. Since I was also saving for college, I had to be careful about my spending, so I started manually tracking how much I was paying for gas in a spreadsheet and calculating how much gas I was using. Whenever I filled my tank, I kept the receipts and wrote down how many miles I’d traveled and how many gallons I’d used. Every few weeks, I would manually enter all of this information into the spreadsheet and plot out the costs and the amount of fuel I had used. This process helped me both visualize how much money I was spending on fuel and manage my budget.
Once I got to college, however, I got a more fuel-efficient car and my schedule got a lot busier, so I didn’t have the time to track my fuel consumption like this anymore. Now I work at Wolfram Research and I’m still really busy, but the cool thing is that I can use our company technology to more easily accomplish my automotive assessments.
After completing this easy project using the Wolfram Cloud’s web form and automated reporting capabilities, I don’t have to spend much time at all to keep track of my fuel usage and other information.
To start this project, I needed a way to store the data. I’ve found that the Wolfram Data Drop is a convenient way to store and access data for many of my projects.
I created a databin to store the data with just one line of Wolfram Language code:
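The original post shows that input as an image; a minimal sketch of the step (the variable name is my own) looks like this:

```mathematica
(* create an empty databin in the Wolfram Data Drop;
   requires a cloud-connected session *)
bin = CreateDatabin[]
```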
Next, I needed to design a web form that I could use to log the data to the Databin. I used FormFunction to set up a basic one to record gallons of fuel used (from filling the tank each time) and trip distance (from reading the car’s onboard computer).
I also added another field for the date and time of the trip, so that I could add data retroactively (e.g. entering data from old receipts).
I used the DateString function to create an approximate time stamp for submitting data:
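The form itself is not reproduced in this text, but a basic version along these lines would work (the field names and interpreters are my own choices, and `bin` is the databin created earlier):

```mathematica
(* a form that logs gallons, trip distance and an editable
   timestamp to the databin *)
form = FormFunction[
  {"Gallons" -> "Number",
   "Distance" -> "Number",
   "Date" -> <|"Interpreter" -> "DateTime", "Default" :> DateString[]|>},
  (DatabinAdd[bin, #]; "Submitted!") &]
```

Using `Default :> DateString[]` pre-fills the date field with an approximate current timestamp while leaving it editable for retroactive entries.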
This form works in the notebook interface, but it isn’t accessible from anywhere but my Mathematica notebook. If you want to access it on the web or from a phone, you need to deploy it to the cloud.

Conveniently, you can do this with just one more line of code using CloudDeploy:
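A sketch of that deployment step (the object name and permissions are my own choices):

```mathematica
(* deploy the form to the cloud so it is reachable from any browser *)
CloudDeploy[form, "fuel-form", Permissions -> "Public"]
```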
If that’s all you wanted to record, you could stop there. After just a few lines of code, the form will log distance traveled and fuel used, but there’s quite a bit more data available while at a gas station.
A typical car’s dashboard shows average speed and odometer readings from the onboard computer. Additionally, most newer cars will report an estimation of the average gas mileage on a per-trip basis, so I designed the following form that makes it easy to test the accuracy of those readings.
I also added a field to record the location by logging the city where I am filling up with the help of Interpreter. I used $GeoLocationCity and CityData to pre-populate this field so I don’t have to type it out each time.
Finally, if you’re saving for college like I was, you’ll want to record the total price too.
All of these data points can be helpful for tracking fuel consumption, efficiency and more.
The last thing to consider before deploying the webpage is the appearance. I set up some visual improvements with the help of AppearanceRules, PageTheme, and FormFunction’s "HTMLThemed" result style:
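Pulling those pieces together, the fuller form might look roughly like this. This is an illustrative reconstruction, not the post’s exact code: the field names, title and theme name are my own choices:

```mathematica
form = FormFunction[
  {"Gallons" -> "Number",
   "Distance" -> "Number",
   "AverageSpeed" -> "Number",   (* from the dashboard *)
   "ReportedMPG" -> "Number",    (* the onboard computer's estimate *)
   "Price" -> "Number",          (* total price of the fill-up *)
   "City" -> <|"Interpreter" -> "City", "Default" :> $GeoLocationCity|>,
   "Date" -> <|"Interpreter" -> "DateTime", "Default" :> DateString[]|>},
  (DatabinAdd[bin, #]; "Submitted!") &,
  "HTMLThemed",
  AppearanceRules -> <|"Title" -> "Fuel Log"|>,
  PageTheme -> "Blue"]
```

The `"City"` interpreter validates free-form city names, and `$GeoLocationCity` pre-populates the field from the device’s current location.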
Now that I have a working form, I need to be able to access it when I’m at a gas station.
I almost always have my smartphone on me, so I can use URLShorten to make a simpler web address that I can type quickly:
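That step is one line (the cloud object name here is the one I assumed when deploying):

```mathematica
(* shorten the deployed form's address for quick typing *)
URLShorten[CloudObject["fuel-form"]]
```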
Or I can avoid typing out a URL altogether by making a QR code with BarcodeImage, which I can read with my phone’s camera application:
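A sketch of the QR step, where `url` stands for the form’s (possibly shortened) address:

```mathematica
(* encode the form's address as a scannable QR code *)
BarcodeImage[url, "QR"]
```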
Once I accessed the form on my phone, I added it as a button on my home screen, which makes returning to the form when I’m at a gas station very easy:
If you’re following along, at this point you can just start logging data by using the form; I personally have been logging this data for my car for over a year now. But what can I do with all of this data?
With the help of more than 5,000 built-in functions, including a wealth of visualization functions, the possibilities are almost limitless.
I started by querying for the data in my car’s databin with Dataset:
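The query looks roughly like this; the bin ID is a placeholder for your own:

```mathematica
(* pull the logged entries into a structured Dataset *)
data = Dataset[Databin["<your bin ID>"]]
```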
With a few lines of code and the built-in entity framework, I can see all of the counties where I’ve traveled over the last year or so using GeoHistogram:
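A minimal sketch, assuming the form stored city entities under a `"City"` key (the original uses county-level binning; the default binning is shown here):

```mathematica
(* the city entities logged by the form, binned as a density over the map *)
GeoHistogram[Normal[data[All, "City"]]]
```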
I can also see the gas mileage over the course of the past year with TimeSeries:
I often wonder what I can do to improve my gas mileage. I know that there are many factors at play here: driving habits, highway/city driving, the weather—just to name a few. With the Wolfram Language, I can see the effects of some of these on my car’s gas mileage.
I can start by looking at my average speed to compare the effects of highway and city driving and compute the correlation:
It’s pretty clear from the plot that at higher average speeds, gas mileage is higher, but it does appear to eventually level off and somewhat decrease. This makes sense because although a higher average speed indicates less city driving (less stop-and-go traffic), it does require burning more fuel to maintain a higher speed. For example, on the interstate, the engine might be running above its optimal RPM, there will be more wind resistance, etc.
With the help of WeatherData, I can also see if there is a correlation with gas mileage and temperature. I can compute the mean temperature for each trip by taking the mean temperatures of each day between the times that I filled up:
The correlation is weaker, but there is a relationship:
I can also visualize both correlations for the average speed and temperature in 3D space by using miles per gallon as the “height”:
It’s also clear from this plot that gas mileage is positively correlated with both temperature and average speed.
Now that I have code to visualize and analyze the data, I need some way to automate this process when I’m away from my computer. For example, I can set up a template notebook that can generate reports in the cloud.
To do this, you can use CreateNotebook["Template"] or File > New > Template Notebook (File > New > Template in the cloud).
After following John Fultz’s steps in his presentation to mimic the TimeSeries plot above, I created a simple report template here:
I can test the report generation locally by using GenerateDocument (or with the Generate button in the template notebook):
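Assuming the template was saved under a hypothetical name, the local test is one line:

```mathematica
(* fill the template and produce a report notebook locally *)
GenerateDocument["FuelReportTemplate.nb"]
```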
From here, I can generate a report every time I submit the form by adding this code to the form’s action. But first I need to upload the template notebook to the cloud with CopyFile (alternatively, you can upload it via the web interface):
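The upload step, again with my hypothetical file name:

```mathematica
(* copy the local template notebook to the cloud *)
CopyFile["FuelReportTemplate.nb", CloudObject["FuelReportTemplate.nb"]]
```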
Now I can update the form to generate the report, and then use HTTPRedirect to open the report as soon as it is finished:
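A sketch of the updated form, under the same assumed names as before; the exact way the post wires `GenerateDocument` into the form’s action may differ:

```mathematica
(* log the submission, regenerate the report and redirect the browser to it *)
form = FormFunction[
  {"Gallons" -> "Number",
   "Distance" -> "Number",
   "Date" -> <|"Interpreter" -> "DateTime", "Default" :> DateString[]|>},
  Module[{report},
    DatabinAdd[bin, #];
    report = GenerateDocument[CloudObject["FuelReportTemplate.nb"]];
    HTTPRedirect[First[report]]] &]
```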
That is a basic report. Of course, it’s easy to add more to the template, which I’ve done here, incorporating some of the plots I created before, as well as a few more. Again, I can generate the advanced report to test the template:
Seeing that it works, I can upload the template to the cloud:
Lastly, I need to update the form to use the new template and then deploy it:
With this setup, I can always access the latest report at the URL the form redirects me to, so I find it handy to also keep it on my phone’s home screen next to the button for the form:
Now you can see how simple it is to use the Wolfram Language to collect and analyze data from your vehicle. I started with a web form and a databin to collect and store information. Then, for convenience, I worked on accessing these through my smartphone. In order to analyze the data, I created visualizations with relevant variables. Finally, I automated the process so that my data collection will generate updated reports as I add new data. Altogether, this is a vast improvement over the manual spreadsheet method that I used when I was in high school.
Now that you see how quick and easy it is to set this up, give it a try yourself! Factor in other variables or try different visualizations, and maybe you can find other correlations. There’s a lot you can do with just a little Wolfram Language code!
Wolfram Community recently surpassed 15,000 members! And our Community members continue to impress us. Here are some recent highlights from the many outstanding Community posts.
BVH Accelerated 3D Shadow Mapping, Benjamin Goodman
Shade data converted to solar map
In a tour de force of computational narrative fusing several Wolfram Language domains, Benjamin Goodman designs a shadow mapping algorithm, a process for applying shadows to a computer graphic. Goodman optimizes shadow mapping via space partitioning, storing a hierarchy of bounding volumes as a graph known as a bounding volume hierarchy (BVH).
Pairs Trading with Copulas, Jonathan Kinlay
Jonathan Kinlay, the head of quantitative trading at Systematic Strategies LLC in New York, shows how copula models can be applied in pairs trading and statistical arbitrage strategies. The approach dates from the period when copulas began to be widely adopted in financial engineering, risk management and credit derivatives modeling, but it remains relatively underexplored compared with more traditional techniques in this field.
The Global Terrorism Database (GTD), Marco Thiel
Marco Thiel broke a Wolfram Community record in April when he contributed four featured posts in just three days! He utilized data from the Global Terrorism Database (GTD), an open-source database of information on terrorist events around the world since 1970. It includes systematic data on domestic as well as transnational and international terrorist events, amounting to more than 150,000 cases. Thiel analyzes weapon types, the geographic distribution of attacks and casualties, and temporal and demographic patterns.
Flight Data and Trajectories of Aeroplanes, Marco Thiel
Thiel takes advantage of the large amounts of data becoming ever more available. Often, however, these datasets are valuable but difficult to access. Thiel shows how to use air traffic data to generate visualizations of three-dimensional flight paths on the globe and access flight positions and altitudes, call signs, types of planes, origins, destinations and much more.
Analysing “All” of the World’s News—Database of Everything, Marco Thiel
In another clever data collection/analysis project, Thiel works with “the largest, most comprehensive, and highest resolution open database of human society ever created,” according to the description provided by GDELT (Global Database of Events, Language, and Tone). Since 2015, this organization has acquired about three-quarters of a trillion emotional snapshots and more than 1.5 billion location references. Thiel performs some basic analysis and builds supporting visualizations.
How-to-Guide: External GPU on OSX—How to Use CUDA on Your Mac, Marco Thiel
Thiel discusses the neural network and machine learning framework that has become one of the key features of the latest releases of the Wolfram Language. Training neural networks can be very time-consuming, and the Wolfram Language offers an incredibly easy way to use a GPU to train networks and also do numerous other interesting computations. This post explains how to use powerful external GPU units for Wolfram Language computing on your Mac.
Creative Routines Charts, Patrick Scheibe
People are often interested in how creative or successful individuals manage their time, and when in their daily schedules they do what they are famous for. Patrick Scheibe describes how to build and personalize “creative routines” visualizations.
QR Code in Shopping Cart Handle, Patrick Scheibe
Scheibe also brought to Wolfram Community his famous article “QR Code in Shopping Cart Handle.” It explains the image processing algorithm for reading QR code labels when they are deformed by attachment to physical objects such as shopping carts and product packages.
Calculating NMR-Spectra with Wolfram Language, Hans Dolhaine
Hans Dolhaine, a chemist from Germany, writes a detailed walk-through calculating nuclear magnetic resonance spectra with the Wolfram Language. This is a useful educational tool for graduate physics and chemistry classes. Please feel free to share it in your interactions with students and educators.
Computational Introduction to Logarithms, Bill Gosper
Another excellent resource for educators is this elementary introduction to logarithms by means of computational exploration with the Wolfram Language. The Community contributor is renowned mathematician and programmer Bill Gosper. His article is highly instructive and accessible to a younger generation, and it contains beautiful animated illustrations that serve as outstanding educational material.
Using Recursion and FindInstance to Solve Sudoku and The Puzzled Ant and Particle Filter, Ali Hashmi
Finally, Ali Hashmi uses recursion coupled with heuristics to solve a sudoku puzzle, and also explains the connection between the puzzled ant problem and particle filters in computer vision.
If you haven’t yet signed up to be a member of Wolfram Community, don’t hesitate! You can join in on these discussions, post your own work in groups of your interest, and browse the complete list of Staff Picks.
As the next phase of Wolfram Research’s endeavor to make biology computable, we are happy to announce the recent release of neuroscience-related content.
The central organ of the human nervous system is the brain. It contains roughly 100 billion neurons that act together to process information, and is subdivided functionally and structurally into areas specialized for certain tasks. The brain’s anatomy, the characteristics of neurons and cognitive maps are used to represent some key aspects of the functional organization and processing abilities of our nervous system. Our new neuroscience content will give you a sneak peek into the amazing world of neuroscience with some facts about brains, neurons and cognition.
A primal part of the brain, the amygdala is a well-studied area responsible for emotional processing, with active roles in emotional state, memory, face recognition and decision making. The amygdala is located near the brainstem, close to the center of the brain, and, as its name suggests, is shaped like an almond:
Outgoing connections from the amygdala can be found with the "NeuronalOutput" property:
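A sketch of that query; I am assuming the amygdala can be resolved via the anatomical-structure interpreter, and the exact entity name may differ:

```mathematica
(* structures receiving neuronal connections from the amygdala *)
amygdala = Interpreter["AnatomicalStructure"]["amygdala"];
AnatomyData[amygdala, "NeuronalOutput"]
```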
Here we see a visualization of the output connectivity of the amygdala in two layers:
Just as with a simple network, we can do additional computations on these connectivity graphs. Like many other biological systems, our nervous system is hardwired to receive positive and negative feedback. Feedback is one of the key aspects of the brain’s information processing; it allows the efficacy of transmission to be augmented or decreased, as well as fine-tuning of the resulting outputs.
We can find a feedback loop and highlight it in the above graph:
Or find the specific circuit that comprises the combination amygdala-prefrontal cortex. The prefrontal cortex has the primary role in decision making, and therefore the amygdala-prefrontal cortex connectivity plays an essential role in modulating responses to emotional experiences:
We can also identify the minimum-cost flow between the amygdala and the spinal cord. The spinal cord processes signals from the brain and transmits them to other parts of the body to excite motor response:
It is also noteworthy that, in addition to the brain’s connectivity in the central nervous system, we have peripheral innervation integrated in our AnatomyData function. The motor commands from the spinal cord eventually reach the periphery.
Find nerves that innervate the left hand:
And visualize them in 3D with the AnatomyPlot3D function:
We have looked at macroscopic pictures of our nervous system so far. Now let’s look at the brain’s functional unit, the neuron. Of course, we cannot characterize all the billions of neurons, but key features of a few hundred types of neurons are very similar across various mammalian species; these will be considered in detail.
A variety of properties are available for the "Neuron" entity type to describe physical, electrophysiological and spatial characteristics of individual types of neurons:
We can get information on the types of neurons found in a particular brain region. For example, we can get a listing of neurons in the hippocampus, which is associated with emotional states, conversion of short-term to long-term memories and forming spatial memory:
Collecting further details, list the set of neurons whose axons arborize at the CA1 alveus area of the hippocampus:
Neurons transmit electrical signals to communicate with one another. Physical characteristics and patterns of their spikes, known as action potentials, differ across different neuron types.
We can obtain experimentally measured electrophysiological properties of hippocampus CA1 pyramidal cells:
Here we can visually recognize how spike characteristics vary across different neuron types:
A single neuron’s spike propagation can be simulated with the well-known Hodgkin and Huxley model (A. L. Hodgkin and A. F. Huxley, 1952) based on four differential equations involving voltages and currents. Also, there are biologically realistic computational models accommodating Hodgkin and Huxley’s concepts developed to simulate ensembles of spikes in a population of neurons (E. M. Izhikevich, 2004). We can better understand how neurons excite/suppress one another to transmit information by modeling the neurons’ electrical spikes and comparing their patterns of activities with experimentally measured ones:
After looking at microscopic features in our brain, let us finally explore the brain’s macro-scale executive function. Thanks to recent advances in imaging techniques to visualize brain activity in various cognitive states, we can map out cortical areas that are associated with specific cognitive processes. Brain areas associated with specific functions such as memory, decision making, language, emotional state, visual perception, etc. are well characterized with the appropriate activity-based fMRI analysis.
Using the EntityValue query with the AnatomicalFunctionalConcept entity type, we can find more information on hierarchically categorized brain activities:
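As a starting point, the standard entity-framework idiom lists what is available for this entity type:

```mathematica
(* the properties defined for the AnatomicalFunctionalConcept entity type *)
EntityValue["AnatomicalFunctionalConcept", "Properties"]
```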
Here we can look up the categories of functions associated with each cerebral lobe and create a simple cortical map:
We are not limited to the abstract representation of cortical maps; fMRI-based statistical maps of brain activity are also available.
Let’s look at how we perceive the visual world. A key aspect of visual perception is the subprocess (concept) of cognition as our brain categorizes our visually perceived faces, places, words, numbers, etc. with distinctive patterns of activity. The following graph illustrates how these concepts are hierarchically organized. Some areas of brain activation are highlighted (brain images are seen from the rear):
OK, let’s look further. Visually perceived words, sentences, faces, etc., in turn, affect “language” and “emotion”:
We can confirm that the amygdala (remember, the left and right amygdalae found near the center of the brain) is actively involved in emotions. If you want to learn more about these individual models, they are also available in 3D polygon data and ready to be aligned to our 3D brain model in AnatomyData for further computation.
Here is the brain activation area 3D graphic associated with emotion:
We can combine that graphic together with the brain model for visual comparison (the amygdala is highlighted in red; the right cerebral hemisphere is shown here for demonstration):
It’s fascinating to learn how our brain is organized and how it coordinates the processes in our nervous system. As we know, there is still a lot to be learned about human cognition, and exciting discoveries are being made every day. As we gain additional insights, we continue to expand our knowledgebase to attain a better and deeper understanding of the human nervous system.
Stay tuned for more neuroscience content to come!
We’re fascinated by artificial intelligence and machine learning, and Achim Zielesny’s second edition of From Curve Fitting to Machine Learning: An Illustrative Guide to Scientific Data Analysis and Computational Intelligence provides a great introduction to the increasingly necessary field of computational intelligence. This is an interactive and illustrative guide with all concepts and ideas outlined in a clear-cut manner, with graphically depicted plausibility arguments and a little elementary mathematics. Exploring topics such as two-dimensional curve fitting, multidimensional clustering and machine learning with neural networks or support vector machines, the subject-specific demonstrations are complemented with specific sections that address more fundamental questions like the relation between machine learning and human intelligence. Zielesny makes extensive use of Computational Intelligence Packages (CIP), a high-level function library developed with Mathematica’s programming language on top of Mathematica’s algorithms. Readers with programming skills may easily port or customize the provided code, so this book is particularly valuable to computer science students and scientific practitioners in industry and academia.
The Art of Programming in the Mathematica Software, third edition
Another gem for programmers and scientists who need to fine-tune and otherwise customize their Wolfram Language applications is the third edition of The Art of Programming in the Mathematica Software, by Victor Aladjev, Valery Boiko and Michael Shishakov. This text concentrates on procedural and functional programming. Experienced Wolfram Language programmers know the value of creating user tools. They can extend the most frequently used standard tools of the system and/or eliminate its shortcomings, complement new features, and much more. Scientists and data analysts can then conduct even the most sophisticated work efficiently using the Wolfram Language. Likewise, professional programmers can use these techniques to develop more valuable products for their clients/employers. Included is the MathToolBox package with more than 930 tools; their freeware license is attached to the book.
Introduction to Mathematica with Applications
For a more basic introduction to Mathematica, readers may turn to Marian Mureşan’s Introduction to Mathematica with Applications. First exploring the numerous features within Mathematica, the book continues with more complex material. Chapters include topics such as sorting algorithms, functions—both planar and solid—with many interesting examples and ordinary differential equations. Mureşan explores the advantages of using the Wolfram Language when dealing with the number pi and describes the power of Mathematica when working with optimal control problems. The target audience for this text includes researchers, professors and students—really anyone who needs a state-of-the art computational tool.
Geographical Models with Mathematica
The Wolfram Language’s powerful combination of extensive map data and computational agility is on display in André Dauphiné’s Geographical Models with Mathematica. This book gives a comprehensive overview of the types of models necessary for the development of new geographical knowledge, including stochastic models, models for data analysis, geostatistics, networks, dynamic systems, cellular automata and multi-agent systems, all discussed in their theoretical context. Dauphiné then provides over 65 programs that formalize these models, written in the Wolfram Language. He also includes case studies to help the reader apply these programs in their own work.
Our tour of new Wolfram Language books moves from terra firma to the stars in Geometric Optics: Theory and Design of Astronomical Optical Systems Using Mathematica. This book by Antonio Romano and Roberto Caveliere provides readers with the mathematical background needed to design many of the optical combinations that are used in astronomical telescopes and cameras. The results presented in the work were obtained through a different approach to third-order aberration theory as well as the extensive use of Mathematica. Replete with worked examples and exercises, Geometric Optics is an excellent reference for advanced graduate students, researchers and practitioners in applied mathematics, engineering, astronomy and astronomical optics. The work may be used as a supplementary textbook for graduate-level courses in astronomical optics, optical design, optical engineering, programming with Mathematica or geometric optics.
Don’t forget to check out Stephen Wolfram’s An Elementary Introduction to the Wolfram Language, now in its second edition. It is available in print, as an ebook and free on the web—as well as in Wolfram Programming Lab in the Wolfram Open Cloud. There’s also now a free online hands-on course based on the book. Read Stephen Wolfram’s recent blog post about machine learning for middle schoolers to learn more about the new edition.
Derivatives of functions play a fundamental role in calculus and its applications. In particular, they can be used to study the geometry of curves, solve optimization problems and formulate differential equations that provide mathematical models in areas such as physics, chemistry, biology and finance. The function D computes derivatives of various types in the Wolfram Language and is one of the most-used functions in the system. My aim in writing this post is to introduce you to the exciting new features for D in Version 11.1, starting with a brief history of derivatives.
The idea of a derivative was first used by Pierre de Fermat (1601–1665) and other seventeenth-century mathematicians to solve problems such as finding the tangent to a curve at a point. Given a curve y=f(x), such as the one pictured below, they regarded the tangent line at a point {x,f(x)} as the limiting position of the secant drawn to the point through a nearby point {x+h,f(x+h)}, as the “infinitesimal” quantity h tends to 0.
Their technique can be illustrated as follows.
The slope of a secant line joining {x,f(x)} and {x+h,f(x+h)} is given by DifferenceQuotient.
Now suppose that the function f(x) is defined as follows.
Then the slope of a secant line joining {x,f(x)} and {x+h,f(x+h)} is given as follows.
The mathematicians of the time then proceeded to find the slope of the tangent by setting h equal to 0.
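The post’s input cells are not reproduced here; they can be approximated like this, using a hypothetical f(x) = x² − x in place of the post’s example function:

```mathematica
(* a stand-in example function *)
f[x_] := x^2 - x;

(* the slope of the secant through {x, f[x]} and {x + h, f[x + h]} *)
slope = Simplify[DifferenceQuotient[f[x], {x, h}]]

(* the seventeenth-century trick: set h equal to 0 *)
slope /. h -> 0
```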
The following animation shows the tangent lines along the curve that are obtained by using the formula for the slope derived above.
The direct replacement of the infinitesimal quantity h by 0 works well for simple examples, but it requires considerable ingenuity to compute the limiting value of the difference quotient in more difficult examples. Indeed, Isaac Barrow (1630–1677) and others used geometrical methods to compute this limiting value for a variety of curves. On the other hand, the built-in Limit function in the Wolfram Language incorporates methods based on infinite series expansions and can be used for evaluating the required limits. For example, suppose that we wish to find the derivative of Sin. We first compute the difference quotient of the function.
Next, we note that setting h equal to 0 directly leads to an Indeterminate expression, as shown below. The Quiet function is used to suppress messages that warn about the indeterminacy.
Although the direct substitution method has failed, we can use Limit to arrive at the result that the derivative of Sin[x] is Cos[x].
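The three steps just described can be sketched as:

```mathematica
dq = DifferenceQuotient[Sin[x], {x, h}];

(* direct substitution fails; Quiet suppresses the indeterminacy warnings *)
Quiet[dq /. h -> 0]   (* Indeterminate *)

(* Limit succeeds where substitution fails *)
Limit[dq, h -> 0]     (* Cos[x] *)
```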
Continuing with the historical development, around 1670, Isaac Newton and Gottfried Wilhelm Leibniz “discovered” calculus in the sense that they introduced the general notions of derivative and integral, developed convenient notations for these two operations and established that they are inverses of each other. However, an air of mystery still surrounded the use of infinitesimal quantities in the works of these pioneers. In his 1734 essay The Analyst, Bishop Berkeley called infinitesimals the “ghosts of departed quantities”, and ridiculed the mathematicians of his time by saying that they were “men accustomed rather to compute, than to think.” Meanwhile, calculus continued to provide spectacularly successful models in physics, such as the wave equation for oscillatory motion. These successes spurred mathematicians on to search for a rigorous definition of derivatives using limits, which was finally achieved by Augustin-Louis Cauchy in 1823.
The work of Cauchy and later mathematicians, particularly Karl Weierstrass (1815–1897), laid to rest the controversy about the foundations of calculus. Mathematicians could now treat derivatives in a purely algebraic way without feeling concerned about the treacherous computation of limits. To be more precise, the calculus of derivatives could now be reduced to two sets of rules—one for computing derivatives of individual functions such as Sin, and another for finding derivatives of sums, products, compositions, etc. of these functions. It is this algebraic approach to derivatives that is implemented in D and allows us to directly calculate the derivative of Sin with a single line of input, as shown here.
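That single line of input is:

```mathematica
D[Sin[x], x]
(* Cos[x] *)
```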
Starting from the derivative of a function, one can compute derivatives of higher orders to gain further insight into the physical phenomenon described by the function. For example, suppose that the position s(t) of a particle moving along a straight line at time t is defined as follows.
Then, the velocity and the acceleration of the particle are given by its first and second derivatives, respectively. The higher derivatives too can be computed easily using D; they also have special names, which can be seen in the following computation.
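A sketch with a hypothetical position function (the post’s actual definition is not reproduced here):

```mathematica
(* a stand-in position function s(t) *)
s[t_] := t^3 - 6 t^2 + 9 t;

(* velocity, acceleration and jerk: the first three derivatives *)
{D[s[t], t], D[s[t], {t, 2}], D[s[t], {t, 3}]}
```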
Let us now return to our original example and compute the first four derivatives of Sin.
There is a clear pattern in the table, namely that each derivative may be obtained by adding a multiple of 𝜋/2 to x, as shown here.
In Version 11.1, D returns exactly this formula for the nth derivative of Sin.
An immediate application of the above closed form would be to compute higher-order derivatives of functions with blinding speed. D itself uses this method to compute the billionth derivative of Sin in a flash, using Version 11.1.
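Both computations are one-liners in Version 11.1:

```mathematica
(* the symbolic nth derivative: a multiple of π/2 added to x *)
D[Sin[x], {x, n}]
(* Sin[(n π)/2 + x] *)

(* the billionth derivative, using the closed form rather than iteration *)
D[Sin[x], {x, 10^9}]
(* Sin[x] *)
```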
The Wolfram Language has a rich variety of mathematical functions, starting from elementary functions such as Power to advanced special functions such as EllipticE. The nth derivatives for many of these functions can be computed in closed form using D in Version 11.1. The following table captures the beauty and complexity of these formulas, each of which encodes all the information required to compute higher derivatives of a given function.
Some of the entries in the table are rather simple. For example, the first entry states that all the derivatives of the exponential function are equal to the function itself, which generalizes the following result from basic calculus.
In sharp contrast to that, the nth derivative for ArcTan is given by a formidable expression involving HypergeometricPFQRegularized.
If we now give specific values to n in that formula, we obtain elementary answers from the first few derivatives.
These answers agree with the ones obtained if D is used separately for each derivative computation. The results are then simplified.
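The two computations being compared can be sketched as:

```mathematica
(* the symbolic nth derivative: a HypergeometricPFQRegularized expression *)
nth = D[ArcTan[x], {x, n}];

(* specializing n gives elementary answers... *)
Table[Simplify[nth /. n -> k], {k, 1, 3}]

(* ...which agree with differentiating directly, order by order *)
Table[Simplify[D[ArcTan[x], {x, k}]], {k, 1, 3}]
```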
The familiar sum, product and chain rules of calculus generalize very nicely to the case of nth derivatives. The sum rule is the easiest, and simply states that the nth derivative of a sum is the sum of the nth derivatives.
The product rule, or the so-called Leibniz rule, gives an answer that is essentially a binomial expansion, expressed as a sum wrapped in Inactive to prevent evaluation.
We can recover the product rule from a first course on derivatives simply by setting n=1 and applying Activate to evaluate the resulting inert expression.
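That recovery step can be sketched as:

```mathematica
(* the Leibniz rule: an Inactive[Sum] over Binomial[n, k] terms *)
rule = D[f[x] g[x], {x, n}]

(* set n = 1 and Activate to recover the first-order product rule *)
Activate[rule /. n -> 1]
```

which gives the familiar f′g + f g′.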
Finally, there is a form of the chain rule due to the Italian priest and mathematician Francesco Faà di Bruno (1825–1888). It is given by a rather messy expression in terms of BellY, and states that:
Once again, it is easy to recover the chain rule for first derivatives by setting n=1 as we did earlier.
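Faà di Bruno's formula can also be checked numerically. The sketch below (in Python, with the partial Bell polynomials computed from their standard recurrence rather than Wolfram's BellY) verifies the n = 2 case for f = exp and g(x) = x²:

```python
from math import comb, exp

def bell(n, k, x):
    """Partial Bell polynomial B_{n,k}(x[1], ..., x[n-k+1]), 1-indexed,
    computed with the standard recurrence."""
    if n == 0 and k == 0:
        return 1.0
    if n == 0 or k == 0:
        return 0.0
    return sum(comb(n - 1, i - 1) * x[i] * bell(n - i, k - 1, x)
               for i in range(1, n - k + 2))

def faa_di_bruno(df, dg, n):
    """nth derivative of f(g(x)), given df[k] = f^(k)(g(x)) and
    dg[i] = g^(i)(x), both as 1-indexed lists."""
    return sum(df[k] * bell(n, k, dg) for k in range(1, n + 1))

# Check n = 2 with f = exp and g(x) = x**2 at x = 1, where
# d^2/dx^2 exp(x**2) = (4x**2 + 2) exp(x**2) = 6e.
e = exp(1.0)
assert abs(faa_di_bruno([None, e, e], [None, 2.0, 2.0], 2) - 6 * e) < 1e-9
```

Setting n = 1 collapses the sum to the single term f′(g(x)) g′(x), which is the ordinary chain rule.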
The special functions in the Wolfram Language typically occur in families, with different members of each family labeled by integers or other parameters. For example, there is one function BesselJ[n,z] for each integer n. The first four members of this family are pictured below (the sinusoidal character of Bessel functions helps in the modeling of circular membranes).
It turns out that the derivatives of BesselJ[n,z] can be expressed in terms of other Bessel functions from the same family. While earlier versions did make some use of these relationships, Version 11.1 exploits them more fully to return compact answers for examples such as the following, which generated 2^10 = 1024 instances of BesselJ in earlier releases!
The functions considered so far are differentiable in the sense that they have derivatives for all values of the variable. The absolute value function provides a standard example of a non-differentiable function, since it does not have a derivative at the origin. Unfortunately, the built-in Abs function is defined for complex values, and hence does not have a derivative at any point. Version 11.1 overcomes this limitation by introducing RealAbs, which agrees with Abs for real values, as seen in the following plot.
This function has a derivative at every point except the origin; the derivative is given by:
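Away from the origin, that derivative is just the sign of x. A quick numeric check (Python, illustrative) compares it with a central finite difference:

```python
def real_abs_derivative(x):
    """Derivative of the real absolute value: x / |x|, undefined at 0."""
    if x == 0:
        raise ValueError("not differentiable at the origin")
    return 1.0 if x > 0 else -1.0

# Central finite differences agree with the closed form away from 0.
h = 1e-6
for x in (-2.0, -0.5, 0.5, 3.0):
    fd = (abs(x + h) - abs(x - h)) / (2 * h)
    assert abs(fd - real_abs_derivative(x)) < 1e-9
```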
The introduction of RealAbs is sure to be welcomed by users who have long requested such a function for use in differential equations and other applications.
This real absolute value function is continuous and only mildly non-differentiable, but in 1872, Karl Weierstrass stunned the mathematical world by introducing a fractal function that is continuous at every point but differentiable nowhere. Version 11.1 introduces several fractal curves of this type, which are named after their discoverers. Approximations for a few of these curves are pictured here.
Albert Einstein’s 1916 paper announcing the general theory of relativity provided a great impetus to the development of calculus. In this landmark paper, he made systematic use of the tensor calculus developed by Gregorio Ricci (1853–1925) and his student Tullio Levi-Civita (1873–1941) to formulate a theory of the gravitational field, which has now received superb confirmation through the detection of gravitational waves. The KroneckerDelta tensor, which derives its name from the Greek delta character δ that is used to represent it, plays a key role in tensor calculus.
The importance of KroneckerDelta lies in the fact that it allows us to “sift” a tensor and isolate individual terms from it with ease. In order to understand this idea, let us obtain the definition of this tensor by applying PiecewiseExpand to it.
From the above, we see that KroneckerDelta[i, j] is 1 if its arguments i and j are equal, and 0 otherwise. As a result, it allows us to sift through all the terms in the following sum and select, say, the third term f(3).
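The sifting property itself is a one-liner in any language. Here is an illustrative Python version, where f is just a hypothetical sample term:

```python
def kronecker_delta(i, j):
    """1 if i equals j, 0 otherwise."""
    return 1 if i == j else 0

def f(i):
    # hypothetical sample term, standing in for an arbitrary f(i)
    return i ** 2

# Multiplying by delta(i, 3) kills every term in the sum except i = 3.
n = 10
sifted = sum(kronecker_delta(i, 3) * f(i) for i in range(1, n + 1))
assert sifted == f(3)
```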
In Version 11.1, D makes use of this property of KroneckerDelta to differentiate finite sums with symbolic upper limits with respect to an indexed variable x(j), as illustrated here.
The last result expresses the fact that only the jth term in the derivative is nonzero, since none of the other terms depend on x(j), and hence their derivatives with respect to this variable are 0. For example, if we set n=5 and j=2, then the sum reduces to the single term f′(x(2)).
Along with the improvements in the functionality of D, Version 11.1 also includes a major documentation update for this important function. In particular, the reference page now includes many application examples of the types encountered in a typical college calculus course. These examples are based on a large collection of more than 5,000 textbook exercises that were solved by a group of talented interns using the Wolfram Language during the summer of 2016. Some of the graphics from these examples are shown here. You can click anywhere inside each of the following three graphics to view the corresponding examples in the online documentation.
D is a venerable function that has been available since Version 1.0 (1988). We hope that the enhancements for this function in Version 11.1 will make it even more appealing to users at all levels. Any comments or feedback about the new features are very welcome.
Exoplanets are currently an active area of research in astronomy. In the past few years, the number of exoplanet discoveries has exploded, mainly as the result of the Kepler mission to survey eclipsing exoplanet systems. But Kepler isn’t the only exoplanet study mission going on. For example, the TRAnsiting Planets and PlanetesImals Small Telescope (TRAPPIST) studies its own set of targets. In fact, the media recently focused on an exoplanet system orbiting an obscure star known as TRAPPIST-1. As an introduction to exoplanet systems, we’ll explore TRAPPIST-1 and its system of exoplanets using the Wolfram Language.
To familiarize yourself with the TRAPPIST-1 system, it helps to start with the host star itself, TRAPPIST-1. Imagine placing the Sun, TRAPPIST-1 and Jupiter alongside one another on a table. How would their sizes compare? The following provides a nice piece of eye candy that lets you see how small TRAPPIST-1 is compared to our Sun. It’s actually only a bit bigger than Jupiter.
Although the diameter looks to be about the same as Jupiter’s, its mass is quite different—actually about 80 times the mass of Jupiter.
And it has only about 8% of the Sun’s mass.
TRAPPIST-1 is thus a very low-mass star, at the very edge of the main sequence, but one that still sustains the usual hydrogen fusion in its core.
The exoplanets in this system are what actually gained all of the media attention. All of the exoplanets (blue orbits) found in the TRAPPIST-1 system so far orbit the star at distances that would be far inside the orbit of Mercury (in green), if they were in our solar system.
As a more quantitative approach to studying the planets in this system, it is useful to look at their orbital periods, which lie very close together. Planets in such close proximity can often perturb one another, which can result in planets being ejected from the system, unless orbital resonances ensure that the planets are never in the wrong place at the wrong time. It's easy to look up the orbital periods of the TRAPPIST-1 planets.
Divide them all by the orbital period of the first exoplanet to look for orbital resonances, as indicated by ratios close to rational fractions.
These show near resonances with the following ratios.
The orbital period of TRAPPIST-1 h is not yet accurately known, so it's not clear whether it participates in any resonances.
Similarly, nearest-neighbor orbital period ratios show resonances.
These ratios are close to:
An orbital resonance of 3/2 means that one of the planets orbits 3 times for every 2 of the other. Pluto and Neptune in our solar system exhibit a near 3:2 orbital resonance.
This can help explain how so many planets can be packed into such a tight space without experiencing disruptive perturbations.
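The resonance hunt can be mimicked in a few lines of generic code. The Python sketch below uses approximate published orbital periods for TRAPPIST-1 b through g (h is left out because of its uncertain period) and snaps each nearest-neighbor period ratio to the closest fraction with a small denominator:

```python
from fractions import Fraction

# Approximate published orbital periods (days) for TRAPPIST-1 b through g;
# h is omitted because its period was poorly constrained at the time.
periods = [1.511, 2.422, 4.050, 6.100, 9.207, 12.353]

# Snap each nearest-neighbor period ratio to the closest small fraction.
ratios = [Fraction(periods[i + 1] / periods[i]).limit_denominator(8)
          for i in range(len(periods) - 1)]
print([str(r) for r in ratios])  # ['8/5', '5/3', '3/2', '3/2', '4/3']
```

The recovered chain of near resonances (8:5, 5:3, 3:2, 3:2, 4:3) matches the ratios discussed above.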
What about the distances of the exoplanets from their host star? If you placed TRAPPIST-1 and its planets alongside Jupiter and its four Galilean moons, how would they compare? The star and Jupiter are similar in size. The exoplanets are a bit larger than the moons (which are hard to see here) and they orbit a bit farther away, but the overall scales are of similar magnitude. In the following graphic, all distances are to scale, but we magnified the actual diameters of the planets and moons to make them easier to see.
The sizes of the planets can be compared to Jupiter’s four biggest moons for additional scale comparisons.
At the time of this writing, we have curated over 3,400 confirmed exoplanets:
Most of the confirmed exoplanets have been discovered since 2014, during missions such as Kepler.
You can query for data on individual exoplanets.
In addition, there are various classifications of exoplanets that can be queried.
You can perform an analysis to see when the exoplanets were discovered.
You can also do a systematic comparison of exoplanet parameters, which we limit here to the radius and density. We are only considering the entity class of super-Earths here. The red circle marks the approximate location of the TRAPPIST-1 system in this plot.
Here is another example of systematic comparison of exoplanet parameters by discovery method, indicated by color coding. Once again, the TRAPPIST-1 system is shown, with red dots at its mean values.
In addition to data specific to exoplanets, the Wolfram Language also includes data on chemical compounds present in planetary atmospheres.
You can use this data to show, for example, how the density of various atmospheric components changes with temperature.
As a more concrete application, the Wolfram Language also provides the tools needed to explore collections of raw data. For example, here we can import irregularly sampled stellar light-curve data directly from the NASA Exoplanet Archive for the HAT-P-7 exoplanet system.
Then we can remove data points that are not computable.
This raw data can be easily visualized, as shown here.
The following subsamples the data and does some additional post-processing of both the times and magnitudes.
Plotting the data shows evidence of eclipses, appearing as a smattering of points below the dense band of data.
A Fourier-like algorithm can identify periodicities in this data.
Zoom into the fundamental frequency, at higher resolution.
We find that the fundamental peak occurs at a frequency of 0.453571 cycles/day, whose reciprocal gives an estimate of the corresponding orbital period in days.
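For readers who want to see the mechanics behind such a periodogram, here is a minimal Python sketch. It uses synthetic irregularly sampled data with a made-up 2.2-day period (a stand-in, not the actual HAT-P-7 measurements) and scans a grid of trial frequencies for the one with the most spectral power:

```python
import math
import random

# Synthetic irregularly sampled light curve with an assumed 2.2-day period.
random.seed(1)
true_period = 2.2
times = sorted(random.uniform(0, 50) for _ in range(400))
values = [math.sin(2 * math.pi * t / true_period) + random.gauss(0, 0.2)
          for t in times]

def power(freq):
    """Spectral power |sum y * exp(-2*pi*i*f*t)|^2 at a trial frequency
    (cycles/day); unlike an FFT, this works for irregular sampling."""
    re = sum(y * math.cos(2 * math.pi * freq * t) for t, y in zip(times, values))
    im = sum(y * math.sin(2 * math.pi * freq * t) for t, y in zip(times, values))
    return re * re + im * im

# Scan a frequency grid and take the reciprocal of the peak.
freqs = [0.05 + 0.001 * k for k in range(900)]
best = max(freqs, key=power)
print(1 / best)  # recovered period in days, close to the assumed 2.2
```

Production tools use refinements such as the Lomb-Scargle periodogram, but the core idea is the same frequency scan shown here.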
With some additional processing, we can apply a minimum string length (MSL) algorithm to the raw data to look for periodicities.
We can apply the MSL algorithm to a range of potential periods to try to find a value that minimizes the distance between neighboring points when the data is phase folded.
Clearly, the minimum string length occurs at about 2.20474 days, in close agreement with the Fourier-based estimate above.
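The core of the minimum string length idea fits in a few lines. This Python sketch (again on synthetic data with an assumed period, not the real light curve) folds the data at each trial period and keeps the period that makes the folded curve shortest:

```python
import math
import random

# Synthetic test data with an assumed period (illustrative only).
random.seed(2)
true_period = 2.2047
times = sorted(random.uniform(0, 30) for _ in range(300))
values = [math.sin(2 * math.pi * t / true_period) + random.gauss(0, 0.1)
          for t in times]

def string_length(period):
    """Fold the data at a trial period and total the distance between
    neighboring points; the true period gives the smoothest, and
    therefore shortest, folded curve."""
    folded = sorted(((t % period) / period, y) for t, y in zip(times, values))
    return sum(math.hypot(p2 - p1, y2 - y1)
               for (p1, y1), (p2, y2) in zip(folded, folded[1:]))

trials = [1.5 + 0.0025 * k for k in range(601)]  # trial periods, 1.5-3.0 days
best = min(trials, key=string_length)
print(best)  # close to the assumed 2.2047 days
```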
We can also validate this derived value with the value stored in the header of the original data.
This orbital period corresponds to that of exoplanet HAT-P-7b, as can be seen in the Wolfram curated data collection (complete with units and precision).
From the known orbital period, we can phase fold the original dataset, overlapping the separate eclipses, to obtain a more complete picture of the exoplanet eclipse.
Noise can be reduced by carrying out a phase-binning technique. All data points are placed into bins of width 0.0005 days, and the mean of the values in each bin is determined.
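Phase folding and binning are themselves straightforward. Here is an illustrative Python version operating on synthetic data; the period and bin width below are stand-ins, not the values used for HAT-P-7:

```python
import math
import random
from collections import defaultdict

def phase_fold_and_bin(times, values, period, bin_width):
    """Fold times by a known period, then average the values inside each
    phase bin to suppress noise (phase binning)."""
    bins = defaultdict(list)
    for t, y in zip(times, values):
        phase = (t % period) / period            # phase in [0, 1)
        bins[int(phase / bin_width)].append(y)
    return {b * bin_width: sum(ys) / len(ys) for b, ys in sorted(bins.items())}

# Toy data: noisy repeats of one periodic signal collapse onto a single
# clean curve after folding and binning.
random.seed(3)
period = 2.2
times = [random.uniform(0, 22) for _ in range(2000)]
values = [math.cos(2 * math.pi * t / period) + random.gauss(0, 0.3)
          for t in times]
binned = phase_fold_and_bin(times, values, period, bin_width=0.05)
# Near phase 0 the binned mean should sit close to cos(0) = 1.
```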
This graphic, mainly for purposes of visualization, shows the host star, HAT-P-7, with its exoplanet HAT-P-7b orbiting it. All parameters, including diameter and orbital radius, are to scale. The idea is to try to reproduce the brightness variations seen in the observed light curve. For this graphic, the GrayLevel of the exoplanet is set to GrayLevel[1], which enables you to more clearly see the exoplanet go through phases as it orbits the host star.
Now we can do an analogous thing, generating a list of frames instead of a static graphic. In this case, the GrayLevel of the exoplanet is much reduced compared to the graphic above. For purposes of illustration and to reduce computation time, a small set of values has been chosen around the primary eclipse.
Now, to measure how the brightness of the scene changes, we can use image processing to total all of the pixel values. The scene is rasterized at a large image size to minimize edge artifacts, which can otherwise have measurable effects on the resulting light curve; as a result, this code takes a minute or so to run.
Next, we rescale all of the pixel counts to fit in the same vertical range as the observed light curve.
Now compare the model data to the actual data. The red points show the model data computed at a few orbital phases around the primary eclipse.
A more detailed model light curve can be constructed if you increase the computation time. The version above was done for speed. Of course, additional secondary effects can be included, such as the possibility of gravity darkening and other effects that cause brightness variations across the face of the star. Such secondary effects are beyond the scope of this blog.
Other star systems can be far more complicated and provide their own unique challenges to understanding their dynamical behavior. The Wolfram Language provides a powerful tool that allows you to explore the subtleties of stellar light curve analysis as well as the periodicities in irregularly sampled data. It would be interesting to see some of these more complicated systems tackled in similar ways to what we’ve done in this blog.
Calling all command-line junkies: the new WolframScript is here!
Now you can evaluate Wolfram Language code, call deployed APIs and execute standalone scripts directly from your favorite command-line interface. WolframScript works like any other command-line utility, enabling flexible connections between the Wolfram System and other programs and I/O.
WolframScript comes packaged with Version 11.1 of Mathematica; on Mac, you must run the Extras installer bundled with the Wolfram System. You can also download and install a standalone version from the WolframScript home page.
Once installed, the wolframscript executable can be found in the same folder as your desktop application, and it is added to the PATH so you can call it directly from any command-line interface.
When executed with no options, wolframscript opens a Wolfram Language console interpreter. This interactive shell (sometimes referred to as a REPL or read–eval–print loop) is a convenient way to write and run Wolfram Language code without launching the desktop front end. It also provides an alternative interface for headless servers or embedded computers (for example, a Raspberry Pi).
When running wolframscript in this way, you can simply enter a line of code and press Enter to see the result. Once you’re finished, use Quit to terminate the interactive session.
To run a single line of code without launching the interactive shell, use the -code option. Commands entered this way are evaluated immediately by the Wolfram Engine, with the result sent to standard output. When evaluation is complete, the Wolfram kernel is terminated. This is convenient for single-use applications, like viewing the contents of a text file using Import. (In some cases you’ll need to escape inner double quotes with the \ character.)
You can also use redirection to supply a file as input through standard input. This incoming data is represented within a script by $ScriptInputString. Adding -linewise uses the standard NewLine character as a delimiter, treating each line of text as a separate input value.
For more structured scripting, you can indicate a pure function using the -function option and pass in arguments with -args. By default, arguments are interpreted as strings.
With the -signature option, you can specify how arguments should be parsed in each function slot, including any format available to Interpreter—from basic numeric and string types to entities, quantities and many import/export formats. (Keep in mind that some high-level interpreter types require a connection to the Wolfram Cloud.)
If you don’t have a local installation of Mathematica, you can run wolframscript in the cloud. Adding the -cloud option to the end of your command sends the computation to an available cloud kernel. You’ll be asked to authenticate the first time you run something in the cloud.
The -cloud option uses a public kernel on the Wolfram Cloud by default. If you’re connected to Wolfram Enterprise Private Cloud, you can specify a different cloud base by passing its URL (e.g. https://privatecloud.mycompany.com) as an argument directly after -cloud.
You can open and close these connections manually using -auth and -disconnect. Each cloud requires separate authentication, and connection data is stored for use during your session. Cloud authentication is only necessary for sending dedicated computations; it doesn’t affect Wolfram Knowledgebase access.
Code from Wolfram Language packages (.wl, .m) can be executed through wolframscript using the -file option. This evaluates each successive line of code in the file, terminating the kernel when finished.
Unlike with interactive scripting, results from -file are not displayed by default unless enclosed in Print, Write or some similar output function. Using the -print option sends the result of the final computation to standard output, and -print all shows intermediate results as well.
You can also call deployed APIs with the -api option. The following API (generated using APIFunction and CloudDeploy) returns a forecast of high temperatures for the next week in a given city. To call the API with wolframscript, you can reference it by URL or by UUID (the last part of the URL). Parameters are passed in by name; in this case, -args is optional.
By default, wolframscript gives a low-level text representation of the result. You can select the type of output you want, including any format understood by Export, using the -format option. For instance, some output may be easier to read in a table format.
When working with non-textual formats (e.g. spreadsheets, audio, video, graphics), it’s often best to write output directly to a file; you can do this using redirection.
Wolfram Language scripts (.wls) are standalone files containing Wolfram Language code that can be executed like any other application. Structurally, scripts are just packages that are launched as programs rather than notebooks by default. You can create a script from Mathematica with the File > New > Script menu item and execute it by typing its name in the command line (prepending ./ on Linux and Mac systems) or by double-clicking its icon in a file explorer.
The shebang line (starting with #!) tells the Unix environment to check the PATH for the wolframscript executable. On Unix-based systems, you can add launch options to this line by opening the script in a text editor. For instance, if you wanted to implement the travel distance function above as a standalone script, you would include the -function and -signature options in this line. (As of this writing, these options are bypassed when running scripts in Windows, but the goal is to eventually have all platforms work the same.)
To access command-line arguments, use $ScriptCommandLine within your script. Arguments are stored as a list of strings, starting with the full path of the script. In most cases, you’ll want to discard that initial value using Rest.
You may need to convert arguments to the correct data type for computations; this can be done using ToExpression. This script also checks for arguments first, printing a message if none are found.
Redirection works both ways when executing scripts, allowing for advanced applications such as the following image processing example. To maintain formatting for non-textual output, use -format when writing to a file.
You can even launch external programs directly from your script. The following will take a fundamental frequency, generate a bell sound using harmonics, export it to a temporary file and play it in your system’s default audio player.
WolframScript makes it easy to access Wolfram kernels from familiar, low-level interfaces for more flexible and universal computations. And with its cloud connectivity, you can access the Wolfram Language even from machines with no Wolfram System installed.
All the scripts demonstrated here are available for direct download as .wls files. You can execute them directly, change code and launch options in a text editor, or open them in Mathematica for standard notebook features like interactive execution, code highlighting and function completion.
For even more ideas, take a look at the WolframScript documentation and our tutorial on writing scripts. These examples barely scratch the surface—with the full functionality of the Wolfram Language available, the possibilities are endless.
So what are you waiting for? Let’s get scripting!
This year, we’re bringing the European Wolfram Technology Conference to Amsterdam! Join us June 19–20 for two days of expert talks showcasing the latest releases in Wolfram technologies, in-depth explorations of key features and practical use cases for integrating Wolfram technologies in your ecosystem.
Catering to both new and existing users, the conference provides an overview of the entire Wolfram technology stack while also exploring some of our new products and features, including Wolfram|One, the Wolfram Data Repository and the latest capabilities released in Mathematica 11.1!
With a conference dinner rounding out the first day, this is a great opportunity for attendees not only to meet those who develop Wolfram technologies but also to connect with our thriving community of like-minded users.
Session highlights will include keynotes from Conrad Wolfram and Jon McLoone, plus talks from a range of Wolfram experts and users from around the world, giving you the inside track on the future direction of computational technology.
Key topics will include:
To join us in Amsterdam, register now!
Differential Equations with Mathematica, Fourth Edition
The fourth edition of Differential Equations with Mathematica is a supplementary reference that uses the fundamental concepts of Mathematica to solve (analytically, numerically and/or graphically) differential equations of interest to students, instructors and scientists. Authors Martha L. Abell and James P. Braselton include instruction on basic methods and algorithms. They cover the Mathematica functions relevant to differential equations, along with related concepts from calculus and linear algebra. This book contains many helpful illustrations that make use of Mathematica’s visualization capabilities.
Solution Techniques for Elementary Partial Differential Equations, Third Edition
Christian Constanda teaches students to solve partial differential equations through concise, easily understood explanations and worked examples that allow students to see the techniques in action. The third edition includes new sections on series expansions of more general functions, other problems of general second-order linear equations, vibrating string with other types of boundary conditions and equilibrium temperature in an infinite strip. It also includes new and improved exercises with a brief Mathematica program for nearly all of the worked examples, teaching students how to verify their results with a computer.
Differential Equations & Linear Algebra, Fourth Edition
Authors C. Henry Edwards, David E. Penney and David Calvis provide updated and improved figures, examples, problems and applications. With real-world applications and a blend of algebraic and geometric approaches, Differential Equations & Linear Algebra introduces students to mathematical modeling of real-world phenomena and offers an array of problem sets. Alongside this fourth edition, an expanded applications website is now available that includes programming tools from Mathematica and Wolfram|Alpha.
Exploring Calculus: Labs and Projects with Mathematica
Authors Crista Arangala and Karen A. Yokley created a hands-on lab manual that can be used in class every day to guide the exploration of the theory and applications of differential and integral calculus. Each lab consists of an explanation of material with integrated exercises. The exercise sections combine problems, technology, Mathematica visualization and the Computable Document Format (CDF) to help students discover the theory and applications of differential and integral calculus in a meaningful and memorable way.
Calculus and Differential Equations with Mathematica
In this book, Pramote Dechaumphai offers a clear and easy-to-understand presentation of how to use Mathematica to solve calculus and differential equation problems. It contains essential topics that are taught in calculus and differential equation courses, including differentiation, integration, ordinary differential equations and Laplace and Fourier transforms, as well as special functions normally encountered in solving science and engineering problems. Numerical methods are employed when exact solutions are not available. Additionally, the finite element method in Mathematica is used to analyze partial differential equations for problems with complex geometry; these equations may be elliptic, parabolic or hyperbolic. Many examples are presented with detailed derivations of their solutions before Mathematica is used to confirm the results.
Geometry, Language and Strategy Vol. 2: The Dynamics of Decision Processes
The first volume, Geometry, Language and Strategy, extended the concepts of game theory, replacing static equilibrium with a deterministic dynamic theory. It also opened up many applications that were only briefly touched on. To study the consequences of the deterministic approach and the extent of these applications in contrast to standard Bayesian approaches requires an engineering foundation and discipline, which this volume supplies. It provides a richer list of applications, such as the prisoner’s dilemma, expanding the relevance of volume 1 to more general time-dependent and transient behaviors.
Mathematica for Mathematics, Physics and Engineers
Mehrzad Ghorbani expands on an earlier work, Applied Mathematical Softwares: Mathematica, developed over the course of more than 10 years of teaching mathematics software and Mathematica code in Iranian universities. This new title includes more elegant and basic mathematical problems from a range of specializations including calculus, number theory, numerical analysis, vector and matrix algebra, complex variables, graph theory, engineering mathematics and mathematical physics. Although applicable to undergraduate and graduate studies in math and science, Ghorbani’s book is additionally relevant to those who use Mathematica in computational scientific branches that need symbolic or numerical code.
Ever since the partnership between the Raspberry Pi Foundation and Wolfram Research began, people have been excited to discover—and are often surprised by—the power and ease of using the Wolfram Language on a Raspberry Pi. The Wolfram Language’s utility is expanded even more with the addition of the Sense HAT, a module that gives the Raspberry Pi access to an LED array and a collection of environmental and movement sensors. This gives users the ability to read in data from the physical world and display or manipulate it in the Wolfram Language with simple, one-line functions. With the release of Mathematica 11, I’ve been working hard to refine functions that connect to the Sense HAT, allowing Mathematica to communicate directly with the device.
The Sense HAT functionality is built on Wolfram’s Device Driver Framework, so connecting to the device is incredibly simple. To start, use the DeviceOpen function to establish a connection. This returns a DeviceObject, which we will use later to tell Mathematica which device we want to read from or write to.
In the case of the Sense HAT, there are three onboard sensors that Mathematica can read from. Accessing the data from these sensors is as easy as calling DeviceRead with the name of the measurement wanted. For instance:
There are a total of seven measurements that can be read from the Sense HAT: temperature, humidity, air pressure, acceleration, rotation, magnetic field and orientation. All readings are returned with appropriate units, making it easy to convert the values to other formats if necessary.
The other physical component of the Sense HAT is the 8-by-8 LED array. Similar to reading data with DeviceRead, it is only a matter of calling the DeviceWrite function to send either an image or a string to the array. For strings, the text scrolls across the device sideways. You can manipulate the speed and color of the scrolling text with relevant options as well.
Alternatively, the Sense HAT can receive an 8-by-8 list of RGB values to be displayed on the LED array. Using this method, it’s possible to display small images on the screen of the Sense HAT.
Here is a picture of what this looks like when written to a Sense HAT:
Using these functions, you can write Mathematica programs that process the data received from the sensors on the Sense HAT. For example, here is a demo I ran at the Wolfram Technology Conference in October 2016. It reads the temperature, humidity and air pressure around the Pi every five minutes and pushes that data to the Wolfram Data Drop.
The above function generates a new databin to record data to, but what does that data look like once it’s been recorded? Let’s look at the recordings I made at the aforementioned Wolfram Technology Conference.
Anyone can download that data into Mathematica at any time after the conference and use DateListPlot to show the changes in atmospheric conditions over its course. Below, you can see the rise in air pressure inside the conference center as more people gathered to see the many demos Wolfram employees had set up, followed by a drop as the conference ended.
Another demo I ran at the Wolfram Tech Conference made use of DeviceWrite. Using the Wolfram Language’s financial database, I turned the Sense HAT into a miniature stock ticker. This demo downloads the current stock market data from Wolfram’s servers, then picks a random stock from the list and shows its name and price on the Sense HAT’s LED array.
The final demo that was run at the Wolfram Tech Conference this year used the Sense HAT’s LED array to run Conway’s Game of Life, a famous cellular automaton. For those unfamiliar with the “Game,” imagine each lit LED is a cell in a Petri dish. If a cell has too few or too many neighbors, it dies out. If an empty space has exactly three living neighbors, a new cell appears there. When these rules have been applied to all of the spaces on the grid, a new “generation” begins and the rules are reapplied. This pattern can continue indefinitely, given the right conditions. In my demo, a random array of lit and unlit LEDs constitutes the starting pattern; then the automaton runs for a given number of iterations.
The rounds, pause and color parameters can all be modified to change how the automaton is displayed and how long Mathematica waits before displaying the next iteration.
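The update rule is simple enough to sketch in any language. Here is an illustrative Python version for a fixed board, with off-board cells treated as dead (as on the 8-by-8 LED array), demonstrated on the classic "blinker" oscillator:

```python
def life_step(grid):
    """One generation of Conway's Game of Life on a fixed-size board;
    cells beyond the edge are treated as dead, as on the 8x8 LED array."""
    n = len(grid)
    def live_neighbors(r, c):
        return sum(grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)
                   and 0 <= r + dr < n and 0 <= c + dc < n)
    return [[1 if live_neighbors(r, c) == 3
             or (grid[r][c] == 1 and live_neighbors(r, c) == 2)
             else 0
             for c in range(n)]
            for r in range(n)]

# The "blinker": three cells in a row oscillate with period 2,
# so two steps bring the board back to its starting state.
board = [[0] * 8 for _ in range(8)]
for c in (2, 3, 4):
    board[3][c] = 1
assert life_step(life_step(board)) == board
```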
These demos give a taste of what is possible when Mathematica connects with the Sense HAT. Have a look yourself at the Sense HAT documentation page, and send a note to Wolfram Community if you come up with something interesting!