However, I always find myself fascinated by this question. I like to think about the events leading up to a situation and what sorts of unseen mechanisms might be at work. I interpret the question as a challenge, an exciting topic worthy of discussion. In some cases the odds may seem incalculable—and I’ll admit it’s not always easy. However, a quick investigation of the surrounding mathematics can give you a lot of insight. Hopefully after reading this post, you’ll have a better answer the next time someone asks, “What *are* the odds?”

But before we go more in depth about odds, it’s important to touch on the more general topic of probability.

Probability theory incorporates many ideas on how random events play out, including many functions (distributions) that describe such phenomena mathematically. Although the subject can get quite complex, the basic strategy for estimating an event’s probability involves counting up the possible outcomes and determining all the ways each outcome can occur.

In the simple example of flipping a coin, this process is almost trivial. As long as the coin is “fair” (i.e. not altered in a way that favors a particular result), it will come up either heads or tails each time, and each outcome can only occur one way. You could easily create a `coinFlip` function to model this behavior:
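The post's original `coinFlip` was written in the Wolfram Language (the code blocks are not preserved in this text version); a minimal equivalent sketch in Python:

```python
import random

def coin_flip():
    """Return 'H' or 'T' with equal probability, modeling one fair flip."""
    return random.choice(["H", "T"])

print(coin_flip())
```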

Now imagine this situation with two coins. Each coin still has the same equal chance of coming up heads or tails. There are now four equally likely outcomes:
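A sketch of the enumeration (again in Python, standing in for the original Wolfram Language code):

```python
from itertools import product

# Enumerate every equally likely result of two fair coin flips.
outcomes = ["".join(pair) for pair in product("HT", repeat=2)]
print(outcomes)  # ['HH', 'HT', 'TH', 'TT']
```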

Each result has a probability of 1/4, or 25%. But if we treat the two mixed cases (TH and HT) as equivalent (i.e. one head and one tail), our interpretation changes. Now we have a 1/4 chance of getting both heads (or both tails), and a 2/4 = 1/2 chance of getting one head and one tail:

In either case—and, indeed, in all cases—the sum of these probabilities is 1. Also note that the probability of getting two heads in a row is the same as the product of the probabilities of getting a head for each toss: 1/2 × 1/2 = 1/4. When predicting a specific series of events (in this case, a head followed by another head), the probabilities are multiplied. This means that as the number of successive predictions grows, the probability of being correct diminishes. This seems sensible, since nobody can be right *all* the time:
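The multiplication rule is easy to see with exact fractions; a Python sketch in place of the original Wolfram Language computation:

```python
from fractions import Fraction

p_head = Fraction(1, 2)

# Probability of correctly calling n flips in a row: the individual
# probabilities multiply, so the chance halves with every extra prediction.
for n in range(1, 5):
    print(n, p_head ** n)
```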

It’s common to interpret probability in terms of chances. A probability of 1/4 can be read as a 1 in 4 chance of an event happening. Indeed, many people will quote this type of figure as “the odds of” an event happening, though this is technically incorrect. Here are the chances of drawing any of the possible hands in five-card draw poker:
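The original table came from Wolfram Language code that isn't preserved here, but any entry can be recomputed with standard combinatorics; for example, the chance of four of a kind, sketched in Python:

```python
from math import comb

total_hands = comb(52, 5)            # 2,598,960 distinct five-card hands
four_of_a_kind = 13 * 48             # pick the quad rank, then any fifth card
print(total_hands)                   # 2598960
print(four_of_a_kind)                # 624 hands, about 1 in 4,165
```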

Odds are a slightly different representation of probability often used in games and wagering situations. Whereas chances are expressed as a ratio of the number of ways an event will occur versus *all* possible outcomes, odds are expressed as a ratio of the number of ways an event *will* occur versus the number of ways it *won’t* occur—a comparison of wins to losses. This may either be formulated as odds in favor (wins/losses) or odds against (losses/wins):
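The chance-to-odds conversion is mechanical enough to sketch in a few lines (Python here; the helper name is my own, not from the post):

```python
from fractions import Fraction

def odds_in_favor(p):
    """Convert a probability p to odds in favor, returned as (wins, losses)."""
    p = Fraction(p)
    return (p.numerator, p.denominator - p.numerator)

print(odds_in_favor(Fraction(1, 2)))  # (1, 1)
print(odds_in_favor(Fraction(1, 4)))  # (1, 3)
```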

In the single coin flip example, the odds in favor of landing on heads are 1 to 1—either it *will* turn up heads (the first 1) or it *won’t* (the second 1). In a double coin flip, your odds against getting two heads (HH in the chart above) are 3 to 1:

Since both are expressed as ratios of simple numbers, chances and odds notation can be especially useful for non-mathematical types. For instance, the chances of winning the recent $1.6B Powerball jackpot were stated as 1 in 292,201,338. In other words, the odds against winning are 292,201,337 to 1. These odds are based on the process for the drawing—choose 5 of the 69 white balls and 1 of the 26 Powerballs. Here `Binomial[n,k]` is the Wolfram Language function for a combination (the number of ways to choose *k* items out of *n* available choices), and the number of ways to choose the 5 white balls is multiplied by the number of possible Powerballs:
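The original computation used the Wolfram Language's `Binomial`; Python's `math.comb` is its direct counterpart:

```python
from math import comb

# comb(n, k) is Python's counterpart of the Wolfram Language's Binomial[n, k].
white = comb(69, 5)   # ways to choose 5 of the 69 white balls
red = comb(26, 1)     # ways to choose the Powerball
print(white * red)    # 292201338
```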

Since only one of the possibilities (out of 292,201,338 total) results in a win, the total odds against are 292,201,337 to 1. To put this in perspective, the odds of being dealt a royal flush (the rarest hand in poker) are much better. There are only four possible ways to get this hand, out of the ~2.6 million unique hands that can be drawn:
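The royal flush figure follows the same pattern (Python sketch in place of the original Wolfram Language code):

```python
from math import comb

hands = comb(52, 5)       # every distinct five-card hand
royal_flushes = 4         # one per suit
print(hands)                                      # 2598960
print((hands - royal_flushes) // royal_flushes)   # 649739, i.e. 649,739 to 1 against
```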

Clearly, having nearly 300 million ways to lose for every way to win is less appealing than having about 650 thousand. With such large numbers it’s easy to see why odds and chances are often interchanged—the ratios are almost exactly the same, because almost all of the possibilities are considered losses. In either case, a small proportion of wins means a small chance of winning. Simple enough, right?

In many cases where one might run into this “odds” notation, it’s not exactly clear where the numbers are coming from. What are the odds of a particular team winning the Super Bowl? While it can be easy to find quotes for such figures (FiveThirtyEight currently has Carolina at 3 to 2 against Denver), it’s much more difficult to figure out how they came to be.

This brings us to another important subject: statistics. It turns out that while random events are difficult to predict, long-term trends and patterns emerge over time that can help shape our predictions. Statistics is the discipline of collecting and organizing data—usually in large quantities—in order to create a model.

Team and player ratings in most sports are based on historical statistics such as batting average, passing yards, goals blocked, etc. Since sports (and many other competitive activities such as board games) are a precarious mashup of chance and skill, trends are often difficult to pin down through casual observation. This is the reason that betting situations (most recently daily fantasy sports) are often swept by folks with an understanding of both probability and statistics.

But for now, let’s get back to our coin. It seems intuitive that any series of coin flips ought to come out (about) half heads, half tails. To test this assumption, we can simulate a series of coin flips, tallying up the results:
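The simulation in the post was Wolfram Language; an equivalent tally in Python (the seed is my addition, to make the run reproducible):

```python
import random
from collections import Counter

random.seed(2016)  # fix the seed so the tally is reproducible
tally = Counter(random.choice("HT") for _ in range(10_000))
print(tally["H"], tally["T"])
```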

Results vary, but usually it’s pretty close. Why is this?

Mathematically, you can think of the coin flip as a random choice between 0 (tails) and 1 (heads), with the expectation that you’ll get each value about half the time. This puts the *expected value* of a single coin flip at 1/2, or 0.5.

Let’s set up a function using this model, along with a way to visualize the results of those trials:

Looking at a series of ten or twenty flips might not be sufficient; each flip is independent, so it’s quite possible to come up mostly heads or mostly tails:

It is, as they say, anyone’s guess. However, after flipping the coin hundreds or thousands of times, you’ll start to notice the data converging on a pattern:
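The plots themselves aren't reproduced here, but the running average behind them is easy to sketch in Python:

```python
import random

random.seed(7)
flips = [random.randint(0, 1) for _ in range(5_000)]  # 1 = heads, 0 = tails

# Running average of the flips so far; by the law of large numbers
# it should settle toward the expected value 0.5.
running, total = [], 0
for i, flip in enumerate(flips, start=1):
    total += flip
    running.append(total / i)

print(running[9], running[999], running[-1])
```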

In this series, you see what is known as the law of large numbers, which states that the average value of repeated trials in a particular system (the solid blue line) should tend toward the expected value of a single trial (the dotted red line). In this case, you should get an average of around 0.5—a trend clearly shown by the graphs.

The takeaway here is that the system’s long-term behavior is much more important than any small-scale effects. Also, regardless of any short-term trends, the probability of the next flip coming up heads is still just 50%. The mistaken belief that a period of frequent “heads” results will increase the chances of flipping “tails” the next time is known as the gambler’s fallacy.

Sports fans may be familiar with a related pitfall: when an athlete has a “hot streak” during a particular match, many fans are inclined to believe the streak will continue (as though by some sort of cosmic momentum). Most often, this is not the case, and it can be detrimental to one trying to predict an outcome. This is in part because the prediction of multiple events must also take into account whether the events are related or, in mathematical terms, whether the variables are *independent*.

In our coin example, it’s clear that the outcome of one flip should not affect the next. In more complex cases, it may not be easy to tell.

Consider actuarial science, for example. The insurance industry bases its entire model around assessment of the risks involved in various activities. They sift through a lot of data to find long-term patterns in risky behavior. It’s an actuary’s job to know that you’re more likely to be struck by lightning than to die in a plane crash (though, fortunately, we have Wolfram|Alpha for that). As morbid as this field may seem, it increases our collective understanding of risk. Indeed, these risk assessment models can often shape government policies.

Of course, statistics don’t have to be too serious. You can try to estimate the chances of really fun stuff like alien life in our galaxy. Or calculate the odds of at least two people in the room sharing a birthday (a favorite at parties). The possibilities are endless, so the odds are pretty low that you’ll run out of ideas anytime soon.
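The birthday question makes a nice closing computation; a quick Python sketch of the standard argument (multiply the chances that each successive person misses every birthday so far):

```python
from math import prod

def p_shared_birthday(n):
    """Chance that at least two of n people share a birthday (365-day year)."""
    all_distinct = prod((365 - k) / 365 for k in range(n))
    return 1 - all_distinct

print(round(p_shared_birthday(23), 3))  # 0.507, better than even with just 23 people
```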

Download this post as a Computable Document Format (CDF) file.

The Koch snowflake (shown below) is a popular mathematical curve and one of the earliest fractal curves to have been described. It’s easy to understand because you can construct it by starting with a regular hexagon, removing the inner third of each side, building an equilateral triangle at the location where the side was removed, and then repeating the process indefinitely:

If you isolate the hexagon’s lower side in the process above you’ll get the Koch curve, described in a 1904 paper by Helge von Koch (1870–1924). It has a long history that goes back way before the age of computer graphics. See, for example, this handmade drawing by the French mathematician Paul Lévy (1886–1971):

Lévy’s work “Plane or Space Curves and Surfaces Consisting of Parts Similar to the Whole” was presented to the Société Mathématique de France on February 22, 1938. But as stated in one of his notes, it was in November 1902 that Lévy first encountered the curve—before von Koch had described it. He defined the curve himself, having set out to prove the existence of curves without derivatives. It’s also interesting to note that Benoit Mandelbrot (1924–2010), who coined and popularized the term fractal, was one of Paul Lévy’s best students.
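The construction described above (remove each segment's middle third and erect an equilateral bump) can be sketched directly. The post's own code is Wolfram Language; here is a Python equivalent operating on a single side, with endpoints as complex numbers:

```python
def koch_step(segments):
    """Replace each segment (a, b) with the four Koch segments:
    keep the outer thirds and erect an equilateral bump on the middle."""
    out = []
    for a, b in segments:
        d = (b - a) / 3
        p1, p3 = a + d, a + 2 * d
        p2 = p1 + d * complex(0.5, 3 ** 0.5 / 2)  # d rotated by 60 degrees
        out += [(a, p1), (p1, p2), (p2, p3), (p3, b)]
    return out

curve = [(0j, 1 + 0j)]
for _ in range(4):
    curve = koch_step(curve)

print(len(curve))                                   # 4**4 = 256 segments
print(round(sum(abs(b - a) for a, b in curve), 4))  # (4/3)**4 = 3.1605
```

Each step multiplies the segment count by 4 and the total length by 4/3, which is why the limiting curve has infinite length.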

OK, so how is it possible that I’ve recently been hit by a blizzard of new Koch-like curves if Koch snowflakes have been falling steadily for more than a century? The answer is the Wolfram Language and a tremendous passion for fractal trees. Yes, you heard it right: self-similar trees allowed me to put things in a different perspective. Few people know that a simple symmetric binary branching rule can generate the Koch curve:

That was my starting point. I set the central trunk to measure 1, so the first pair of successive branches measured 1/√3, which is the scaling factor for successive branches. For example, the four branches that stem from the initial pair of branches measured 1/3, the eight successive ones measured 1/(3√3), and so on. Then I added five extra copies of that tree rotated in steps of 60 degrees (π/3 radians) around the base of the central trunk:

Beautiful. Then I noticed that the scaling factor *r* = 1/√3 was equal to the ratio of side BC to the regular hexagon’s first diagonal AC, and that the angle spanned by these two segments was equal to the angle between the central trunk and the first pair of branches, 30° or π/6 rad:

So I wondered if the ratio of side CD to the second diagonal AD (*r* = 1/2) and the angle spanned between them (60°) have a Koch-like curve associated with them. To my surprise, the answer was yes:

This time, the Koch-like curve was generated by the following ternary branching rule (three branches per joint): a left branch turned 60°, a middle branch pointed downward, and a right branch turned −60°:

Again, I assembled five extra copies of it around the base of the central trunk and it generated another snowflake with 6-fold rotational symmetry:

My next move consisted of checking whether the remaining diagonal had a quaternary branching rule (four branches per joint) associated with it. The ratio of side DE to the third diagonal AE is *r* = 1/√3, and the angle spanned between them measures 90°. I constrained the tree to be mirror symmetric around its trunk and to have its four branches equally spaced starting from DE:
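The ratios and angles quoted above (restored here from the hexagon's geometry, since the original inline formulas were images) can be verified numerically; a quick Python check:

```python
import cmath
import math

# Vertices A..F of a regular hexagon inscribed in the unit circle.
A, B, C, D, E, F = (cmath.exp(1j * k * math.pi / 3) for k in range(6))

print(round(abs(C - B) / abs(C - A), 4))  # side BC / first diagonal AC  = 1/sqrt(3), ~0.5774
print(round(abs(D - C) / abs(D - A), 4))  # side CD / second diagonal AD = 1/2
print(round(abs(E - D) / abs(E - A), 4))  # side DE / third diagonal AE  = 1/sqrt(3), ~0.5774

# Angle between side DE and diagonal AE, measured at vertex E:
angle = abs(cmath.phase((A - E) / (D - E)))
print(round(math.degrees(angle)))  # 90
```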

When I added five extra copies of the resulting quaternary tree, I obtained the following intricate Koch snowflake:

Impressed by the output of these three simple branching rules, I fed a general rule into the Wolfram Language. And here is where the blizzard began. The rule was set to generate Koch-like snowflakes associated with the ratios and angles spanned by the diagonals and sides of regular polygons. For example, these are the fractal snowflakes associated with the five diagonals of the regular octagon:

Now keep counting: the nonagon, the decagon, the hendecagon, and so on up to infinity. And don’t forget to count the infinite sides of their limiting element, the circle. So the question is: how did I survive the blizzard? I stopped the snowflakes from falling. I managed to trap them all in this map:

Don’t panic; this diagram is really easy to understand. The points on the lower horizontal line are the end points of the first pair of branches that generate the binary trees that make Koch-like snowflakes. For example, the point labeled as H(5,2) generates the following tree:

This tree has 5-fold rotational symmetry when four extra copies are assembled around the base of its trunk:

So now you might be wondering what happens if the first pair of branches are not exactly set at these points. Well, if the branches are still lying on the horizontal line, the tree generates a Koch-like curve, but it can no longer be assembled to make a snowflake:

The points on the horizontal line are the points that define binary branching rules generating snowflakes with *n*-fold rotational symmetry. These snowflakes are associated with the regular polygons’ first diagonals. Here is a table that shows what these snowflakes look like:

The diagram is that simple. The curve above the horizontal line is where Koch-like ternary trees live. Quaternary trees live on the next curve up, 5-ary trees on the one after that, and so on.

The next time a blizzard keeps you home, don’t just stare out the window at the snowflakes. Hop on your computer and reach infinity in less than an hour.

Download this post as a Computable Document Format (CDF) file.

It’s been very satisfying to see how successfully Wolfram|Alpha has democratized computational knowledge and how its effects have grown over the years. Now I want to do the same thing with knowledge-based programming—through the Wolfram Open Cloud.

Last week we released Wolfram Programming Lab as an environment for people to learn knowledge-based programming with the Wolfram Language. Today I’m pleased to announce that we’re making Wolfram Programming Lab available for free use on the web in the Wolfram Open Cloud.

Go to wolfram.com, and you’ll see buttons labeled “Immediate Access”. One is for Wolfram|Alpha. But now there are two more: Programming Lab and Development Platform.

Wolfram Programming Lab is for learning to program. Wolfram Development Platform (still in beta) is for doing professional software development. Go to either of these in the Wolfram Open Cloud and you’ll immediately be able to start writing and executing Wolfram Language code in an active Wolfram Language notebook.

Just as with Wolfram|Alpha, you don’t have to log in to use the Wolfram Open Cloud. And you can go pretty far like that. You can create notebook documents that involve computations, text, graphics, interactivity—and all the other things the Wolfram Language can do. You can even deploy active webpages, web apps and APIs that anyone can access and use on the web.

If you want to save things then you’ll need to set up a (free) Wolfram Cloud account. And if you want to get more serious—about computation, deployments or storage—you’ll need to have an actual subscription for Wolfram Programming Lab or Wolfram Development Platform.

But the Wolfram Open Cloud gives anyone a way to do “casual” programming whenever they want—with access to all the core computation, interface, deployment and knowledge capabilities of the Wolfram Language.

In Wolfram|Alpha, you give a single line of natural language input to get your computational knowledge output. In the Wolfram Open Cloud, the power and automation of the Wolfram Language make it possible to give remarkably small amounts of Wolfram Language code to get remarkably sophisticated operations done.

The Wolfram Open Cloud is set up for learning and prototyping and other kinds of casual use. But a great thing about the Wolfram Language is that it’s fully scalable. Start in the Wolfram Open Cloud, then scale up to the full Wolfram Cloud, or to a Wolfram Private Cloud—or instead run in Wolfram Desktop, or, for that matter, in the bundled version for Raspberry Pi computers.

I’ve been working towards what’s now the Wolfram Language for nearly 30 years, and it’s tremendously exciting now to be able to deliver it to anyone anywhere through the Wolfram Open Cloud. It takes a huge stack of technology to make this possible, but what matters most to me is what can be achieved with it.

With Wolfram Programming Lab now available through the Wolfram Open Cloud, anyone anywhere can learn and start doing the latest knowledge-based programming. Last month I published *An Elementary Introduction to the Wolfram Language* (which is free on the web); now there’s a way anyone anywhere can do all the things the book describes.

Ever since the web was young, our company has been creating large-scale public resources for it, from Wolfram MathWorld to the Wolfram Demonstrations Project to Wolfram|Alpha. Today we’re adding what may ultimately be the most significant of all: the Wolfram Open Cloud. In a sense it’s making the web into a true computing environment—in which anyone can use the power of knowledge-based programming to create whatever they want. And it’s an important step towards a world of ubiquitous knowledge-based programming, with all the opportunities that brings for so many people.

I’ve long wanted to have a way to let anybody—kids, adults, whoever—get a hands-on introduction to the Wolfram Language and everything it makes possible, even if they’ve had no experience with programming before. Now we have a way!

The startup screen gives four places to go. First, there’s a quick video. Then it’s hands on, with “Try It Yourself”—going through some very simple but interesting computations.

Then there are two different paths. Either start learning systematically—or jump right in and explore. My new book *An Elementary Introduction to the Wolfram Language* is the basis for the systematic approach.

The whole book is available inside Wolfram Programming Lab. And the idea is that as you read the book, you can immediately try things out for yourself—whether you’re making up your own computations, or doing the exercises given in the book.

But there’s also another way to use Wolfram Programming Lab: just jump right in and explore. Programming Lab comes with several dozen Explorations—each providing an activity with a different focus. When you open an Exploration, you see a series of steps with code ready to run.

Press Shift+Enter (or the button) to run each piece of code and see what it does—or edit the code first and then run your own version. The idea is always to start with a piece of code that works, and then modify it to do different things. It’s like you’re starting off learning to read the language; then you’re beginning to write it. You can always press the “Show Details” button to open up an explanation of what’s going on.

Each Exploration goes through a series of steps to build to a final result. But then there’s usually a “Go Further” button that gives you suggestions for free-form projects to do based on the Exploration.

When you create something neat, you can share it with your friends, teachers, or anyone else. Just press the button to create a webpage of what you’ve made.

I first started thinking about making something like Wolfram Programming Lab quite a while ago. I’d had lots of great experiences showing the Wolfram Language in person to people from middle-school-age on up. But I wanted us to find a way for people to get started with the Wolfram Language on their own.

We used our education expertise to put together a whole series of what seemed like good approaches, building prototypes and testing them with groups of kids. It was often a sobering experience—with utter failure in a matter of minutes. Sometimes the problem was that there was nothing the kids found interesting. Sometimes the kids were confused about what to do. Sometimes they’d do a little, but clearly not understand what they were doing.

At first we thought that it was just a matter of finding the one “right approach”: immersion language learning, systematic exercise-based learning, project-based learning, or something else. But gradually we realized we needed to allow not just one approach, but instead several that could be used interchangeably on different occasions or by different people. And once we did this, our tests started to be more and more successful—leading us in the end to the Wolfram Programming Lab that we have today.

I’m very excited about the potential of Wolfram Programming Lab. In fact, we’ve already started developing a whole ecosystem around it—with online and offline educational and community programs, lots of opportunities for students, educators, volunteers and others, and a whole variety of additional deployment channels.

Wolfram Programming Lab can be used by people on their own—but it can also be used by teachers in classrooms. Explain things through a demo based on an Exploration. Do a project based on a Go Further suggestion (with live coding if you’re bold). Use the *Elementary Introduction* book as the basis for lectures or independent reading. Use exercises from the book as class projects or homework.

Wolfram Programming Lab is something that’s uniquely made possible by the Wolfram Language. Because it’s only with the whole knowledge-based programming approach—and all the technology we’ve built—that one gets to the point where simple code can routinely do really interesting and compelling things.

It’s a very important—and in fact transformative—moment for programming education.

In the past one could use a “toy programming language” like Scratch, or one could use a professional low-level programming language like C++ or Java. Scratch is easy to use, but is very limited. C++ or Java can ultimately do much more (though they don’t have built-in knowledge), but you need to put in significant time—and get deep into the engineering details—to make programs that get beyond a toy level of functionality.

With the Wolfram Language, though, it’s a completely different story. Because now even beginners can write programs that do really interesting things. And the programs don’t have to just be “computer science exercises”: they can be programs that immediately connect to the real world, and to what students study across the whole curriculum.

Wolfram Programming Lab gives people a broad way to learn modern programming—and to acquire an incredibly valuable career-building practical skill. But it also helps develop the kind of computational thinking that’s increasingly central to today’s world.

For many students (and others) today, Wolfram|Alpha serves as a kind of “zeroth” programming language. The Wolfram Language is not only an incredibly powerful professional programming language, but also a great first programming language. Wolfram Programming Lab lets people learn the Wolfram Language—and computational thinking—while preserving as much as possible the accessibility and simplicity of Wolfram|Alpha.

I’m excited to see how Wolfram Programming Lab is used. I think it’s going to open up programming like never before—and give all sorts of people around the world the opportunity to join the new generation of programmers who turn ideas into reality using computational thinking and the Wolfram Language.

This post describes how the SystemModeler Hydraulic library can be used in education, though the focus is not only on the hydraulic part. The idea is also to show how to build an interesting, real application in which hydraulics play an essential role. In the model it is then possible to study the effects of filter locations, choose valves, adjust settings, compare different oil grades, etc. This post may also give hydraulic engineers who are used to conventional software some ideas of what more can be done with SystemModeler.

First, we need a mechanical system where a hydraulic system is needed. In this case we chose a scissors lift. The model is built using the standard libraries included in SystemModeler, such as the MultiBody and StateGraph libraries. In this section the mechanical part will be shown. This lift uses a wagon as a base:

The scissor:

The basket:

And finally, the assembly:

The mechanical parts of this system are now in place. We need the hydraulics to get the lifting force and a control and maneuvering system.

First, we need a motor and a pump. A good first approximation of a motor is to assume a constant speed, e.g. 1500 rpm (≈ 157 rad/s):

The pump needs oil and something to drive—for instance, an actuator:

There are several ways to model this piston force in SystemModeler. A common approach is to use the prismatic joint with the “use flange” option and let the prismatic joint lift a mass. We now have a very simple hydraulic system:

This system will, of course, only work in one direction, which is not a very good design for a scissors lift. If we introduce a double piston and a directional control valve, it will be possible to apply force in both directions. Exchanging the “mass” in the above example for connections into the scissor structure, the system looks like this:

This system fulfills the requirements for the simulation; in reality, however, we need at least one filter to remove foreign particles and avoid damage to the actuator and valves. The filter location is not obvious. We can place one filter directly after the tank to keep contaminated oil out of the system. Other possible locations are after the pump, prior to the valves and/or actuators, or on the return line. Each location has benefits and drawbacks. If all three are used, which is seldom the case, the system looks like this:

And finally, especially if the system works in cold environments, we should have a pressure relief valve to avoid working with too-high viscosity. The system will idle until the oil is warm enough:

From an educational point of view, there are many questions that can be discussed. What kind of motor is realistic? Will the system work better with a variable pump? How is the pressure in the system affected by filter positions? Can the directional control valve be replaced with something else, for instance a simpler valve and a check valve? Can the system be changed to one with a rotational hydraulic motor and a gear to fulfill the lifting task?

It is very common for hydraulic systems to need some kind of control system. In this post, state graphs will be used to illustrate how simply a complex workflow can be modeled. Proportional–integral–derivative (PID) controllers and other types of controllers can also easily be used.

The StateGraph library is used for sequence control and procedure handling. It supports, for instance, steps and transitions with parallel and alternative paths.

The control system for this lift is divided into two blocks, one for the lifting of the basket/person and one for the work to be done when the basket is in its working position:

The controller for the lifting of the basket with the hydraulic cylinder and scissor design looks like this:

The first thing that happens is that a signal is sent, telling the directional control valve to open. When the position reaches 3.2 m, the signal for lifting stops and the valve closes. A signal is sent to the controller basket to perform an action, in this case open the almost-closed doors. When the controller basket is ready, a signal is sent back to the controller hydraulics and the basket is lowered. When the position reaches 1.4 m, the signal for lowering stops and the valve closes again. It is easy to build this kind of controller with StateGraph, and also easy to expand it to include things such as PID controllers.
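The StateGraph model itself can't be shown in this text version, but the sequence it encodes can be sketched as a tiny state machine (Python, with hypothetical names; the real controller is built from SystemModeler StateGraph blocks):

```python
def lift_controller(state, position, basket_ready):
    """One update of the lift sequence described above.
    States: 'lifting' -> 'working' -> 'lowering' -> 'done'."""
    if state == "lifting" and position >= 3.2:
        return "working"    # stop lifting, close the valve, signal the basket
    if state == "working" and basket_ready:
        return "lowering"   # basket done, open the valve in the other direction
    if state == "lowering" and position <= 1.4:
        return "done"       # close the valve again
    return state

state = "lifting"
for position, ready in [(1.0, False), (3.3, False), (3.3, True), (1.3, False)]:
    state = lift_controller(state, position, ready)
print(state)  # done
```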

Hydraulic systems often require multidomain analysis. By using the Hydraulic library in SystemModeler together with other SystemModeler library components, many more complex and realistic systems can be analyzed without the need for common simplifications. The Hydraulic library includes many of the most common components. If necessary, it is simple to add new ones.

From an educational standpoint, the drag-and-drop modeling makes it easy to study complex systems behavior using different components and locations.

Download this post as a Computable Document Format (CDF) file.

Basic Mathematics Knowledge: The Smart Start to University Math (German)

This resource, written by Jürgen Schmidt for students minoring in mathematics, clearly explains key mathematical concepts. Multiple examples, an extensive practice section, and inclusion of Wolfram Research’s free “knowledge engine,” Wolfram|Alpha, provide help and orientation to students who study math at the college level. High-school graduates preparing for a career in the sciences, engineering, or economics, as well as students wishing to keep up with math, will appreciate the included collection of annotated formulas.

How to Utilize Mathematica in the Field of Electrical Engineering (Japanese)

Ichiro Sasada’s book picks out the basic tasks in the common areas of electrical engineering—electrical mathematics, electrical circuits, electromagnetism, and digital signal processing—and describes in detail how to use Mathematica to solve them. Each step includes code that can be used in the calculations. The objective of this book is to help students utilize the power of Mathematica, which could be called “state-of-the-art paper and pencil,” and to help them understand the problems in electrical engineering more quickly and deeply.

Introduction to Mathematica: Including the Free Version 10 for Raspberry Pi (German)

After a brief introduction to Mathematica, this book by Knut Lorenzen gives an overview of Mathematica’s functionalities and of the features of the knowledge base Wolfram|Alpha. Topics covered include options, numerical output, constants, units, types of numbers, fractions, power series, and free-form natural language input. Other chapters discuss lists, tables, expressions, functions, and user-defined objects, as well as graphic and sound features and music. Readers will learn the fundamental principles of programming with Mathematica and the Wolfram Language and how to use Mathematica with the Raspberry Pi.

Mathematica Companion for Finite Mathematics and Business Calculus

This companion, written by Fred Szabo, is meant as a reference to many of the basic concepts and formulas of finite mathematics and calculus required in business and economics as well as in the life and social sciences. The material is organized and presented in a dictionary-style format. All concepts and formulas are illustrated with examples written in the Wolfram Language, the foremost computational software system used in scientific, engineering, mathematical, and computing fields. Throughout the book, the emphasis is on how to solve mathematical problems and to learn and explore mathematical ideas with Mathematica and Wolfram|Alpha.

Introduction to System Simulation with Modelica (Japanese)

This book is the translated version of *Introduction to Modeling and Simulation of Technical and Physical Systems with Modelica* (Wiley, 2011), written by Professor Peter Fritzson, a board member of the Modelica Association. It details the basics of physical modeling with Modelica, the increasingly popular physical modeling language. The book covers the meaning and purpose of modeling, common modeling problems, and how models are categorized. It is ideal for those who are new to Modelica or want to try it.

Introduction to Number Theory, 2nd Edition

In this second edition, Martin Erickson, Anthony Vazzana, and David Garth provide a classroom-tested, student-friendly text that covers a diverse array of number theory topics, from the ancient Euclidean algorithm for finding the greatest common divisor of two integers to recent developments such as cryptography, the theory of elliptic curves, and the negative solution of Hilbert’s tenth problem. The authors illustrate the connections between number theory and other areas of mathematics and describe applications of number theory to real-world problems. This text provides calculations for computational experimentation using Mathematica and other popular software packages online via a robust, author-maintained website, and includes a solutions manual.

Introduction to Mathematica—Basics and Applications (Japanese)

Eiichi Nakagawa, a professor emeritus, and Akijiro Katsu, a Wolfram Training certified instructor, put together their extensive knowledge of Mathematica in this hands-on reference book that introduces Mathematica 10.2. This book consists of two chapters, one covering Mathematica basics and the other covering its applications. It is ideal for beginners looking to improve their skills. A notebook in the second chapter can be downloaded to the user’s computer.

Looking for more Wolfram technologies books? Don’t forget to visit Wolfram Books to browse by both topic and language!

Partial differential equations (PDEs) play a vital role in mathematics and its applications. They can be used to model real-world phenomena such as the vibrations of a stretched string, the flow of heat in a bar, or the change in values of financial options. My aim in writing this post is to give you a brief glimpse into the fascinating world of PDEs using the improvements for boundary value problems in `DSolve` and the new `DEigensystem` function in Version 10.3 of the Wolfram Language.

The history of PDEs goes back to the works of famous eighteenth-century mathematicians such as Euler, d’Alembert, and Laplace, but the development of this field has continued unabated during the last three centuries. I have, therefore, chosen examples of both classical as well as modern PDEs in order to give you a taste of this vast and beautiful subject.

Let us begin by considering the vibrations of a stretched string of length π that is fixed at both ends. The vibrations of the string can be modeled by the one-dimensional wave equation, which is given below. Here, *u(x,t)* is the vertical displacement of a point at the location *x* on the string, at time *t*:

Next, we specify a boundary condition to indicate that the ends of the string remain fixed during the vibrations:

Finally, we prescribe the displacement and speed at different points on the string at time *t*=0 as initial conditions for the motion of the string:

We can now use `DSolve` to solve the initial boundary value problem for the wave equation:

As seen above, the solution is an infinite sum of trigonometric functions. The sum is returned in an `Inactive` form since each individual term in the expansion has a physical interpretation and, typically, a small number of terms will provide a useful approximation to the exact solution. For example, we can extract four terms from the sum to obtain an approximate solution asol(*x,t*):
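The Wolfram Language output itself is not reproduced here, but the structure of such a partial sum is easy to sketch in plain Python. The Fourier coefficients below are hypothetical; the actual values follow from the initial displacement and speed prescribed above:

```python
import math

# Hypothetical Fourier coefficients for the first four modes; the actual
# values depend on the initial conditions chosen for the string.
coeffs = [1.0, 0.0, 1.0 / 9.0, 0.0]

def asol(x, t):
    """Four-term partial sum of the series solution on a string of length
    pi with fixed ends; each term sin(n x) cos(n t) is a standing wave
    (zero initial velocity assumed)."""
    return sum(a * math.sin(n * x) * math.cos(n * t)
               for n, a in enumerate(coeffs, start=1))

# The endpoints remain fixed at every time t.
print(asol(0.0, 0.7), asol(math.pi, 0.7))
```

Because every mode vanishes at x = 0 and x = π, the boundary condition is satisfied automatically by any partial sum.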

Each term in the sum represents a standing wave, which can be visualized as follows:

These standing waves combine in a magical way to produce the smooth vibrations of the string, as shown in the following animation:

The wave equation belongs to the class of linear hyperbolic PDEs, which govern the propagation of signals with finite speeds. This PDE arose as a convenient way to model the vibrations of strings and other deformable bodies, but it plays an even more important role in modern physics and engineering since it governs the propagation of light and other electromagnetic waves.

Let us now model the flow of heat in a bar of length 1 that is insulated at both ends using the heat equation, which is given as follows:

Since the bar is insulated at both ends, no heat flows through the ends, which translates into a boundary condition at the two ends *x*=0 and *x*=1:

Next, we must prescribe the initial temperature at different points in the bar. In this example, we choose the linear function given below. The initial temperature is 20 at the left end (*x*=0) and 100 at the right end (*x*=1):

Finally, we solve the heat equation subject to these conditions:

As in the case of the wave equation, we can extract a few terms from the `Inactive` sum to obtain an approximate solution, as shown below:

The first term in the approximate solution, 60, is the average of the initial temperatures at the left and the right ends of the bar, and is also the steady-state temperature for this bar. As the plot of temperature along the length of the bar below shows, the temperature of the bar quickly evolves to the steady-state value of 60 degrees:
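As a quick sanity check of this behavior, here is a Python sketch of the cosine-series solution for the insulated bar with the initial end temperatures given above (the diffusivity is set to 1 purely for illustration). It recovers 20 and 100 at the ends at time zero and relaxes to the steady-state value of 60:

```python
import math

def u(x, t, terms=400, alpha=1.0):
    """Fourier cosine series for the heat equation u_t = alpha*u_xx on
    0 <= x <= 1 with insulated ends (u_x = 0 at x = 0 and x = 1) and
    initial temperature 20 + 80*x. Only odd modes appear for this data."""
    s = 60.0  # steady state: the average of the end temperatures 20 and 100
    for n in range(1, 2 * terms, 2):  # odd n only
        s -= (320.0 / (n * n * math.pi ** 2)) * math.cos(n * math.pi * x) \
             * math.exp(-alpha * (n * math.pi) ** 2 * t)
    return s

print(round(u(0.0, 0.0)), round(u(1.0, 0.0)))  # initial end temperatures
print(round(u(0.5, 10.0)))                     # long-time steady state
```

The exponential factors kill every mode quickly, which is why the bar settles to 60 degrees so fast.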

The heat equation belongs to the class of linear parabolic PDEs, which govern diffusion processes. This rather simple-looking equation is ubiquitous and makes a surprising appearance in a wide variety of applications. We will see two examples of this phenomenon later in the post.

We now turn to the Laplace equation, which is used to model the steady state of systems, i.e. the behavior after any time-dependent transients have died out. In two dimensions, this equation is given as follows:

Assume that the coordinates x and y are restricted to the rectangular region Ω given below:

The classical Dirichlet problem is to find a function *u(x,y)* that satisfies the Laplace equation inside Ω along with a given `DirichletCondition` that specifies values on the boundary of Ω, as illustrated below:

The Dirichlet problem can be solved using the elegant region notation for `DSolve`:

As in earlier examples, we can extract a fixed number, say 100, of terms from the `Inactive` sum and visualize the solution, as shown below:

Notice that the solution *u*(*x,y*) of the Dirichlet problem is smooth within Ω despite the fact that the boundary conditions have sharp features. Also, *u*(*x,y*) attains its maximum and minimum values on the boundary, while there is a saddle point at the center of the rectangle. These features are typical of linear elliptic equations, the class of PDEs to which the Laplace equation belongs.
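These qualitative features are easy to reproduce numerically. The Python sketch below solves a discrete Dirichlet problem by Jacobi relaxation on the unit square, with made-up boundary data rather than the post's, and checks that the interior values stay strictly between the boundary extremes:

```python
def solve_dirichlet(n=25, iters=1500):
    """Jacobi relaxation for the discrete Laplace equation on an n-by-n
    grid over the unit square. Hypothetical boundary data: u = 1 on one
    edge, u = 0 on the other three. The qualitative features (smooth
    interior, extremes on the boundary) are generic for elliptic PDEs."""
    u = [[0.0] * n for _ in range(n)]
    u[0] = [1.0] * n  # boundary edge held at 1
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1])
        u = new
    return u

grid = solve_dirichlet()
interior = [grid[i][j] for i in range(1, 24) for j in range(1, 24)]
# Maximum principle: interior values lie strictly between the boundary
# extremes 0 and 1, even though the boundary data is sharp.
print(0.0 < min(interior) < max(interior) < 1.0)
```

Each interior value is an average of its neighbors, which is precisely why the discrete solution can attain its extremes only on the boundary.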

The wave, heat, and Laplace equations are the most famous examples of classical PDEs. We will now consider three representative examples of modern PDEs, beginning with Burgers’ equation for viscous fluid flow, which can be stated as follows:

This nonlinear PDE was introduced by J. M. Burgers around 1940 as a simple model for turbulent flows (the parameter ϵ in the equation represents the viscosity of the fluid). However, a decade later, E. Hopf and J. D. Cole showed that Burgers’ equation can be transformed into the heat equation, thereby indicating that this equation cannot exhibit chaotic behavior. The Hopf–Cole transformation allows us to solve Burgers’ equation in closed form for an initial condition, such as the one given below:
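For reference, the Hopf–Cole transformation can be stated as follows (this is the standard form; the post's notation may differ slightly):

```latex
u(x,t) = -2\epsilon\,\frac{\partial_x \varphi(x,t)}{\varphi(x,t)},
\qquad\text{which maps Burgers' equation } u_t + u\,u_x = \epsilon\,u_{xx}
\text{ to the heat equation } \varphi_t = \epsilon\,\varphi_{xx}.
```

Solving the heat equation for φ and substituting back yields the closed-form solution of Burgers' equation, which is where the error function terms below come from.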

For this example, we use `DSolveValue`, which returns only an expression for the solution. The error function (`Erf`) terms in the formula below arise from the solution of the corresponding initial value problem for the heat equation:

The following plot shows the evolution over time of a hypothetical fluid velocity field in one dimension. The solution is smooth for a positive value of ϵ, even though the initial condition is a piecewise function:

As seen below, the solution develops a shock discontinuity in the limit when the viscosity ϵ approaches 0. These shock solutions are a well-known feature of the inviscid (ϵ = 0) Burgers’ equation:

As a second example of a modern PDE, let us consider the Black–Scholes PDE from finance. This equation was introduced by Fischer Black and Myron Scholes in 1973 as a model for the prices of European-style financial options, and can be stated as follows:

Here, *c* is the price of the option as a function of stock price *s* and time *t*, *r* is the risk-free interest rate, and σ is the volatility of the stock.

In their landmark paper (which has been cited more than 28,000 times), Black and Scholes noted that their equation can be reduced to the heat equation by a transformation of variables. This dramatic simplification leads to the celebrated Black–Scholes formula for European call options, under a terminal condition based on the “strike price” *k* of the asset at time *t=T*:

Armed with this formula, we can compute the value of a financial option for typical values of the parameters:

The answer agrees with the value given by the built-in `FinancialDerivative` function:
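The Black–Scholes formula itself is straightforward to implement in any language. Here is a Python version for comparison; the parameter values in the example are illustrative, not necessarily those used in the post. The standard test case s = k = 100, r = 0.05, σ = 0.2, T = 1 gives a call price of about 10.45:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    """Black-Scholes price of a European call option: the closed-form
    solution of the PDE under the terminal condition max(s - k, 0)."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# Illustrative parameters (not necessarily those used in the post):
print(round(bs_call(100.0, 100.0, 0.05, 0.2, 1.0), 2))  # -> 10.45
```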

As our third example of a modern PDE, we will solve the free Schrödinger equation for an electron that is constrained to move in a one-dimensional box of length *d*, with a suitable initial condition. The equation and conditions may be stated as follows:

This example has an elementary solution that takes complex values due to the presence of `I` in the Schrödinger equation:

The probability density function for the electron, ρ = Ψ Ψ*, can be computed using suitable values for the parameters in the problem as follows:

We can create a movie showing the evolution of the probability density over time, which reveals that the “center” of the electron moves from side to side in the box:
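The side-to-side motion can be illustrated with a minimal Python sketch. It uses an equal superposition of the two lowest box eigenstates with ħ = m = d = 1, which is an assumption made for illustration and is not the post's initial condition. The mean position then oscillates between the two halves of the box:

```python
import cmath, math

d = 1.0  # box length; hbar and mass set to 1 (illustrative units)

def psi(x, t):
    """Equal superposition of the two lowest particle-in-a-box states."""
    phi = lambda n: math.sqrt(2.0 / d) * math.sin(n * math.pi * x / d)
    energy = lambda n: 0.5 * (n * math.pi / d) ** 2
    return (phi(1) * cmath.exp(-1j * energy(1) * t)
            + phi(2) * cmath.exp(-1j * energy(2) * t)) / math.sqrt(2.0)

def rho(x, t):
    """Probability density rho = Psi * conjugate(Psi)."""
    return abs(psi(x, t)) ** 2

def mean_x(t, n=1000):
    """Midpoint-rule estimate of the mean position <x> at time t."""
    return sum(x * rho(x, t)
               for x in ((i + 0.5) * d / n for i in range(n))) * d / n

# The "center" swings from one side of the box to the other over half a
# beat period T/2 = pi / (E2 - E1) = 2 / (3 pi).
print(mean_x(0.0), mean_x(2.0 / (3.0 * math.pi)))
```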

Eigenvalues and eigenfunctions play an important role in the solution of Schrödinger’s equation and other PDEs. In particular, they provide the building blocks for the infinite sum solutions of the wave and heat equations that we saw earlier in the post. Hence, as our final example, let us consider the problem of finding the nine smallest eigenvalues and eigenfunctions for the `Laplacian` operator with a homogeneous (zero) Dirichlet condition over a ball in three dimensions. That is, we are interested in the nine smallest values of λ and the corresponding functions ϕ that satisfy 𝓛ϕ = λ ϕ, with the following definitions:

The new `DEigensystem` function in Version 10.3 allows us to compute the required eigenvalues and eigenfunctions as follows:

The eigenvalues for this problem are expressed in terms of `BesselJZero`. For example:

The eigenfunctions can be visualized using `DensityPlot3D`, which returns the beautiful plots that are shown below:

Partial differential equations are an essential tool in many branches of science, engineering, statistics, and finance. At a more fundamental level, they provide precise mathematical formulations for some of the deepest and most subtle questions about our universe, such as whether naked singularities can really exist. In my experience, the study of PDEs rewards one with a rare combination of practical insights and intellectual satisfaction.

I invite you to look at the documentation for `DSolve`, `NDSolve`, `DEigensystem`, `NDEigensystem`, and the finite element method to learn more about the various approaches for solving PDEs in the Wolfram Language.

Symbolic PDE functionality is supported in Version 10.3 of the Wolfram Language and Mathematica, and is rolling out soon in all other Wolfram products.

Download this post as a Computable Document Format (CDF) file.

All structures have natural frequencies. The critical speed of a rotating machine occurs when the rotational speed matches one of these natural frequencies, often the lowest. Until the end of the nineteenth century, the primary way of improving performance (increasing the maximum speed at which a machine can rotate without an unacceptable level of vibration) was to raise the lowest critical speed: rotors became stiffer and stiffer. In 1889, the famous Swedish engineer Gustaf de Laval pursued the opposite strategy: he ran a machine faster than its critical speed, finding that at speeds above the critical threshold, vibration decreased. The trick was to accelerate quickly through the critical speed. Thirty years later, in 1919, the engineer Henry Jeffcott wrote the equation for a similar system, a simple shaft supported at its ends. Such a rotor is now called the de Laval rotor or Jeffcott rotor and is the standard rotor model used in most basic equations describing various phenomena.

Almost all machines have something that rotates or vibrates, so rotor dynamics is a large field of mechanical engineering. Traditionally, research on gas and steam turbines, jet engines, and pumps has contributed to our understanding of the field.

Here one of these phenomena, internal damping, will be explored using Wolfram SystemModeler. The video below shows a Jeffcott rotor starting from a rotational speed of zero and accelerating through the critical speed:

Damping in a mechanical system is a technique for lowering vibration. Damping almost always works for a non-rotating system. However, in the first half of the twentieth century many engineers learned the hard way that increasing the damping in rotating structures may lead to catastrophic failures.

When this phenomenon is studied, a system like that in the figure below is often used. A disc with mass m is mounted on a shaft with stiffness k and internal viscous damping c_{i}. The shaft is subjected to a fictitious external damper with the damping constant c_{e}. The rotor rotates with the angular velocity Ω. An external force F is acting on the mass:

Let the x-y-z coordinate system be fixed in space and the ξ-η-z coordinate system be fixed in the shaft, rotating with the angular velocity Ω. The transformation of the coordinate system between these two systems will be:
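Written out, this is the standard plane rotation through the angle Ωt:

```latex
\begin{pmatrix} x \\ y \end{pmatrix} =
\begin{pmatrix} \cos\Omega t & -\sin\Omega t \\ \sin\Omega t & \cos\Omega t \end{pmatrix}
\begin{pmatrix} \xi \\ \eta \end{pmatrix}.
```

The same rotation matrix transforms the force components between the two frames.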

The transformation of the forces will be:

The internal forces acting on the mass will be:

Transforming both the forces and the deflections to the fixed coordinate:

The equation of motion for the system can now be written as:

The damping force, given by c_{e}ẋ and c_{e}ẏ, is assumed to act directly on the disc.

The equation of motion in matrix form:

Rewrite to the complex form and let:

Study the homogeneous part—the external force F is equal to zero:

The nonzero solution requires that the determinant is equal to zero or:

This equation has a solution of the form λ = Κ + i ω. If Κ > 0, the motion is unstable. And if Κ < 0, the motion is stable. The stability threshold is obtained when Κ = 0, i.e. λ = i ω. The determinant can be written as:

Both the real and imaginary parts have to individually be zero. The real part gives:

And the imaginary part gives:

Simplify the real part and use the solution for the imaginary part:

In the world of vibration and rotor dynamics, this is a well-known relation. It follows from the latter equation that the motion will be stable up until a certain limit of the rotor’s rotational frequency. Above this limit, the motion will be unstable. Once it becomes unstable, it will whirl with a frequency equal to the eigenfrequency.
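We can verify this stability limit numerically. The Python sketch below uses the characteristic equation in complex form, m λ² + (c_i + c_e) λ + (k − i c_i Ω) = 0, for the complex deflection z = x + i y, with hypothetical parameter values, and confirms that the largest growth rate changes sign at Ω = ω_n (1 + c_e/c_i):

```python
import cmath, math

def growth_rate(omega_rot, m=1.0, k=100.0, ci=2.0, ce=3.0):
    """Largest real part of the roots of the characteristic equation
    m*lam**2 + (ci + ce)*lam + (k - 1j*ci*omega_rot) = 0, i.e. the
    fixed-frame equation of motion for the complex deflection
    z = x + i*y. Parameter values here are hypothetical."""
    b, c = ci + ce, k - 1j * ci * omega_rot
    disc = cmath.sqrt(b * b - 4 * m * c)
    return max(((-b + disc) / (2 * m)).real, ((-b - disc) / (2 * m)).real)

wn = math.sqrt(100.0 / 1.0)         # eigenfrequency: 10 rad/s
limit = wn * (1.0 + 3.0 / 2.0)      # predicted threshold: 25 rad/s
# Stable just below the threshold, unstable just above it:
print(growth_rate(0.95 * limit) < 0 < growth_rate(1.05 * limit))  # -> True
```

At the threshold itself, λ = i ω_n is a root, which is exactly the statement that the unstable whirl occurs at the eigenfrequency.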

We may make the following observation: if the internal damping is much higher than the external (bearing) damping, it is not wise to exceed the first resonance frequency. This is why many electrical motors with high internal damping never pass the eigenfrequency. Note also that a rotor vibrating at its (lowest) eigenfrequency while rotating at a different frequency is most likely unstable. This behavior arises in many situations other than internal damping instability.

The SystemModeler model is built up with standard components and an Euler–Bernoulli beam. In this application, we use ten beam elements on each side. The resulting damping is Rayleigh damping and is proportional to the mass and the stiffness matrix. In this theoretical example, we need to have an external static force for the damping test and also a component for damping close to the mass as the external damping. In reality, external damping will include the bearings:

The pinned supports are easily built up with just a few standard components:

Our first task is to determine the equivalent viscous damping coefficient due to Rayleigh damping. We use the same method used to measure damping in the field: load the structure, release it, and measure the decay and the frequencies. The setup is shown in the following film clip:

The response in the vertical direction:

The equivalent viscous damping constant for these pinned-pinned (P-P) beams can now be determined by looking at how fast the vibration amplitude decreases. If the amplitudes of the first five peaks after the release at 20 s are taken into account, the equivalent viscous damping will be:

Set the external damping to:

c_{e}=100 Ns/m

The system will become unstable at approximately:

≈20 rad/s (corresponding to 200 seconds in the graph below).

The figure shows the deflection when running up from 0 to 30 rad/s during 300 s:

As can be seen, the system starts to become unstable at 2.5 times the resonance frequency, exactly as the theory says. The instability in this simulation is fully developed at approximately 2.75 times resonance frequency.

An FFT on the vibration between 190–200 seconds (i.e. shaft speed 19–20 rad/s or approximately 3.1 Hz) looks like:

As expected, the motion of the whirling shaft follows the shaft rotation. What happens with the vibration frequency when the system becomes unstable? That is the interesting part. In the figure below, the shaft rotates at 25 rad/s but the mass whirls at 9.9 rad/s, i.e. the first eigenfrequency. So the behavior is exactly as the theory predicts:

SystemModeler is a powerful tool for studying advanced problems in rotating machinery. This example is nearly impossible for most commercial finite element software without a lot of extra work. With SystemModeler, it is straightforward.

We conclude by looking at a video depicting the system when the internal damping is present. Below, the first eigenfrequency, the imbalance, and the whirl are in phase (0–10 rad/s). Then they are out of phase at between 10–25 rad/s. And then the system is unstable and whirls at 10 rad/s:

Download this post as a Computable Document Format (CDF) file.

Interested in drones? Check out these posts.

Connecting ROS to the Wolfram Language, Or Controlling a Parrot ArDrone 2.0 from Mathematica, by Loris Gliner, a student in aeronautical engineering.

Loris Gliner used his time in the Wolfram mentorship program to work out how to connect the Wolfram Language to the Linux Robot Operating System. He includes code examples and a video showing the flight of a Parrot ArDrone 2.0 controlled via the Wolfram Language.

Analyzing Crop Yields by Drone, by Arnoud Buzing, Wolfram Research.

Using a Phantom 2 Vision+ drone from DJI, Arnoud Buzing got a bird’s-eye view of soy fields in order to apply the Wolfram Language’s photo-analyzing capabilities to estimate crop yields. As someone with a small farm, I read this post and now want to get a drone and try this out.

In the comments, Diego Zviovich said he had played with a similar concept using Landsat data.

Ocean currents: from Fukushima and rubbish, to Malaysian airplane MH370, by Marco Thiel, University of Aberdeen, Department of Physics/Mathematics.

Marco Thiel used the Wolfram Language and NASA’s ECCO2 data to simulate the flow of Fukushima’s radioactive particles drifting across the Pacific, carried by the currents. He also modeled the flow of garbage in the oceans that ends up trapped in gyres. The results are quite thought-provoking.

He challenges his readers to use these models to predict where parts of MH370 might be. If you try it, you could see how your version stacks up against this simulation published in *The New York Times*.

2D & 3D simulations of a glioma, by Laura Carrera Escalé.

Laura Carrera, a mathematics student who did an Erasmus internship at the mathematics department of the University of Aberdeen with Marco Thiel, follows up on Thiel’s post Simulating Brain Tumor Growth with Mathematica 10. She has created a simulation of a rapidly growing brain cancer that is both fascinating and creepy to watch.

Carrera remarks: “We can clearly see the difference between the two gliomas’ growth velocities displayed on the above videos over a period of just 200 days. Following steps on this topic might help to calculate the volume of the tumor and therefore to elaborate a time-volume graphic. Thus, the patient’s life expectancy on diagnosis could be predicted, given that death normally occurs when the volume of the glioma is higher than a sphere, radius 3 cm.”

What these two cancer simulation posts suggest to me is the possibility that in the near future, people diagnosed with tumors will be able to take relatively cryptic radiologists’ reports and use the descriptions to create 3D simulations in order to better understand the course of their diseases. While the code behind these simulations is a bit beyond the reach of most patients, with the help of the Wolfram Language, the ability to create or access such models may soon be within reach.

[GIF] Enneper surface + an introduction, by Clayton Shonkwiler, a mathematician at Colorado State University.

Clayton Shonkwiler has used the Wolfram Language to create an animated visualization of an Enneper surface, a self-intersecting minimal surface generated by Enneper–Weierstrass parameterization. You can see more of his mathematical art on his website, shonkwiler.org/art.

Solving a KenKen puzzle using logic, by Sander Huisman, a postdoc in Lyon, France, researching Lagrangian turbulence.

Sander Huisman has come up with a way to solve KenKen puzzles with the Wolfram Language, a topic of recurring interest: Frank Kampas’ post about KenKen was featured earlier this year. KenKen is a game similar to Sudoku, invented by Japanese math teacher Tetsuya Miyamoto; it is regularly featured in *The New York Times* and is widely syndicated.

Just this week, Bernat Espigulé Pons has followed up with a Secret Santa generator deployed in the Wolfram Cloud.

More games-related posts:

First forays into game design and agent reasoning about uncertainty, by Michael Hale.

Analysis of ‘Magic the Gathering’ card game, by David Gathercole.

Solving 2D Incompressible Flows using Finite Elements, by Ole Christian Astrup, Senior Principal Researcher at DNV GL.

New features in Version 10 of the Wolfram Language and Mathematica are spurring new investigations. Ole Christian Astrup describes the inspiration for his project: “I was inspired by the Wolfram blog by Mokhasi showing how to use Mathematica to solve a 2D stationary Navier–Stokes flow using a finite difference scheme to write this blog. I thought it should be possible to solve the 2D cavity box flow problem using Mathematica’s finite element capabilities. In the following, I show how the problem can be discretized and solved by the finite element method using an iterative scheme.”

More physics and engineering posts:

Basics of Supercapacitor Modeling, by Johan Rhodin, Wolfram Research.

GPS Mountainbike analysis, by Sander Huisman.

Finally, if you’re in the mood for a little light humor, you may appreciate this topic.

Joke Generator, by Jesse Friedman, current intern at Wolfram Research.

“Robert Heinlein famously said, ‘Never try to teach a pig to sing; it wastes your time and it annoys the pig.’ I had been thinking about how to teach computers expressive language, and one of the core skills of expressive language is making jokes. So making jokes was on my list of ‘How to teach the pig to sing.’”

I was delighted to see Jesse Friedman’s strategies for getting the Wolfram Language to tell jokes. Not all of these jokes are funny, but sometimes how they go wrong is as interesting as the jokes that work.

The post inspired me to see if I could take it further. I worked on methods to get nice groups of rhyming words out of which poems could later be made. Here I was testing out how to get the best rhymes by aggregating from both Wolfram|Alpha and the Wolfram Language data libraries. (For rhyming words, these are not identical.)

When it works, this makes pleasing sets of three words. Like so:

(Not all words have rhymes available in these databases, so sometimes there are unpoetic error messages instead, which I have not yet suppressed.)

Thanks, Jesse, for the inspiration.

Visit Wolfram Community today to see what captures your attention, and join in on these and other interesting discussions. Better yet, make joining Community a New Year’s resolution. It’s a great place to share ideas, get feedback, and learn new things.

The Wolfram Language Worldwide Translations Project provides any non-English-speaking programming novice with an effortless way into the Wolfram Language. It aims to introduce the Wolfram Language while at the same time addressing any lack of English language skills.

How does one typically learn to program? In my experience, new students were given a piece of code with an explanation of its purpose. That way they could familiarize themselves with the coding structure and functions. To help with this situation, Wolfram Research has added functionality that lets you enable Wolfram Language code annotations in a language of your choosing. This is an ongoing effort, and we are planning on covering as many languages as possible. But we have already added support for Japanese, Traditional as well as Simplified Chinese, Korean, Spanish, Russian, Ukrainian, Polish, German, French, and Portuguese.

As part of this project, we added menu translations in Traditional Chinese and Spanish to the already-available Japanese and Simplified Chinese translations.

Back to my struggling days: had I, for example, been given the code behind “Major Multinational Languages,” an example of the Wolfram Demonstrations Project, I would have been able to view the annotated version in German. The annotations do not change or limit the functionality of the code. It is still fully evaluatable and editable, with code captions updating on the fly:

If I felt adventurous and wanted to test my eight years of Russian, I could try that as well:

You can conveniently enable this new feature by cell, by notebook, or even globally. This way I was able to compare the same piece of code above, annotated in German, to its Russian version. You can find code annotations as part of our autocompletion as well. I opted for French in this case:

One issue raised at the very beginning of this project was the length of the translations. The descriptive, camel-cased English symbol names can make it challenging to keep translations to a reasonable length.

Let’s take Spanish to illustrate the issue. The function `String` was translated as “cadena de caracteres.” This translation is already much longer than its English original. Now taking into account that we have a multitude of system symbols containing the substring “string,” e.g. `StringFreeQ` and `StringReplacePart`, you can imagine what lengths these translations can reach.
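As a toy illustration of the length comparison (only the `String` translation is quoted above; the real data comes from `WolframLanguageData`):

```python
# A toy version of the string-length comparison. The only translation
# quoted in the post is "String" -> "cadena de caracteres"; the full data
# set comes from WolframLanguageData's "Translations" property.
pairs = {"String": "cadena de caracteres"}
diffs = {en: len(tr) - len(en) for en, tr in pairs.items()}
print(diffs)  # the Spanish name is 14 characters longer than the English one
```

Every symbol containing the substring "string" inherits (at least) this 14-character overhead in Spanish.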

Let’s compare the Russian and Japanese translations of `$FrontEnd`. Conveniently, the translations are not just accessible interactively but also programmatically through `WolframLanguageData`’s `"Translations"` property:

The string lengths of the two translations differ by 66. That raises the question: how do our translations compare in length to their English counterparts? Let’s first load all Wolfram Language symbols as well as their translations:

Now let’s have a look at how the string lengths of the translations compare to the underlying English symbols:

Clearly the Asian writing systems allow for much more condensed translations. On the other hand, digging deeper into the minimum and maximum of the string length differences between Polish and English, we can find the following (hover over the `ListPlot` for tooltips that compare the English names with their Polish translations):

There are currently 251 cases of translation pairs where the two elements match in length. Here are a few examples:

This is the pair with the greatest string length difference, an astounding 75:

Let’s look at string length differences in all available languages:

Given these discrepancies, we can find some interesting tidbits about languages and their relationships in this data.

For example, what five symbols are closest in length to their translations in all languages? `Here`, `Byte`, `ColorQ`, `ListQ`, and `Ball`:

Which language has the largest percentage of white spaces? Korean:

And what language beats all others in average string length? German:

Here’s a quick visualization:

Given these discrepancies in translation lengths, how did we accommodate the differences in our interface? Let’s return to our example code for “Major Multinational Languages” with German code captions enabled. Any time the length of a code caption exceeds the length of the English original, we trim the caption with ···. Upon hovering over the code caption, it is fully expanded and emphasized in bold:

Using the new `WordCloud` functionality in the Wolfram Language, we can get graphical overviews of translations with words sized according to the frequency of their use. Taking the translations of the 120 most frequently used symbols and making use of the recently added `GeoGraphics` functionality, the Wolfram Language allows us to generate word clouds in country shapes. Here’s a look at Germany and Portugal:

We can take it a bit further and place the country-shaped word cloud on the actual country polygon. This works quite nicely, as can be seen for Spain:

After playing with the translations and looking at them from different angles, the logical consequence is to see if we can produce code that does not just show annotations but also uses the translations in lieu of the English Wolfram Language symbol names. And sure enough, with the programmatic version of the translations at your fingertips, one can easily implement such a function, `TranslateCodeCompletely`. Provided a piece of code and a target language as arguments, our new function returns a full translation. For the sake of showing complete translations, we are avoiding symbol shortcuts:

Here is the code producing the different string length histograms above—in Korean. User-defined symbols appear as they were, whereas system symbols are fully translated and emphasized through the use of a gray background. You can mouse over them in the attached PDF to read the original symbol name:

An incredible side effect for the Wolfram Language programming wizard: if you are already firm in your understanding and use of our symbols, this might in turn give you a chance to “programmatically” learn a new language…

I hope this functionality is going to help a great number of new users and will pave their way into the world of the Wolfram Language. Going forward, we do not just intend to extend the collection of languages. We are planning to add translations to more aspects of the Wolfram Language as well—for example, more menu items and shortcuts. We’d be happy to hear back from you about what languages you would like to see the Wolfram Language translated into. And stay tuned: there might be future opportunities for you to contribute.

Download this post as a Computable Document Format (CDF) file.