We found no fewer than 10 Nobel Prize–winning physicists who personally registered copies of *Mathematica*. That’s at least one in every eight Physics laureates since 1980! And anecdotal evidence suggests that nearly every Nobel laureate uses *Mathematica* through their institution’s site license.

It’s not just in Physics that *Mathematica* has shone on the Nobel stage. We’ve also had winners in Chemistry and Economics. The case of Economics we’re talking about is none other than famed genius John Forbes Nash. Nash, who was the subject of the film *A Beautiful Mind* and won the Nobel Memorial Prize in Economic Sciences in 1994, has been among our best-known users.

So rest assured we will be watching all of this month’s 2014 winner announcements with interest…

One potential cause of such a scenario is a flap system failure. Flaps are hinged devices located on the trailing edges of the wings, whose angular position can be adjusted to change the lift properties of the plane. For example, suitably adjusting the flap position can enable the plane to be flown at a lower speed while maintaining its lift, or allow it to be landed with a steeper angle of descent without any increase in speed. One of several resulting advantages is that the landing distance required (LDR) becomes shorter. This makes me wonder: Could a small flap failure increase the LDR so much that the assigned runway is suddenly too short?

To answer such a question, you have to understand the effects that a failure at the component level has at the system level. How will the control system react to it? Can we somehow figure out how to detect it during a test procedure? Can we come up with a safety procedure to compensate for it, and what happens if the pilot or maintenance personnel for some reason fail to follow that procedure?

My colleague, engineer Olle Isaksson, and I thought we’d use Wolfram *SystemModeler* 4 and the newly released Wolfram Hydraulic library to simulate and analyze some potential failures that can occur in the flap system of a Cessna 441 Conquest II aircraft.

The desired angular position of the flaps on this Cessna aircraft is set manually by the pilot via the plane’s instrument panel. Which flap angle is preferable depends, among other factors, on which flight phase the plane is in, since that directly affects which flight characteristics are desired. For example, during takeoff, the flaps are extended to an angle of 10 degrees in order to provide extra lift force, and during landing, they’re extended to 30 degrees to increase both lift and drag force. These seemingly small adjustments to the flaps’ angular position allow for shorter runways, reduce the stress put on the aircraft, and give the pilot more time to react. For this particular aircraft, there are two additional positions: 0 degrees in mid-air and 20 degrees when approaching landing. Take a look at the video below for a short demonstration of how the flaps move.
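The four flap positions and the flight phases they serve can be summarized as a simple lookup table. Here is a hedged sketch in Python for readers following along outside Mathematica (the phase labels are my own shorthand, not official Cessna terminology):

```python
# Flap angle (degrees) per flight phase for this Cessna, as described above.
# Phase labels are illustrative shorthand, not official Cessna terminology.
FLAP_SCHEDULE = {
    "up": 0,         # mid-air, clean configuration
    "takeoff": 10,   # extra lift
    "approach": 20,  # approaching landing
    "landing": 30,   # more lift and more drag
}

def commanded_flap_angle(phase: str) -> int:
    """Return the flap angle the pilot would select for a given flight phase."""
    return FLAP_SCHEDULE[phase]
```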

The flap system of this Cessna aircraft is electrically operated and hydraulically actuated. This means that the flap system is controlled by electrical signals, but the actual movement of the flaps and landing gears is driven by a hydraulic system with pumps, valves, cylinders, and other useful components. The pilot controls the flap position through a flap selector switch located on the instrument panel. Changing the flap selector switch position sends out an electrical signal that, together with limit switches, energizes a bypass valve so that pressure builds up in the hydraulic power system.

In tandem with bypass valve energization, a solenoid of the flap control valve becomes energized, resulting in an open connection between the cylinder (flap actuator) and the pump and reservoir. The hydraulic cylinder is mechanically connected to the flaps, which consequently causes the flaps to extend or retract when the cylinder moves in response to changes in chamber pressure.

**The Model**

The Cessna flap system model, implemented in *SystemModeler*, consists of six customized components: a pilot, an electrical system, a power plant, a hydraulic power system, the flaps, and the landing gears.

As shown above, the Cessna model is hierarchical, with several sublevels. The pilot model receives signals from both the electrical subsystem and the flaps, for example, in the form of system pressure data or information about the current flap position. The powerPlant subsystem contains two engines that connect to pumps in the hydraulicPower model, which in turn provides pressurized fluid to the flaps and landingGear subsystems. To avoid a blog post longer than *Ulysses*, I’m going to leave the detailed exploration of the models up to a forthcoming post, where the modeling process will be described in more detail.

Now that you have a general sense of the model components and their interactions, let’s take some time to think about some potential failures. Despite the risk of aggravating my slight fear of flying, I took a look at some accident reports for different Cessna aircraft. This revealed that failing limit switches, hydraulic leakages, and mechanical failures are examples of flap failures that affect the aircraft at a system level. So let’s include the following scenarios in the failure analysis:

1. A pipe in the flap subsystem is leaking.

2. The mechanical rod that connects the flap to the cylinder is broken.

3. An electrical failure in the flap control valve occurs in mid-air.

Let’s first take a look at the nominal scenario where everything works as it should, and the pilot can enjoy the perks of having a fully functioning flap system. The pilot moves the flap selector switch through the positions 10° -> 20° -> 30° -> 0°, which correspond to takeoff -> approach -> landing -> up. Note that this is quite an odd combination of flap commands to use in real life within a time span of 20 seconds, so it’s just a means of studying the system. It can, for example, be seen as a test run to see if the flaps are working properly.

First load the *WSMLink* and define the model.

My colleague Olle conveniently included different failure modes in several components in the Cessna model, so in order to investigate the effects of, for example, a pin-short in a solenoid, I simply have to change the failure mode parameter of a solenoid to pin-short, and then simulate the model.

In the nominal case, I want to make sure that the relevant failure parameters are set to 0, which means that the components are fully functioning. Since the failure modes are structural parameters, I need to use `WSMSetValues` instead of `WSMParameterValues`.

Which angles do the flaps actually take on compared to those commanded by the pilot in the nominal case?

As can be seen, the angular position of the flaps follows the commands given by the pilot, with some delay due to the time it takes for the flaps to extend or retract.
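That lag between commanded and actual angle can be mimicked with a toy rate-limited actuator model. This is just a sketch of the qualitative behavior in Python, not the *SystemModeler* model; the 5°/s slew rate and the command times are assumptions:

```python
def simulate_flaps(commands, rate=5.0, dt=0.1, end_time=20.0):
    """Toy flap actuator: the actual angle chases the commanded angle
    at a fixed slew rate (degrees per second). `commands` is a list of
    (time, commanded_angle) pairs; the rate is an assumed value."""
    angle, history = 0.0, []
    for i in range(round(end_time / dt)):
        t = i * dt
        cmd = 0.0
        for t_cmd, a_cmd in commands:  # latest command whose time has passed
            if t >= t_cmd:
                cmd = a_cmd
        # Move toward the command, limited by the slew rate
        angle += max(-rate * dt, min(rate * dt, cmd - angle))
        history.append((t, cmd, angle))
    return history

# Pilot commands: takeoff (10 deg) -> approach (20) -> landing (30) -> up (0)
history = simulate_flaps([(0, 10), (5, 20), (10, 30), (15, 0)])
```

The last retraction command (0°) arrives at 15 seconds, so at the 20-second mark the toy flaps are still on their way back up, which is exactly the kind of delay visible in the plots.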

Let’s also take a look at the pressure development in the hydraulic relief valve, which corresponds to the pressure supplied to the flaps. Another interesting aspect, especially for electrical failures, is the electrical signal that commands the flaps to extend.

In the plots above, we can see how the pressure peaks correspond to the retraction and extension of the flaps, and how the electrical signal peaks when an extend flap command is issued from the pilot.

Now let’s examine the other scenarios and see how they compare to the nominal scenario.

**Scenario: Leaking Pipe**

In this failure scenario, the flaps have the same initial position as in the nominal case, but there is a leaking pipe in the flap subsystem. The leakage is injected by changing the value of the parameter `fm` for the pipe in question from 0 to 2.

The figures show a simulation of the system with a pipe leakage: to the left, the commanded flap angle and the actual flap angle; to the right, the pressure development in the hydraulic relief valve in subsystem hydraulicPower.

We can see that the leakage reduces system pressure, which in turn causes a reduction in cylinder force. The reduction leads to a slower flap movement, which, seen from a system perspective, degrades the response time. Such a scenario could potentially be dangerous if the pilot is in a situation where the flaps have to be moved quickly, for example, if the plane approaches the runway too fast or at a wrong angle.
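The qualitative effect is easy to reproduce in a toy model where the leak lowers usable pressure and therefore the achievable flap slew rate. The 50% pressure loss and the rates here are illustrative assumptions, not values from the *SystemModeler* model:

```python
def time_to_reach(target, rate, dt=0.01):
    """Seconds for a rate-limited actuator to move from 0 deg to `target` deg."""
    angle, t = 0.0, 0.0
    while angle < target:
        angle += rate * dt
        t += dt
    return t

nominal_rate = 5.0               # deg/s, assumed healthy slew rate
leak_rate = 0.5 * nominal_rate   # leak halves usable pressure (illustrative)

t_nominal = time_to_reach(30.0, nominal_rate)  # time to full landing flaps
t_leaking = time_to_reach(30.0, leak_rate)
```

With half the pressure, full flap extension takes twice as long, so a pilot needing flaps in a hurry would be caught short.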

**Scenario: A Broken Rod**

As previously mentioned, the flap subsystem contains a hydraulic cylinder that drives the movement of the flaps. In this scenario, let’s investigate what happens if the rod connected to the flaps is broken, which in the model means that no force can be transferred between the two ends of the rod.

In the two bottom plots, we can see that there is a pressure buildup in the system, and the electrical signal behaves as expected. Despite this, the flaps remain in the up position. Since the rod is broken, the cylinder cannot transfer any force at all to move the flaps (see upper right plot), independent of the flap switch command. In this case, the seemingly small component failure would actually lead to a longer LDR.

**Scenario: Mid-Air Electrical Failure**

Sometimes failures aren’t discovered until the plane is already in the air, and in such situations, it is even more important to be prepared and have safety procedures that can remedy any problems that might occur. Let’s for example explore the scenario when there is a mid-air electrical failure. The pilot failed to test the flaps before takeoff, and so the retraction command is first used in mid-air. The electrical failure occurs in the flap control valve where the up-solenoid has a pin-short. The shorted solenoid trips the circuit breaker in mid-air, causing the pilot to lose control over the flaps.

From the pilot’s point of view, the failure isn’t noticeable until he or she tries to retract the flaps from 10 to 0 degrees, since the extend function is still initially functional. However, the second the pilot tries to move the flaps back up, the short circuit to ground is no longer isolated from the circuit breaker, and all control over the flaps is lost with the unpleasant side effect of the plane suddenly needing a longer runway. Seems like a pretty bad position to be in, right? Actually, it doesn’t necessarily have to be, if we can use our modeling expertise to model and test a safety procedure that might help the situation.

**Scenario: Mid-Air Electrical Failure with a Safety Procedure**

It’s possible to use the Cessna model to come up with a safety procedure that makes it possible to land the plane safely despite the mid-air electrical failure. Such a procedure could, for instance, be to move the switch to landing and then reset the circuit breaker. When reset, it should be possible to directly move the flaps to landing position and land safely, even though the retract function is still malfunctioning. So let’s see if this maneuver does the trick.
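The logic of the failure and the safety procedure can be sketched as a tiny state machine. Everything below (names, structure, the way the breaker trips) is my own simplification of the behavior described above, not the actual Cessna circuit:

```python
class FlapCircuit:
    """Toy model of the flap electrics described above: a shorted
    up-solenoid trips the circuit breaker when retraction is commanded;
    resetting the breaker with the switch in landing position uses only
    the healthy extend path, so the flaps can still be extended."""

    def __init__(self, up_solenoid_shorted=False):
        self.shorted = up_solenoid_shorted
        self.breaker_closed = True
        self.flap_angle = 10.0  # takeoff position, still extended mid-air

    def command(self, target):
        if not self.breaker_closed:
            return  # breaker open: no electrical control at all
        if target < self.flap_angle and self.shorted:
            # Retraction energizes the shorted up-solenoid: breaker trips.
            self.breaker_closed = False
            return
        self.flap_angle = target  # extension uses the healthy solenoid

    def reset_breaker(self):
        self.breaker_closed = True

plane = FlapCircuit(up_solenoid_shorted=True)
plane.command(0)      # pilot tries to retract: breaker trips, flaps stay at 10
plane.reset_breaker() # safety procedure: switch to landing, reset the breaker
plane.command(30)     # extension still works, so the plane can land
```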

The figures show a simulation of the system with a mid-air electrical failure where the shorted solenoid triggers an emergency flap extension.

The shorted solenoid trips the circuit breaker in mid-air, causing the pilot to lose control over the flaps. The pilot then puts the switch in landing position, resets the circuit breaker, and manages to extend the flaps.

**What’s the Conclusion?**

So, what about my original question: Could a flap failure increase the LDR so much that the assigned runway becomes too short?

Judging from the failure analysis just performed, it does seem like a plausible scenario. If the increase in the LDR resulting from a flap failure (for example, the mid-air electrical failure discussed above) exceeds the runway margin, then that could potentially happen. However, the model has not been created in cooperation with Cessna, and assumptions have been made regarding, for example, the electrical design and parameter values. In other words, it’s not possible to guarantee that all aspects of the model are 100% accurate or complete. Still, it shows the potential of using modeling as a means of exploring different failure scenarios, how faults can be detected, and how to design safety procedures.

I used Wolfram *SystemModeler* to analyze faults after a particular sequence of commands, something that could be done during a test procedure, for example. Using the same principles, it’s possible to use *SystemModeler* to perform fault-code coverage analysis for systems with diagnostic trouble codes. I also tested a proposed safety procedure and saw how it interacts with and triggers different responses in the system as a whole; such tests have the potential to lead to a better understanding of the human-machine interaction.

If you want to try out some failure modeling yourself, or just get a feel for the tools that I have used in this blog, trial versions of both *SystemModeler* and *Mathematica* are available for download online. The Wolfram Hydraulic library, along with several other libraries from other domains, can also be explored and downloaded from Wolfram’s brand-new *SystemModeler* Library Store.

A dozen Wolfram experts and *Mathematica* developers came together at our headquarters—both in person and remotely via online connections—to take turns showing off new advances in usability, algorithmic functionality, and integration with the Wolfram Cloud. Presenters participated in a live Q&A with the online audience, and in turn were able to hear from *Mathematica* users and enthusiasts.

All presentations were recorded; videos of individual talks, complete with presentation notebooks, as well as the full event are now available.

Compose a tweet-length Wolfram Language program, and tweet it to @WolframTaP. Our Twitter bot will run your program in the Wolfram Cloud and tweet back the result.

One can do a lot with Wolfram Language programs that fit in a tweet. Like here’s a 78-character program that generates a color cube made of spheres:

It’s easy to make interesting patterns:

Here’s a 44-character program that seems to express itself like an executable poem:

Going even shorter, here’s a little “fractal hack”, in just 36 characters:

Putting in some math makes it easy to get all sorts of elaborate structures and patterns:

You don’t have to make pictures. Here, for instance, are the first 1000 digits of π, sized according to their magnitudes (notice that run of 9s!):

The Wolfram Language not only knows how to compute π, as well as a zillion other algorithms; it also has a huge amount of built-in knowledge about the real world. So right in the language, you can talk about movies or countries or chemicals or whatever. And here’s a 78-character program that makes a collage of the flags of Europe, sized according to country population:

We can make this even shorter if we use some free-form natural language in the program. In a typical Wolfram notebook interface, you do this using Ctrl+=, but in Tweet-a-Program, you can do it just using =[...]:

The Wolfram Language knows a lot about geography. Here’s a program that makes a “powers of 10” sequence of disks, centered on the Eiffel Tower:

There are many, many kinds of real-world knowledge built into the Wolfram Language, including some pretty obscure ones. Here’s a map of all the shipwrecks it knows in the Atlantic:

The Wolfram Language deals with images too. Here’s a program that gets images of the planets, then randomly scrambles their colors to give them a more exotic look:

Here’s an image of me, repeatedly edge-detected:

Or, for something more “pop culture” (and ready for image analysis etc.), here’s an array of random movie posters:

The Wolfram Language does really well with words and text too. Like here’s a program that generates an “infographic” showing the relative frequencies of first letters for words in English and in Spanish:

And here—just fitting in a tweet—is a program that computes a smoothed estimate of the frequencies of “Alice” and “Queen” going through the text of *Alice in Wonderland*:

Networks are good fodder for Tweet-a-Program too. Like here’s a program that generates a sequence of networks:

And here—just below the tweet length limit—is a program that generates a random cloud of polyhedra:

What’s the shortest “interesting program” in the Wolfram Language?

In some languages, it might be a “quine”—a program that outputs its own code. But in the Wolfram Language, quines are completely trivial. Since everything is symbolic, all it takes to make a quine is a single character:
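For contrast with that single-character Wolfram Language quine, here is a classic quine in a conventional language (Python), held in a string so we can run it and check that its output really is its own source:

```python
import io
from contextlib import redirect_stdout

# A classic two-line Python quine: a format string that prints
# itself with its own repr substituted in.
quine = "s = 's = %r\\nprint(s %% s)'\nprint(s % s)"

buf = io.StringIO()
with redirect_stdout(buf):
    exec(quine)          # run the quine, capturing what it prints

output = buf.getvalue().rstrip("\n")
```

Even this minimal version takes real work to get right, which is exactly the point: in a symbolic language, self-reproduction comes for free.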

Using the built-in knowledge in the Wolfram Language, you can make some very short programs with interesting output. Like here’s a 15-character program that generates an image from built-in data about knots:

Some short programs are very easy to understand:

It’s fun to make short “mystery” programs. What’s this one doing?

Or this one?

Or, much more challengingly, this one:

I’ve actually spent many years of my life studying short programs and what they do—and building up a whole science of the computational universe, described in my big book *A New Kind of Science*. It all started more than three decades ago—with a computer experiment that I can now do with just a single tweet:

My all-time favorite discovery is tweetable too:

If you go out searching in the computational universe, it’s easy to find all sorts of amazing things:

An ultimate question is whether somewhere out there in the computational universe there is a program that represents our whole physical universe. And is that program short enough to be tweetable in the Wolfram Language?

But regardless of this, we already know that the Wolfram Language lets us write amazing tweetable programs about an incredible diversity of things. It’s taken more than a quarter of a century to build the huge tower of knowledge and automation that’s now in the Wolfram Language. But this richness is what makes it possible to express so much in the space of a tweet.

In the past, only ordinary human languages were rich enough to be meaningfully used for tweeting. But what’s exciting now is that it seems like the Wolfram Language has passed a kind of threshold of general expressiveness that lets it, too, be meaningfully tweetable. For like ordinary human languages, it can talk about all sorts of things, and represent all sorts of ideas. But there’s also something else about it: unlike ordinary human languages, everything in it always has a precisely defined meaning—and what you write is not just readable, but also runnable.

Tweets in an ordinary human language are (presumably) intended to have some effect on the mind of whoever reads them. But the effect may be different on different minds, and it’s usually hard to know exactly what it is. But tweets in the Wolfram Language have a well-defined effect—which you see when they’re run.

It’s interesting to compare the Wolfram Language to ordinary human languages. An ordinary language, like English, has a few tens of thousands of reasonably common “built-in” words, excluding proper names etc. The Wolfram Language has about 5000 built-in named objects, excluding constructs like entities specified by proper names.

And one thing that’s important about the Wolfram Language—that it shares with ordinary human languages—is that it’s not only writable by humans, but also readable by them. There’s vocabulary to acquire, and there are a few principles to learn—but it doesn’t take long before, as a human, one can start to understand typical Wolfram Language programs.

Sometimes it’s fairly easy to give at least a rough translation (or “explanation”) of a Wolfram Language program in ordinary human language. But it’s very common for a Wolfram Language program to express something that’s quite difficult to communicate—at least at all succinctly—in ordinary human language. And inevitably this means that there are things that are easy to think about in the Wolfram Language, but difficult to think about in ordinary human language.

Just like with an ordinary language, there are language arts for the Wolfram Language. There’s reading and comprehension. And there’s writing and composition. Always with lots of ways to express something, but now with a precise notion of correctness, as well as all sorts of measures like speed of execution.

And like with ordinary human language, there’s also the matter of elegance. One can look at both meaning and presentation. And one can think of distilling the essence of things to create a kind of “code poetry”.

When I first came up with Tweet-a-Program it seemed mostly like a neat hack. But what I’ve realized is that it’s actually a window into a new kind of expression—and a form of communication that humans and computers can share.

Of course, it’s also intended to be fun. And certainly for me there’s great satisfaction in creating a tiny, elegant gem of a program that produces something amazing.

And now I’m excited to see what everyone will do with it. What kinds of things will be created? What popular “code postcards” will there be? Who will be inspired to code? What puzzles will be posed and solved? What competitions will be defined and won? And what great code artists and code poets will emerge?

Now that we have tweetable programs, let’s go find what’s possible…

*To develop and test programs for Tweet-a-Program, you can log in free to the Wolfram Programming Cloud, or use any other Wolfram Language system, on the desktop or in the cloud. Check out some details here.*

In the past, using *Mathematica* has always involved first installing software on your computer. But as of today that’s no longer true. Instead, all you have to do is point a web browser at *Mathematica* Online, then log in, and immediately you can start to use *Mathematica*—with zero configuration.

Here’s what it looks like:

It’s a notebook interface, just like on the desktop. You interactively build up a computable document, mixing text, code, graphics, and so on—with inputs you can immediately run, hierarchies of cells, and even things like Manipulate. It’s taken a lot of effort, but we’ve been able to implement almost all the major features of the standard *Mathematica* notebook interface purely in a web browser—extending CDF (Computable Document Format) to the cloud.

There are some tradeoffs of course. For example, Manipulate can’t be as zippy in the cloud as it is on the desktop, because it has to run across the network. But because its Cloud CDF interface is running directly in the web browser, it can immediately be embedded in any web page, without any plugin, like right here:

Another huge feature of *Mathematica* Online is that because your files are stored in the cloud, you can immediately access them from anywhere. You can also easily collaborate: all you have to do is set permissions on the files so your collaborators can access them. Or, for example, in a class, a professor can create notebooks in the cloud that are set so each student gets their own active copy to work with—that they can then email or share back to the professor.

And since *Mathematica* Online runs purely through a web browser, it immediately works on mobile devices too. Even better, there’s soon going to be a Wolfram Cloud app that provides a native interface to *Mathematica* Online, both on tablets like the iPad, and on phones:

There are lots of great things about *Mathematica* Online. There are also lots of great things about traditional desktop *Mathematica*. And I, for one, expect routinely to use both of them.

They fit together really well. Because from *Mathematica* Online there’s a single button that “peels off” a notebook to run on the desktop. And within desktop *Mathematica*, you can seamlessly access notebooks and other files that are stored in the cloud.

If you have desktop *Mathematica* installed on your machine, by all means use it. But get *Mathematica* Online too (which is easy to do—through Premier Service Plus for individuals, or a site license add-on). And then use the Wolfram Cloud to store your files, so you can access and compute with them from anywhere with *Mathematica* Online. And so you can also immediately share them with anyone you want.

By the way, when you run notebooks in the cloud, there are some extra web-related features you get—like being able to embed inside a notebook other web pages, or videos, or actually absolutely any HTML code.

*Mathematica* Online is initially set up to run—and store content—in our main Wolfram Cloud. But it’ll soon also be possible to get a Wolfram Private Cloud—so you operate entirely in your own infrastructure, and for example let people in your organization access *Mathematica* Online without ever using the public web.

A few weeks ago we launched the Wolfram Programming Cloud—our very first full product based on the Wolfram Language, and Wolfram Cloud technology. *Mathematica* Online is our second product based on this technology stack.

The Wolfram Programming Cloud is focused on creating deployable cloud software. *Mathematica* Online is instead focused on providing a lightweight web-based version of the traditional *Mathematica* experience. Over the next few months, we’re going to be releasing a sequence of other products based on the same technology stack, including the Wolfram Discovery Platform (providing unlimited access to the Wolfram Knowledgebase for R&D) and the Wolfram Data Science Platform (providing a complete data-source-to-reports data science workflow).

One of my goals since the beginning of *Mathematica* more than a quarter century ago has been to make the system as widely accessible as possible. And it’s exciting today to be able to take another major new step in that direction—making *Mathematica* immediately accessible to anyone with a web browser.

There’ll be many applications. From allowing remote access for existing *Mathematica* users. To supporting mobile workers. To making it easy to administer *Mathematica* for project-based users, or on public-access computers. As well as providing a smooth new workflow for group collaboration and for digital classrooms.

But for me right now it’s just so neat to be able to see all the power of *Mathematica* immediately accessible through a plain old web browser—on a computer or even a phone.

And all you need do is go to the *Mathematica* Online website…

Throughout the two weeks, students learned the Wolfram Language from a variety of faculty and guest speakers. They had the opportunity to hear Stephen Wolfram speak about the company, the Wolfram Cloud, and all of the other exciting products to come in the near future. Students also heard from guest speakers such as Vitaliy Kaurov and Christopher Wolfram, who showed off other aspects of the Wolfram Language.

Although students spent a vast amount of time hard at work on their projects, they also had many laughs throughout the program. They participated in group activities such as the human knot, the Spikey photo scavenger hunt, and the toothpick and gumball building contest, as well as weekend field trips to the Boston Museum of Science and laser tag.

Every year, I am thoroughly impressed by the projects students complete within two short weeks; of course this wouldn’t be possible without our talented faculty and awesome students!

Let’s check out some projects from this year:

Single- and Multibank Engines Using Four-Stroke Cycle, by Alec Nelson

Law of Moments for Lever with Two Weights, by Mariam Martirosyan

And here are some more to check out on the Wolfram Demonstrations Project:

- Color Schemes, by Katie Borg
- Scuderi Split Cycle Engine, by Ray Yuan
- Coulomb’s Law for Three Aligned Charges, by Eduard Mihranyan

You can check out all of the projects created at *Mathematica* Summer Camp. To find out more information about *Mathematica* Summer Camp 2015, check out our website.

In college a few years later, I spent some hours trying, and failing, to find any knight’s tour, using pencil and paper in various systematic and haphazard ways. And for no particular reason, this memory came to me while I was driving to work today, along with the realization that the problem can be reduced to finding a Hamiltonian cycle—a closed path that visits every vertex—of the graph of possible knight moves. Something that is easy to do in *Mathematica*. Here is how.

The first thing that we have to do is construct the graph of knight moves. A knight can move to eight possible squares in the open, but as few as two in the corners. But if you ignore that and think of when you were taught chess, you were probably told that a knight could move along two and across one, or vice versa. This gives the knight the property that it always moves a `EuclideanDistance` of √5. We can use this test to construct the edges of our graph from the list of all possible pairs of positions.

Sorting removes unhelpful multiple edges that just slow the computation down. Ignoring chess conventions, I choose *x* and *y* to both take values 1–8 (rather than *x* being from A to H).
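The construction translates directly into other languages too: connect two squares whenever their squared Euclidean distance is 5. Here is a hedged Python sketch of the same graph, with pairs enumerated once so no duplicate edges arise:

```python
from itertools import combinations

# Squares are (x, y) with x, y in 1..8; a knight move is exactly a
# squared Euclidean distance of 5 (two along, one across).
squares = [(x, y) for x in range(1, 9) for y in range(1, 9)]

def knight_edges(squares):
    """All undirected knight-move edges, each pair listed once."""
    return [(a, b) for a, b in combinations(squares, 2)
            if (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 == 5]

edges = knight_edges(squares)

# Vertex degrees: corners can make only 2 moves, central squares all 8.
degree = {s: 0 for s in squares}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
```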

Now that we have the graph, we can find a tour.
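Without `FindHamiltonianCycle`, a tour can still be found quickly with a Warnsdorff-ordered backtracking search. Note that this sketch finds an *open* tour (a Hamiltonian path) rather than the closed tour the graph-theory approach returns:

```python
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def neighbors(sq, visited):
    """Unvisited squares a knight can reach from sq on an 8x8 board."""
    x, y = sq
    return [(x + dx, y + dy) for dx, dy in MOVES
            if 1 <= x + dx <= 8 and 1 <= y + dy <= 8
            and (x + dx, y + dy) not in visited]

def knights_tour(start=(1, 1)):
    """Open knight's tour by backtracking, trying the neighbor with the
    fewest onward moves first (Warnsdorff's heuristic)."""
    path, visited = [start], {start}

    def extend():
        if len(path) == 64:
            return True
        for nxt in sorted(neighbors(path[-1], visited),
                          key=lambda s: len(neighbors(s, visited))):
            path.append(nxt)
            visited.add(nxt)
            if extend():
                return True
            path.pop()
            visited.remove(nxt)
        return False

    return path if extend() else None

tour = knights_tour()
```

The heuristic ordering makes the search essentially instant on the standard board, which hints at why tour finding scales so poorly without it.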

Which looks very nice visualized on a chess board.

I understand that this problem is often used as an exercise in computer science courses, and you can find some iterative solutions and specific solutions at demonstrations.wolfram.com. But at four lines, this graph theory solution isn’t too much of a challenge. Indeed, it seemed a little unsatisfying to stop here.

Giving `FindHamiltonianCycle` a second argument finds more than one tour. But scrolling through a notebook of the first 1,000 didn’t reveal anything interesting. And the properties of differently sized chess boards have been well studied. So I decided to extend the idea in a slightly more whimsical way.

Oddly, all the definitions of a knight’s tour that I found on the web assume a rectangular space. Indeed, *Mathematica* contains a built-in function `KnightTourGraph`, which does roughly the same as my `knightGraph` function, but assumes a rectangular grid. But if I bring in *Mathematica*'s image processing, it is easy to generate the knight’s tour graph over points in non-rectangular shapes, like this:

First I `Binarize` the pixels of some source image. The knight is allowed to visit the black pixels in the result, but not the white. Then I get the coordinates of the black pixels and map them to the coordinate space of my plotting function.

Now I just solve and plot that graph as before, but this time visualize with the black pixels, instead of a chess-board grid.

It turns out that most irregular shapes do not have a closed knight’s tour, but messing around with the image size can find exceptions. So a bit of search code is needed. This one takes a single character, rasterizes it at different font sizes, and returns the size for which a tour exists. The `Monitor` part lets you watch the progress; tour finding scales poorly and can get very slow if you let the font sizes grow too big.

Let’s start by searching simple filled circles.

So out of the 42 sizes tested, only these 5 have solutions. The smaller, more crowded font sizes produce more irregular tour paths.

But larger image sizes make the shape more recognizable.

Playing around with other characters is amusing for a while. Here is the letter “o” in 34 point:

An asterisk at 52 point:

And since chess puts us firmly in the “games” space, here are some playing card suits:

And finally my rendition of Pacman:

Feel free to share any interesting discoveries in the comments section.

Download this post as a Computable Document Format (CDF) file.

If you want to follow along, you can download a trial of *SystemModeler*. It’s also available with a student license, or you can buy a home-use license.

Let’s start with the simplest electrical circuit I can think of:

I named it `HelloSystemModeler` in the tradition of computer language tutorials, where the first simple program usually starts with a greeting like “Hello, world.” This circuit only contains three components: a resistor, a voltage source (like an ideal battery), and a ground. To create this circuit, I used drag-and-drop from the built-in components that come with *SystemModeler*.

This circuit is something that you would build in middle school (at least I did), with the resistor replaced with a small light bulb. You’d also learn Ohm’s law, stating that *I = V/R*, so with a potential of 10 volts and a resistance of 5 ohms, we get a current of 2 amperes. Let’s see how *SystemModeler* handles this.
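
As a trivially small sanity check of that arithmetic in Python (a sketch, not the *SystemModeler* simulation itself):

```python
def ohms_law_current(voltage, resistance):
    """Ohm's law: I = V / R."""
    return voltage / resistance

# 10 volts across 5 ohms gives 2 amperes, matching the circuit above
current = ohms_law_current(10.0, 5.0)
```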

Simulate the circuit using `WSMSimulate`:

I can now query the simulation object for all variables and parameters. Let’s look at the current in the resistor:

Now let’s go for something a bit more interesting and usable, say, an RC circuit. The name refers to the two components in the circuit: a resistor (R) and a capacitor (C). Together they act as a low-pass filter that will filter out the high-frequency components of a signal.
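
To see why the RC combination acts as a low-pass filter, here is a rough Python sketch that integrates the filter equation with forward Euler; the component values and signal frequencies are made up for illustration:

```python
import math

def rc_lowpass(signal, dt, R, C):
    """Forward-Euler integration of the RC filter equation
    C * dVout/dt = (Vin - Vout) / R."""
    vout, out = 0.0, []
    for vin in signal:
        vout += dt * (vin - vout) / (R * C)
        out.append(vout)
    return out

# a slow 10 Hz signal with fast 1 kHz "noise" on top
dt = 1e-5
t = [i * dt for i in range(20000)]
vin = [math.sin(2 * math.pi * 10 * s)
       + 0.5 * math.sin(2 * math.pi * 1000 * s) for s in t]
vout = rc_lowpass(vin, dt, R=1000.0, C=1e-5)
```

With R = 1 kΩ and C = 10 µF the cutoff is 1/(2πRC) ≈ 16 Hz, so the 10 Hz component passes while the 1 kHz noise is strongly damped.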

The capacitor and resistor components are defined directly from their constitutive equations: for a capacitor, we in general have the equation *I(t) = C dV(t)/dt*, and if we look at the model for the capacitor, we see exactly this equation popping up:

The code above is written in the object-oriented modeling language Modelica, which *SystemModeler* implements. Not shown here is the annotation in the code that specifies how the component should look when used in the graphical user interface.

The same way of defining components from equations works for the resistor, inductor, and so on. This brings a very powerful concept to the modeling table, namely, to be able to define components and let the computer figure out a form of the complete system that can be simulated. Less burden on the modeler and more work for the computer. If I didn’t have such an acausal modeling environment, I’d have to derive the complete relationship from input to output, and then implement that function. That would look something like this:

Not only has the real-world resemblance completely disappeared, but it’s also hard to reuse this if I want to change the structure and add an extra component, say an inductor, to it. I would have to redo all my work, whereas in *SystemModeler* I can simply connect it where I want it, and the system will regenerate the simulation equations.

Returning to the RC circuit, in the diagram below, the filter is connected to the power sources and a load resistor.

Next I run a simulation with this model and can directly see the result of the filter. In the diagram below, the high-frequency signal is damped.

Another way to see this is to look at the frequency spectrum of these signals. First I sample the signals:

Then I can look at their spectra by using the discrete Fourier transform:

Here I can see the peak at 600 Hz for both signals, and for the input signal there’s a second peak at 30,000 Hz, where the noise is added.
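
The same spectral picture can be sketched in Python with NumPy's FFT; the sampling rate and amplitudes below are assumptions chosen to mirror the 600 Hz signal and 30,000 Hz noise described above:

```python
import numpy as np

fs = 100_000                     # assumed sampling rate in Hz
t = np.arange(0, 0.05, 1 / fs)   # 5000 samples
# input: 600 Hz signal plus 30 kHz "noise", as in the circuit above
signal = np.sin(2 * np.pi * 600 * t) + 0.3 * np.sin(2 * np.pi * 30_000 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# the two largest peaks should sit at 600 Hz and 30 kHz
top_two = freqs[np.argsort(spectrum)[-2:]]
```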

There are many ways to improve this filter, and one of them is to add an amplifier to the circuit. Let’s take a look first at an inverting amplifier circuit, which happens to be one of the first lab exercises I did at university.

The operational amplifier in the center of the next figure will amplify the voltage at the negative pin by a factor of *resistor1/resistor = 30000/10000 = 3*, as long as the result stays within the limits set by the two constant voltages I’ve connected to the circuit. If the result falls outside those limits, the output saturates at the limit. The constant voltages are 25 volts in this case.

Simulating this circuit, we can see exactly how that plays out:

The voltage reaches 25 V, where the limit for the operational amplifier is, and saturates there.
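
An idealized model of this behavior is easy to write down; the following Python sketch (my simplification, not the *SystemModeler* component) applies the inverting gain and clips the output at the supply limits:

```python
def inverting_amp(vin, r_in=10_000.0, r_feedback=30_000.0, v_limit=25.0):
    """Ideal inverting op-amp: gain -r_feedback/r_in, with the output
    clipped to the supply limits of +/- v_limit volts."""
    vout = -(r_feedback / r_in) * vin
    return max(-v_limit, min(v_limit, vout))

# gain of -3 inside the limits, saturation outside
small = inverting_amp(2.0)    # stays in the linear region
large = inverting_amp(20.0)   # would be -60 V, clips at -25 V
```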

There are many applications for operational amplifiers, such as amplifiers, regulators, and active filters. Let’s now look at an active low-pass filter, similar to our first RC filter, but with the operational amplifier added:

An interesting application made possible by *SystemModeler* and *Mathematica* being symbolic tools is the possibility to derive analytical expressions for the transfer functions of circuits. To do that I’ll first get a linearized state-space model:

The state-space model is a set of ordinary differential equations (ODEs) that relate the input *Vi* to the output *Vo*. The variables of the ODEs are `capacitor1.v` and `capacitor2.v`, which are the states of the state-space model.

It’s possible to directly convert the state-space model to a transfer function, displaying the relationship between input voltage *Vi* and output voltage *Vo*:
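
The conversion rests on the standard formula *H(s) = C (sI - A)^-1 B + D*. Here is a Python sketch evaluating it numerically for a first-order RC model with RC = 1 (a stand-in example, not the linearized circuit above):

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate H(s) = C (sI - A)^-1 B + D for a state-space model."""
    n = A.shape[0]
    return (C @ np.linalg.solve(s * np.eye(n) - A, B) + D).item()

# first-order RC filter with R*C = 1: dv/dt = -v + vin, vout = v
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

dc_gain = transfer_function(A, B, C, D, s=0.0)          # unity DC gain
cutoff_mag = abs(transfer_function(A, B, C, D, s=1j))   # -3 dB point
```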

Or display its Bode plot to show the cut-off frequency:

The cut-off frequency is where the magnitude of the output voltage is 3 dB lower than the input voltage. In the above plot, one can see that the frequency is around 0.7 rad/s by moving the slider.

So far, so good.

Now I’m going to look at an example where I don’t know how to directly apply Ohm’s law. Rectification is the process of taking an alternating current and turning it into a direct current. This is what happens in your laptop charger, which takes the AC supplied from the wall socket and converts it into DC that is usable in your computer.

To do that, a full-wave rectifier is usually used, with a so-called diode bridge:

The diodes in the bridge will “fire” so that a positive current will always flow into the positive pin on the DC side.

Next I connect this to a power source and a load:

Running a simulation, we can now see how this full-wave rectifier converts the alternating current (blue) into a direct current (red):

As can be seen, there is a lot of ripple in this circuit. To smooth it out, a capacitor can be applied across the DC bus, like so:
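
A crude Python model shows the smoothing effect; it treats the bridge as an ideal absolute-value rectifier and lets the capacitor discharge through the load (all component values are made up for illustration):

```python
import math

def rectify_and_smooth(freq=60.0, v_peak=10.0, r_load=100.0, cap=0.01,
                       dt=1e-5, cycles=10):
    """Ideal full-wave rectifier feeding a capacitor across a load.
    The capacitor charges whenever |vin| exceeds vout and otherwise
    discharges exponentially through the load resistor."""
    vout, out = 0.0, []
    steps = int(cycles / freq / dt)
    for i in range(steps):
        vin = abs(v_peak * math.sin(2 * math.pi * freq * i * dt))
        if vin > vout:
            vout = vin                           # diode conducts, cap charges
        else:
            vout -= dt * vout / (r_load * cap)   # cap discharges into load
        out.append(vout)
    return out

# a bigger capacitor leaves less ripple on the DC bus
tail_small = rectify_and_smooth(cap=0.001)[-2000:]
tail_large = rectify_and_smooth(cap=0.010)[-2000:]
ripple_small = max(tail_small) - min(tail_small)
ripple_large = max(tail_large) - min(tail_large)
```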

Simulating with three different values for the capacitance of the capacitor, we get:

I’ll pick one of these capacitor values, say 10 mF, and save the corresponding simulation in a separate variable:

And as we can see, the behavior is smoother. How much ripple is there? Instead of looking at values in the plot and calculating it with my calculator, *Mathematica* can do the work for me by finding the maximum and minimum values in a cycle:

Now that I have them, I can go on to mark them in a plot:

Or just calculate the ripple as a percentage of the mean signal value:
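
The ripple calculation itself is just a max/min/mean computation; here it is as a small Python helper, applied to made-up samples hovering around 10 V:

```python
def ripple_percent(samples):
    """Peak-to-peak ripple as a percentage of the mean signal value."""
    mean = sum(samples) / len(samples)
    return 100.0 * (max(samples) - min(samples)) / mean

# one cycle of a smoothed DC bus (illustrative values, not the simulation)
cycle = [10.35, 10.1, 9.9, 9.65, 9.9, 10.1]
pct = ripple_percent(cycle)
```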

OK, so that’s about a 7% ripple. We can improve this by adding an inductor on the DC bus side together with the capacitor.

Another interesting analysis task is to look at the efficiency of the conversion. For that task, I connect power sensors to the AC and DC sides.

In a separate simulation, I integrate the power consumed on both sides and look at the quotient:

As we can see, the efficiency at first is low, when the capacitor is charging. But later the efficiency is very high.

I’ll end this blog post by showing how these electrical components in turn can be connected to other domains. One such example is an electrical motor. So what I’ll do is connect our rectifier to a DC motor and some mechanical components, including a torque opposite of the motor’s that kicks in 10 seconds after the motor starts.

The load connected to the motor will spin up and find a steady state value around 10 rad/s. After 10 seconds, a torque (the component on the right in the above diagram) with the opposite direction from the DC motor’s torque starts to turn, and the load is slowed down.

With the new release of *SystemModeler* 4, even more use cases are possible, including using the digital library to construct AD/DA converters and other digital circuits or quasi-stationary circuits for fast large analog circuit simulations. You can also create models directly from equations in *Mathematica*.

*SystemModeler* can give you a leg up in your next course or project. Keep an eye on the Wolfram Blog for further posts in this series.

Download this post, as a Computable Document Format (CDF) file, and its accompanying models.

]]>Disclaimer: you may read, carry out, and modify inputs in this blog post independent of your age. Hands-on taste tests might require a certain minimal legal age (check your country’s and state’s laws).

We start by importing two images from Wikipedia to set the theme; later we will use them on maps.

We will restrict our analysis to the lower 48 states. We get the polygon of the US and its latitude/longitude boundaries for repeated use in the following.

And we define a function that tests if a point lies within the continental US.
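
A standard way to implement such a test is ray casting; this Python sketch (with a unit square standing in for the US polygon) counts how often a horizontal ray from the point crosses the polygon boundary, an odd count meaning the point is inside:

```python
def point_in_polygon(point, polygon):
    """Ray-casting point-in-polygon test for a simple polygon given
    as a list of (x, y) vertices."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                 # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                      # crossing to the right
                inside = not inside
    return inside

# unit square as a stand-in for the continental US polygon
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```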

We start with beer. Let’s have a look at the yearly US beer production and consumption over the last few decades.

This production puts the US in second place, after China, on the world list of beer producers. (More details about the international beer economy can be found here.)

And here is a quick look at the worldwide per capita beer consumption.

The consumption of the leading 30 countries in natural units, kegs of beer:

Some countries prefer drinking wine (see here for a detailed discussion of this subject). The following graphic shows (on a logarithmic base 2 scale) the ratio of beer consumption to wine consumption. Negative logarithmic ratios mean a higher wine consumption compared to beer consumption. (See the American Association of Wine Economists’ working paper no. 79 for a detailed study of the correlation between wine and beer consumption with GDP, mean temperature, etc.)

We start with the beer breweries. To plot and analyze, we need a list of breweries. The Wolfram Knowledgebase contains data about a lot of companies, organizations, food, geographic regions, and global beer production and consumption. But breweries are not yet part of the Wolfram Knowledgebase. With some web searching, we can more or less straightforwardly find a web page with a listing of all US breweries. We then import the data about 2600+ beer breweries in the US as a structured dataset. This is an all-time high over the last 125 years. (For a complete list of historical breweries in the US, you can become a member of the American Breweriana Association and download their full database, which also covers long-closed breweries.)

Here are a few randomly selected entries from the dataset.

We see that for each brewery, we have their name, the city where they are located, their website URL, and their phone number (the BC, BP, and similar abbreviations indicate whether and what you can eat with your beer, which is irrelevant for today’s blog post).

Next, we process the data, remove breweries no longer in operation, and extract brewery names, addresses, and ZIP codes.

We now have data for 2600+ breweries.

For a geographic analysis, we resolve the ZIP codes to actual lat/long coordinates using the `EntityValue` function.

Unfortunately, not all ZIP codes were resolved to actual latitudes and longitudes. These are the ones where we did not successfully find a geographic location.

Why did we not find coordinates for these ZIP codes? As frequently happens with non-programmatically curated data, there are mistakes in the data, and so we will have to clean it up. The easiest way would be to simply ignore these breweries, but we can do better. These are the actual entries of the breweries with missing coordinates.

A quick check at the USPS website shows that, for instance, the first of the above ZIP codes, 54704, is not a ZIP code that the USPS recognizes and/or delivers mail to.

So no wonder the Wolfram Knowledgebase was not able to find a coordinate for this “ZIP code”. Fortunately, we can make progress in fixing the incorrect ZIP codes programmatically. Assume the nonexistent ZIP code was just a typo. Let’s find a ZIP code in Madison, WI that has a small string distance to the ZIP code 54704.

The ZIP code 53704 is the closest to 54704 in both string and Euclidean distance.

And taking a quick look at the company’s website confirms that 53704 is the correct ZIP code. This observation, together with the programmatic ZIP code lookups, allows us to define a function to programmatically correct the ZIP codes in case they are just simple typos.
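
The underlying idea can be sketched in Python: compute the Levenshtein distance between the bad ZIP code and the candidate codes for the city, and pick the closest one (the Madison ZIP list below is abbreviated for illustration):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def nearest_zip(bad_zip, candidates):
    """Pick the candidate ZIP with the smallest string distance."""
    return min(candidates, key=lambda z: edit_distance(bad_zip, z))

# a few Madison, WI ZIP codes (abbreviated list for illustration)
madison_zips = ["53703", "53704", "53705", "53711"]
fixed = nearest_zip("54704", madison_zips)
```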

For instance, for Black Market Brewing in Temecula, we find that the corrected ZIP code is 92590.

So, to clean the data, we perform some string replacements to get a dataset that has ZIP codes that exist.

We now acquire coordinates again for the corrected dataset.

Now we have coordinates for all breweries.

And all ZIP codes are now associated with a geographic position. (At least at the time this blog post was written; because the source website is updated regularly, new typos could appear later, and the `fixDataRules` would then have to be updated accordingly.)

Now that we have coordinates, we can make a map with all the breweries indicated.

Let’s pause for a moment and think about what goes into beer. According to the *Reinheitsgebot* from November 1487, it’s just malted barley, hops, and water (plus yeast). The detailed composition of water has an important influence on a beer’s taste. The water composition in turn relates to hydrogeology. (See this paper for a detailed discussion of the relation.) Carrying out a quick web search lets us find a site showing important natural springs in the US. We import the coordinates of the springs and plot them together with the breweries.

We redraw the last map, but this time add the natural springs in blue. Without trying to quantify the correlation here between breweries and springs, a visual correlation is clearly visible.

We quickly calculate a plot of the distribution of the distances of a brewery to the nearest spring from the list `springPositions`.

And if we connect each brewery to the nearest spring, we obtain the following graphic.

We can also have a quick look at which regions of the US can use their local barley and hops, as the Wolfram Knowledgebase knows in which US states these two plants can be grown.

(For the importance of spring water for whiskey, see this paper.) Most important for a beer’s taste is the hops (see this paper and this paper for more details). The α-acids of hops give the beer its bitter taste. The most commonly occurring α-acid in hops is humulone. (To refresh your chemistry knowledge, see the Step-by-step derivation for where to place the dots in the below diagram.)

But let’s not be sidetracked by chemistry and instead focus in this blog post on geographic aspects relating to beer.

Historically, a relationship has existed between beer production and the church (in the form of monasteries; see “A Comprehensive History of Beer Brewing” for details). Today we don’t see a correlation (other than through population densities) between religion and beer production. Just to confirm, let’s draw a map of major churches in the US together with the breweries. At the website of the Hartford Institute, we find a listing of major churches. (Yes, it would have been fun to really draw all 110,000+ churches of the US on a map, but the blog team did not want me to spend $80–$100 to buy a US church database and support spam-encouraging companies, e.g., from here or here.)

Back to the breweries. Instead of a cloud of points of individual breweries we can construct a continuous brewery probability field and plot it. This more prominently shows the hotspots of breweries in the US. To do so, we calculate a smooth kernel distribution for the brewery density in projected coordinates. We use the Sheather–Jones bandwidth estimator, which relieves us from needing to specify an explicit bandwidth. Determining the optimal bandwidth is a nontrivial calculation and will take a few minutes.

We plot the resulting distribution and map the resulting image onto a map of the US. Blue denotes a low brewery density and red a high one. Denver, Oregon, and Southern California clearly stand out as local hotspots.

The black points on top of the brewery density map are the actual brewery locations.

Using the brewery density as an elevation, we can plot the beer topography of the US. Previously unknown (beer-density) mountain ranges and peaks become visible in topographically flat areas.

The next graphic shows a map where we accumulate the brewery counts by latitude and longitude. Similar to the classic wheat belt, we see two beer belts running East to West and two beer belts running North to South.

Let’s determine the elevations of the breweries and make a histogram to see whether there is more interest in a locally grown beer at low or high elevations.

It seems that elevations between 500 and 1500 ft are most popular for places making a fresh cold barley pop (with an additional peak at around 5000 ft caused by the many breweries in the Denver region).

For further use, we summarize all relevant information about the breweries in `breweryData`.

We define some functions to find the nearest brewery and the distance to the nearest brewery.
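
A minimal Python version of such a lookup uses the haversine formula for geodesic distance; the brewery names and coordinates below are made up for illustration:

```python
import math

def haversine_miles(p1, p2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * math.asin(math.sqrt(a))   # Earth radius ~3958.8 mi

def nearest(point, places):
    """Return (name, distance in miles) of the place closest to point."""
    name, pos = min(places.items(),
                    key=lambda kv: haversine_miles(point, kv[1]))
    return name, haversine_miles(point, pos)

# hypothetical brewery coordinates for illustration
breweries = {"Brewery A": (40.11, -88.24),   # Champaign, IL area
             "Brewery B": (41.88, -87.63)}   # Chicago, IL area
champaign = (40.1164, -88.2434)
closest, dist = nearest(champaign, breweries)
```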

Here are the nearest breweries to the Wolfram headquarters in Champaign, IL.

And here is a plot of the distances from Champaign to all breweries, sorted by size. After accounting for the breweries in the immediate neighborhood of Champaign, for the first nearly 1000 miles we see a nearly linear increase in the number of breweries with a slope of approximately 2.1 breweries/mile.

Now that we know where to find a freshly brewed beer, let’s switch focus and concentrate on whiskey distilleries. Again, after some web searching we find a web page with a listing of all distilleries in the continental US. Again, we read in the data, this time in unstructured form, extract the distillery and cities named, and carry out some data cleanup as we go.

This time, we have the name of the distillery, their website, and the city as available data. Here are some example distilleries.

A quick check shows that we did a proper job in cleaning the data and now have locations for all distilleries.

We now have a list of about 500 distilleries.

We retrieve the elevations of the cities with distilleries.

The average elevation of a distillery does not deviate much from the one for breweries.

We summarize all relevant information about the distilleries in `distilleryData`.

We define functions to find the nearest distillery and the distance to the nearest distillery.

We now use the function `nearestDistilleries` to locate the nearest distillery and make a map of the bearings to take to go to the nearest distillery.

Let’s come back to breweries. What’s the distribution by state? Here are the states with the most breweries.

If we normalize by state population, we get the following ranking.

And which city has the most breweries? We accumulate the ZIP codes by city. Here are the top dozen cities by brewery count.

And here is a more visual representation of the top 25 brewery cities. We show a beer glass over the top brewery cities whose size is proportional to the number of breweries.

Oregon isn’t a very large state, and it includes beer capital Portland, so let’s plan a trip to visit all breweries. To minimize driving, we calculate the shortest tour that visits all of the state’s breweries. (All distances are along geodesics, not driving distances on roads.)

A visit to all Oregon breweries will be a 1,720-mile drive.

And here is a sketch of the shortest trips that hit all breweries for each of the lower 48 states.

Let’s quickly make a website that lets you plan a short beer tour through your state (and maybe some neighboring states). The function `makeShortestTourDisplay` calculates and visualizes the shortest path. For comparison, the length of a tour with the breweries chosen in random order is also shown. The shortest path often lets us save a factor of 5 to 15 in driving distance.
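
The post relies on *Mathematica*'s built-in tour finding; purely as an illustration of why ordering matters, here is a simple nearest-neighbor heuristic in Python on random planar points, compared against visiting the points in arbitrary order:

```python
import math
import random

def tour_length(points, order):
    """Length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(points):
    """Greedy heuristic: always drive to the closest unvisited stop."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda i: math.dist(points[last], points[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(50)]
greedy = nearest_neighbor_tour(pts)
greedy_len = tour_length(pts, greedy)
random_len = tour_length(pts, list(range(50)))   # stops in arbitrary order
```

The greedy tour is not optimal, but it is usually far shorter than a randomly ordered one.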

We deploy the function `makeShortestTourDisplay` to let you easily plan your favorite beer state tours.

And if the reader has time to take a year off work, a visit to all breweries in the continental US is just a 41,000-mile trip.

The collected caps from such a trip could make beautiful artwork! Here is a graphic showing one of the possible tours. The color along the tour changes continuously with the spectrum, and we start in the Northeast.

On average, we would have to drive just 15 miles between two breweries.

Here is a distribution of the distances.

Such a trip covering all breweries would involve driving nearly 300 miles up and down.

Here is a plot of the height profile along the trip.

We compare the all-brewery trip with the all-distillery trip, which is still about 21,000 miles.

To calculate the distribution function for the average distance from a US citizen to the nearest brewery and similar facts, we build a list of coordinates and the population of all ZIP code regions. We will only consider the part of the population that is older than 21 years. We retrieve this data for the ~30,000 ZIP codes.

We exclude the ZIP codes that are in Alaska, Hawaii, and Guam and concentrate on the 48 states of the continental US.

We will take into account adults from the ~29,000 populated ZIP code areas with a non-vanishing number of adults totaling about 214 million people.

Now that we have a function to calculate the distance to the nearest brewery at hand and a list of positions and populations for all ZIP codes, let’s do some elementary statistics using this data.

Here is a plot of the distribution of distances from all ZIP codes to the nearest brewery.

More than 32 million Americans have a local brewery within their own ZIP code region.

While ~15% of the above-drinking-age population is located in the same ZIP code as a brewery, this does not imply zero distance to the next brewery. As a rough estimation, we will model the distribution within a ZIP code as the distance between two random points. In the spirit of the famous spherical cow, we will approximate the shape of a ZIP code as a disk. Thus, we need the size distribution of the ZIP code areas.

The average distance between two randomly selected points from a disk is approximately the radius of the disk itself.
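
This claim is easy to check with a small Monte Carlo experiment in Python (the exact mean is 128r/(45π) ≈ 0.905r):

```python
import math
import random

def random_point_in_disk(radius):
    """Uniform random point in a disk via rejection sampling."""
    while True:
        x = random.uniform(-radius, radius)
        y = random.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            return x, y

random.seed(0)
radius = 1.0
n = 100_000
total = 0.0
for _ in range(n):
    p, q = random_point_in_disk(radius), random_point_in_disk(radius)
    total += math.dist(p, q)
mean_distance = total / n   # should come out near 0.905 * radius
```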

Within our crude model, we take the areas of the cities and calculate the radius of the corresponding disk. We could do a much more refined Monte Carlo model using the actual polygons of the ZIP code regions, but for the qualitative results that we are interested in, this would be overkill.

Now, with a more refined treatment of the same ZIP code data, on average, for a US citizen in the lower 48 states, the nearest brewery is still only about 13.5 miles away.

And, modulo a scale factor, the distribution of distances to the nearest brewery is the same as the distribution above.

Let’s redo the same calculation for the distilleries.

The weighted average distance to the nearest distillery is about 30 miles for the above-drinking-age customers of the lower 48 states.

And for about 1 in 7 Americans, the nearest distillery is closer than the nearest brewery.

We define a function that, for a given geographic position, calculates the distance to the nearest brewery and the nearest distillery.

For example, if you are at Mt. Rushmore, the nearest brewery is just 18 miles away, while the nearest distillery is nearly 160 miles away.

For some visualizations to be made below, we find, for a dense grid of points in the US, the distance to the nearest brewery and the nearest distillery. It will take 20 minutes to calculate these 320,000 distances, so we have time to visit the nearest espresso machine in the meantime.

So, how far away can the nearest brewery be from an adult US citizen (within the lower 48 states)? We calculate the maximal distance to a brewery.

We find that the city furthest away from a freshly brewed beer is Ely, Nevada, about 170 miles away.

And here is the maximal distance to a distillery. From Redford, Texas it is about 335 miles to the nearest distillery.

Of the inhabitants of these two cities, the people from Ely have “only” a 188-mile distance to a distillery, and the people from Redford are 54 miles from the nearest brewery.

After having found these extremal-distance cities, the next natural question is which city has the maximal distance to either a brewery or a distillery.

Let’s have a look at the situation in the middle of Kansas. The ~100 adult citizens of Manter, Kansas are quite far away from a local alcoholic drink.

And here is a detailed look at the breweries/distilleries situation near Manter.

Now that we have the detailed distances for a dense grid of points over the continental US, let’s visualize this data. First, we make plots showing the distance, where blue indicates small distances and red dangerously large distances.

Using these distance plots properly projected into the US yields a more natural-looking image.

And here is the corresponding image for distilleries. Note the clearly visible great Distillery Ridge mountain range between Eastern US distilleries and Western US distilleries.

For completeness, here is the maximum of either the distance to the nearest brewery or the distance to the nearest distillery.

And here is the equivalent 3D image with the distance to the next brewery or distillery shown as vertical elevation. We also use a typical elevation plot coloring scheme for this graphic.

We can also zoom into the Big Dry Badlands mountain range to the East of Denver as an equal-distance-to-freshly-made-alcoholic-drink contour plot. The regions with a distance larger than 100 miles to the nearest brewery or distillery are emphasized with a purple background.

Or, more explicit graphically, we can use the beer and whiskey images from earlier to show the regions that are closer to a brewery than to a distillery and vice versa. In the first image, the grayed-out regions are the ones where the nearest distillery is at a smaller distance than the nearest brewery. The second image shows regions where the nearest brewery is at a smaller distance than the nearest distillery in gray.

There are many more bells and whistles that we could add to these types of graphics. For instance, we could add some interactive elements to the above graphic that show details when hovering over the graphic.

Earlier in this blog post, we constructed an infographic about beer production and consumption in the US over the last few decades. After having analyzed distillery locations, a natural question is what role whiskey plays among all spirits. This paper analyzes the average alcohol content of spirits consumed in the US over a 50+ year time span at the level of US states. If you have a subscription, you can easily import the main findings of the study, which is Table 1.

Here is a snippet of the data. The average alcohol content of the spirits consumed decreased substantially from 1950 to 2000, mainly due to a decrease in whiskey consumption.

Here is a graphical representation of the data from 1950 to 2000.

So far we have concentrated on beer- and whiskey-related issues on a geographic scale. Let’s finish with some stats and infographics on the kinds of beer produced in the breweries mapped above. Again, after some web searching, we find a page that lists the many types of beer, 160+ different styles to be precise. (See also the *Handbook of Brewing* and the “Brewers Association 2014 Beer Style Guidelines” for a detailed discussion of beer styles.)

We again import the data. The web page is perfectly maintained and checked, so this time we do not have to carry out any data cleanup.

How much beer one can drink depends on the alcohol content. Here is the distribution of beer styles by alcohol content. Hover over the graph to see the beer styles in the individual bins.

Beer colors are defined on a special scale called *Standard Reference Method* (SRM). Here is a translation of the SRM values to RGB colors.

How do beer colors correlate with alcohol content and bitterness? The following graphic shows the parameter ranges for the 160+ beer styles. Again, hover over the graph to see the beer style categories highlighted.

In an interactive 3D version, we can easily restrict the color values.

After visualizing breweries in the US and analyzing the alcohol content of beer types, what about the distribution of the actual brewed beers within the US? After doing some web searching again, we can find a website that lists breweries and the beers they brew.

So, let’s read in the beer data from the site for 2,600 breweries. We start with preparing a list of the relevant web pages.

Next, we prepare for processing the individual pages.

As this will take a while, we can display the breweries, their beers, and a link to the brewery website to entertain us in the meantime. Here is an example of what to display while waiting.

Now we process the data for all breweries. Time for another cup of coffee. To have some entertainment while processing the beers of 2,000+ breweries, we again use `Monitor` to display the last-analyzed brewery and their beers. We also show a clickable link to the brewery website so that the reader can choose a beer of their liking.

Here is a typical data entry. We have the brewery name, its location, and, if available, the actual beers, their classification as Lager, Bock, Doppelbock, Stout, etc., together with their alcohol content.

Here is the distribution of the number of different beers made by the breweries. To get a feeling, we will quickly import some example images.

Concretely, more than 24,400 US-made beers were listed in the just-imported web pages.

Accumulating all beers gives the following cumulative distribution of the alcohol content.

On average, a US beer has an alcohol content (by volume) of (6.7±2.1)%.

If we tally up by beer type, we get the following distribution of types. India Pale Ale is the winner, followed by American Pale Ale.

Now let’s put the places where a Hefeweizen is freshly brewed on a map.

And here are some healthy breakfast beers with oatmeal or coffee (in the name).

For the carnivorous beer drinkers, there are plenty of options. Here are samples of beers with various mammals and fish in their name. (Using `Select[# & @@@ Flatten[Last /@ Take[brewerBeerDataUS, All], 2], DeleteCases[Interpreter["Species"][StringSplit[#]], _Failure] =!= {} &]`, we could get a complete list of all animal beers.)

What about the names of the individual beers? Here is the distribution of their (string) lengths. Hover over the columns to see the actual names.

Presume you plan a day trip up to 125 miles in radius (meaning not longer than about a two-hour drive in each direction). How many different beers and beer types would you encounter as a function of your starting location? Building a fast lookup for the breweries up to distance *d*, you can calculate these numbers for a dense set of points across the US and visualize the resulting data geographically. (For simplicity, we assume a spherical Earth for this calculation.)

In the best-case scenario, you can try about 80 different beer types realized through more than 2000 different individual beers within a 125-mile radius.

After so much work doing statistics on breweries, beer colors, beer names, etc., let’s have some real fun: let’s make some fun visualizations using the beers and logos of breweries.

Many of the brewery homepages show images of the beers that they make. Let’s import some of these and make a delicious beer (bottle, can, glass) collage.

We continue by making a reduced version of `brewerBeerDataUS` that contains the breweries and URLs by state.

Fortunately, many of the brewery websites have their logo on the front page, and in many cases the image has *logo* in the image filename. This means a possible automated way to get images of logos is to read in the front page of the web presences of the breweries.

We will restrict our logo searches to logos that are not too wide or too tall, because we want to use them inside graphics.

We also define a small list of special-case lookups, especially for states that have only a few breweries.

Now we are ready to carry out an automated search for brewery logos. To get some variety into the visualizations, we try to get about six different logos per state.

After removing duplicates (from breweries that brew in more than one state), we have about 240 images at hand.

A simple collage of brewery logos does not look too interesting.

So instead, let’s make some random and also symmetrized kaleidoscopic images of brewery logos. To do so, we will map the brewery logos onto the polygons of a radially symmetric arrangement. The function `kaleidoscopePolygons` generates such sets of polygons.
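A minimal version of such a function, reduced to a ring of triangular wedges (the blog’s actual `kaleidoscopePolygons` is more elaborate):

```wl
(* n triangular wedges arranged with n-fold rotational symmetry *)
kaleidoscopePolygons[n_Integer?Positive] :=
 Table[Polygon[{{0, 0},
    {Cos[2 Pi k/n], Sin[2 Pi k/n]},
    {Cos[2 Pi (k + 1)/n], Sin[2 Pi (k + 1)/n]}}], {k, 0, n - 1}]

Graphics[{EdgeForm[Gray], FaceForm[None], kaleidoscopePolygons[6]}]
```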

The next result shows two example sets of polygons with threefold and fourfold symmetry.

And here are two random beer logo kaleidoscopes.

Here are four symmetric beer logo kaleidoscopes of different rotational symmetry orders.

Or we could add brewery stickers onto the faces of the Wolfram|Alpha Spikey, the rhombic hexecontahedron. As the faces of a rhombic hexecontahedron are quadrilaterals, the images don’t have to be distorted very much.

Let’s end with randomly selecting a brewery logo for each state and mapping it onto the polygons of the state.

The next graphic shows some randomly selected logos from states in the Northeast.

And we finish with a brewery logo mapped onto each state of the continental US.

We will end here and leave the analysis of wineries for a future blog post. For a more detailed account of the distribution of breweries throughout the US over the last few hundred years, and for a variety of other beer-related geographical topics, I recommend the recent book *The Geography of Beer*, especially the chapter “Mapping United States Breweries 1612 to 2011”. For deciding whether a bottle of beer, a glass of wine, or a shot of whiskey is right for you, follow this flowchart.

Download this post as a Computable Document Format (CDF) file.


Even without the effect of tree branches, Mars is a difficult planetary target because it is small, and atmospheric turbulence can make photographing it a matter of capturing many frames and hoping for the best during those moments when the air stabilizes enough to provide a clear view.

Before the days of photographic imaging, astronomers would sketch Mars at the eyepiece of their telescope, waiting for moments of atmospheric clarity before adding to their drawings. Unfortunately, this technique sometimes resulted in features appearing in the drawings that did not actually exist on the planet.

Today the standard way to process planetary images like this is to take many short exposures, then normalize and stack the collection, weeding out the obviously bad frames. This process increases the signal-to-noise ratio of the final image.

I initially collected my images using software specific to my CCD camera. Here is the path to my collection of Mars exposures in the standard astronomical FITS format.

The first step was to import the collection into *Mathematica*. With `marsPath` set to a directory containing the images, I imported the entire set:
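A sketch of this step; the file pattern is an assumption about how the camera software names its output:

```wl
(* marsPath points to the directory of FITS frames *)
files = FileNames["*.fit*", marsPath];
frames = Import[#, "FITS"] & /@ files;
```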

All the images were now in an easy-to-manipulate list. The Wolfram Language is a powerful functional programming language, which allowed me to perform the image processing by mapping pure functions onto the elements of this list instead of writing looping code.

Each frame of my FITS files consisted of a sublist of three color channels: one each for red, green, and blue.

I needed to convert them to combined color first, which was done by just mapping the `ColorCombine` function onto each image in the list:
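In code, under the assumption that each element of `frames` is a list of three channel images:

```wl
(* combine the {r, g, b} channel images into one color image per frame *)
colorFrames = ColorCombine /@ frames;
```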

As Mars occupies only a small portion of the frame, I cropped each image to 200 pixels wide so that no time would be spent processing uninteresting pixels.
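A corresponding sketch, using a centered crop (a simplification that assumes Mars stays near the middle of the frame):

```wl
(* crop each color frame to a 200x200-pixel region around the center *)
cropped = ImageCrop[#, {200, 200}] & /@ colorFrames;
```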

Here is what the array of cropped images looked like:

As you can see, Mars was moving around a lot due to atmospheric turbulence, and because of that and the movement of the tree branches, I needed to align the images before combining them. Fortunately, *Mathematica* can do this automatically. Here I specified that only a translation transformation should be applied to each image; otherwise, `ImageAlign` might try to rotate the images if it got confused. The subsequent line filtered out all the cases where `ImageAlign` failed to find a successful transformation because the image was just too fuzzy.
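A sketch of both steps, assuming `cropped` holds the color frames and using the first frame as the alignment reference:

```wl
(* translation-only alignment of every frame against the first *)
aligned = ImageAlign[First[cropped], #,
    TransformationClass -> "Translation"] & /@ Rest[cropped];

(* discard frames where no transformation was found, keep the reference *)
aligned = Prepend[DeleteCases[aligned, $Failed], First[cropped]];
```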

The FITS images I collected were 16-bit images. They needed to be represented by a finer-grained data type before scaling and arithmetic were performed on them; otherwise, I would get banding, and fine detail would be washed out. To prevent this effect, I converted them to 64-bit real values.
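For example ("Real" is the Wolfram Language's 64-bit machine-real pixel type; `aligned` is the list of aligned frames):

```wl
(* convert 16-bit integer pixels to 64-bit machine reals *)
realFrames = Image[#, "Real"] & /@ aligned;
```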

This was the resulting array of aligned images:

Every time I image things like this, I need to eliminate the “clinker” images—ones that are too fuzzy to add to the stack. To sort them out, I created a table of the images I was going to use, initialized to `True`.

It was easy to spin up a quick UI to allow me to sort out the clinkers.

The `useTable` array could then be used to pull out the good images in one step.
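The whole selection flow might look like this sketch, with `Thumbnail`s and `Checkbox`es standing in for the blog's actual UI (`realFrames` is the assumed list of frames):

```wl
(* start with every frame marked as good *)
useTable = ConstantArray[True, Length[realFrames]];

(* a minimal checkbox UI for flagging the clinkers *)
Grid[Partition[
  Table[With[{i = i},
    Column[{Thumbnail[realFrames[[i]]],
      Checkbox[Dynamic[useTable[[i]]]]}]],
   {i, Length[realFrames]}], 6, 6, {1, 1}, {}]]

(* afterward, keep only the frames still marked True *)
good = Pick[realFrames, useTable];
```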

To combine the good images into a single image with an improved signal-to-noise ratio, I took their statistical median using the `Median` function.

Of course, it is completely trivial to get the `Mean` of the image list as well, and it is sometimes good to see how the `Mean` image compares to the `Median` image. Other statistical functions, like `TrimmedMean`, could also have been used here.
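Both combiners in code, assuming `good` is the list of selected frames:

```wl
medianImage = Median[good];  (* per-pixel median across the stack *)
meanImage = Mean[good];      (* per-pixel mean, for comparison *)
```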

The final adjustment sharpened the image and brought down the brightness while bringing up the contrast.
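A sketch of such an adjustment; the parameter values here are hypothetical and would be tuned by eye:

```wl
(* sharpen, then raise contrast (+0.4) and lower brightness (-0.2) *)
final = ImageAdjust[Sharpen[medianImage, 2], {0.4, -0.2}];
```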

Not bad, considering the quality of the initial frames.

*Mathematica* provided all the tools I needed to easily import, prepare, align, combine, and adjust fuzzy planetary images taken through tree branches in my back yard to yield a decent amateur image of Mars.

Download this post as a notebook (.nb) file or CDF (.cdf) file.

Download the compressed (.zip) accompanying Flexible Image Transport System (FITS) files (**Note:** This file is over 100 MB).