For those of you who are interested, Wolfram|Alpha possesses a wealth of sports stats so that you can get all the cold, hard facts about the Patriots and the Seahawks.

And if you can’t wait for Sunday to get your next football fix, or find yourself suffering withdrawal afterward, VICTIV is doing very cool things with the Wolfram Language to run a fantasy sports league. Earl Mitchell delves into the step-by-step process for new users on his blog, The Rotoquant.

But some of you are probably just plain old tired of all this “Deflatriots” business and of having your television occupied by football games, news, talking heads, and commercials from September through February, because after a while, the teams start to blur together. Fortunately, with the help of the Wolfram Language, you can pick your team out of the crowd using this `Graph` of NFL logos we created by pulling the images from our Wolfram Knowledgebase and using `Nearest` to organize them by graphical similarity.
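A sketch of how such a graph can be put together (the entity class and property names here are guesses, not our actual code):

```wolfram
(* Pull team logos from the Knowledgebase and connect each logo to its
   two visually nearest neighbors; class and property names are assumed *)
logos = DeleteMissing[
   EntityValue[EntityClass["SportsTeam", "NationalFootballLeague"], "Logo"]];
NearestNeighborGraph[logos, 2, DistanceFunction -> ImageDistance]
```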

If you’re one of those who are weary of all the football hoopla, then let us soothe your soul with a time-honored and longstanding tradition of cuteness: Animal Planet’s Puppy Bowl XI.

With celebrities such as Katty Furry performing in the halftime show, it promises to be the most adorable sporting event you’ll watch all year. The competition will be fierce, with 57 shelter-donated puppies—all up for adoption!—fighting for the honor of being the Bissell MVP (Most Valuable Puppy).

Past MVPs include Max and Abigail, both Jack Russell Terriers; the most recent MVP, Loren, was a Brittany, a breed absent from 2015’s lineup.

It’s not unlikely that one of the eight Labrador Retrievers will take home the prize for the first time ever. Again using the Wolfram Language, here’s the breakdown of Puppy Bowl breeds:
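A breakdown like that can be produced with an association of counts and `BarChart`; the numbers below are illustrative placeholders (only the eight Labradors are from the actual lineup):

```wolfram
(* Hypothetical roster tallies for illustration, summing to the 57 puppies *)
breedCounts = <|"Labrador Retriever" -> 8, "Beagle" -> 3,
   "Shepherd Mix" -> 5, "Terrier Mix" -> 4, "Other" -> 37|>;
BarChart[breedCounts, ChartLabels -> Placed[Keys[breedCounts], Below]]
```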

But who knows, one of those Beagles could come out of the end zone and snatch the victorious touchdown from right under their wet noses. Are you ready for some puppy ball?


This hackathon was a race to the finish, requiring all the creativity and innovation each team could muster. “Using Wolfram was a no-brainer,” said Walch. “We needed a fast way to do computations off the device, and the Wolfram Language had so much of the functionality we needed built in already: from image processing to computing Fourier coefficients. Making the app in 36 hours would not have been possible without it!”

According to the MHacks project profile, with the use of the Wolfram Language and Wolfram Programming Cloud, “our fabulous new iOS App takes any input image, converts it into a line drawing, and computes its Fourier series expansion using the fast Fourier transform (FFT). Draw Anything creates step by step drawing guides by truncating the Fourier series at different lengths.”
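As a rough sketch of that pipeline (not the team’s actual code, and glossing over the step of ordering edge pixels into a continuous contour, which Draw Anything must handle):

```wolfram
(* Edge-detect an image and treat edge-pixel positions as complex numbers *)
img = ExampleData[{"TestImage", "House"}];
pts = PixelValuePositions[EdgeDetect[img], 1];
signal = pts[[All, 1]] + I pts[[All, 2]];
coeffs = Fourier[signal];

(* Keep only k low-frequency coefficients (both ends of the spectrum)
   and invert; smaller k gives a coarser drawing guide *)
truncate[c_, k_] := Module[{n = Length[c]},
  InverseFourier[c Table[If[i <= k || i > n - k, 1, 0], {i, n}]]]

approx = truncate[coeffs, 50];
ListPlot[Transpose[{Re[approx], Im[approx]}]]
```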

“We ran our computations in the Wolfram Programming Cloud so that they would run quickly and efficiently, and so that the user would not experience any slowdown on their device,” said Jacobs. “I am relatively new to programming, but it was incredibly easy for me to pick up the language and use it. I’m really looking forward to coming up with new projects to code in the Wolfram Language!”

The designers also included a shout-out on their home page to Wolfram’s Michael Trott for his blog post that inspired the creation of the Draw Anything app.

At MHacks V, which was hosted by the University of Michigan and in part sponsored by Wolfram Research, teams of up to 4 members completed submissions that were judged on the usefulness, originality, technical difficulty, and design of their hacks. Including the winning hack, a total of 14 teams worked on projects involving Wolfram technologies.

The creator of one of those, WolfWeather, had this to say about using Wolfram tech: “…the language itself is something out of a science fiction movie being able to perform one hundred lines of code in two or three lines of code. I wanted to do something simple and fun, so I created WolfWeather. Its goals are straightforward: it gives users current weather updates, the time, date, weekday, the Zodiac year, and their GPS location. It also promotes the Wolfram Language and shows off a bit of the sheer power the language has as a knowledge base.”

*The Michigan Daily*’s article on the event includes a brief interview with Jacobs and Walch, who revealed that they plan to continue developing Draw Anything and will be attending future hackathons, including TreeHacks and Seoul Global Hackathon.

Congratulations to team Draw Anything and all participants, and thank you, MHacks, for another unforgettable hackathon!

Got a hackathon coming up? Contact Wolfram to request our participation, or check out the resources on our hackathon page.

Jacob Bernoulli was the first mathematician in the Bernoulli family, which produced many notable mathematicians of the seventeenth and eighteenth centuries.

Jacob Bernoulli’s mathematical legacy is rich. He introduced Bernoulli numbers, solved the Bernoulli differential equation, studied the Bernoulli trials process, proved the Bernoulli inequality, discovered the number *e*, and demonstrated the weak law of large numbers (Bernoulli’s theorem).

Bernoulli’s treatise *Ars Conjectandi* (i.e. *The Art of Conjecturing*) was posthumously published in 1713, eight years after his death, and was written in Latin, science’s *lingua franca* of the time. It is considered a seminal work of mathematical probability. Its importance is attested, in part, by its translation into French by G. Le Roy in 1801 and, more recently, into English by E. D. Sylla in 2005.

*The Art of Conjecturing* comprises four parts. The first part reproduces Christiaan Huygens’ *De Ratiociniis in Ludo Aleae* (*On Reasoning in Games of Chance*), with extensive commentary from Bernoulli and detailed solutions of Huygens’ five problems, posed at the end of Huygens’ work with answers, but without derivations. In the first part, Bernoulli also derives the probability that at least *m* successes will occur in *n* independent trials with success probability of *p*:
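In modern notation, this probability is

```latex
\Pr(\text{at least } m \text{ successes in } n \text{ trials})
  \;=\; \sum_{k=m}^{n} \binom{n}{k}\, p^{k} (1-p)^{\,n-k}
```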

The second part, “The Doctrine of Permutations and Combinations,” is devoted to combinatorics and to the study of figurate numbers, i.e. numbers that can be represented by a regular geometrical arrangement of equally spaced points:

It is here that Bernoulli introduces the Bernoulli numbers. He starts by noting an identity among binomial coefficients, namely that `Sum[Binomial[k, m], {k, 0, n - 1}] == Binomial[n, m + 1]`.

Bernoulli knew that for a fixed *m*, the binomial coefficient `Binomial[n, m]` is a polynomial of degree *m* in *n*, namely `n (n - 1) ... (n - m + 1)/m!`. This identity allows him to solve for the power sums `Sum[k^m, {k, 1, n}]` successively. He gives a table of results for 0 ≤ m ≤ 10.

To reproduce Bernoulli’s table, define a function to construct equations for the sum of powers:

Solving for the sum of powers:
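A minimal sketch of those two steps (the function name is mine):

```wolfram
(* Equate the held symbolic power sum with its closed-form polynomial in n *)
powerSumEquation[m_Integer] :=
  HoldForm[Sum[k^m, {k, 1, n}]] == Factor[Sum[k^m, {k, 1, n}]]

(* Reproduce Bernoulli's table for m = 0, ..., 10 *)
TableForm[powerSumEquation /@ Range[0, 10]]
```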

Bernoulli writes, “[W]*hoever has examined attentively the law of the progression herein can continue the Table further without these digressions for computations*” by making the following educated guess:
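In one modern convention (with B_1 = −1/2), that guess reads

```latex
\sum_{k=0}^{n-1} k^{m}
  \;=\; \frac{1}{m+1} \sum_{j=0}^{m} \binom{m+1}{j}\, B_j\, n^{\,m+1-j}
```

where the B_j are the Bernoulli numbers.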

He notes that the coefficients *B*_{r+1} do not depend on the power *m* being summed.

These coefficients are the celebrated Bernoulli numbers, which have found their way into many areas of mathematics [e.g. see mathoverflow.net].

In the second part of his book, Bernoulli counts the number of permutations, the number of permutations in sets with repetitions, the number of ways of choosing objects from a set, etc., which he later applies to compute probabilities as the ratio of the number of configurations of interest to the total number of configurations.

In part three, Bernoulli applies results from the first two chapters to solve 24 problems related to games of chance. A recurring theme in these problems is a sequence of independent 0 or 1 outcomes, which bears the name of Bernoulli trial, or Bernoulli process. I thought Jacob Bernoulli’s birthday anniversary to be an apt occasion to explore his problems with *Mathematica*.

For example, problem 9 asks us to find the expected payout in a three-player game. Players alternately draw cards without replacement from a pack of twenty cards, of which ten are face cards. When the cards are exhausted, the winnings are divided among all those who hold the highest number of face cards.

With c1, c2, and c3 denoting the number of face cards each player has, the payout of the first player is:

After the pack of twenty has been distributed, the first and the second players each receive seven cards, but the third one only receives six. The tally vector of face cards received by each player follows `MultivariateHypergeometricDistribution`:
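A sketch of that computation (the payout is normalized to a unit stake, and the names are mine; the original code is in the accompanying notebook):

```wolfram
(* Face-card counts (c1, c2, c3) for hands of 7, 7, and 6 cards drawn from
   20 cards containing 10 face cards: distribute the 10 face cards over the
   three hands *)
dist = MultivariateHypergeometricDistribution[10, {7, 7, 6}];

(* Player 1 shares a unit pot with everyone tied for the most face cards *)
payout[{c1_, c2_, c3_}] :=
  If[c1 == Max[c1, c2, c3], 1/Count[{c1, c2, c3}, c1], 0]

Expectation[payout[{c1, c2, c3}], {c1, c2, c3} \[Distributed] dist]
```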

This and other problems are stated and solved in the accompanying *Mathematica* notebook.

The concluding part four of *Ars Conjectandi* discusses uses of probability in civil, moral, and economic matters. Here Bernoulli argues that the probability reflects our incomplete knowledge of the state of the world, and unlike in a game of chance, where probabilities can be determined by finding the proportion that configurations of interest take in the whole set of possible configurations, the probabilities here cannot be a priori established. Bernoulli argues that these unknown probabilities can be inferred from past outcomes.

He proves the weak law of large numbers, asserting that the observed frequency of successes in *n* independent trials where the probability of success equals *p* will converge to *p* as the number of trials grows. Thus, you can estimate *p* arbitrarily accurately by running a sufficient number of trials. Specifically, for any *δ* and *ε*, there exists a large enough sample size *n* that:
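With S_n denoting the number of successes in n trials, the statement is

```latex
\Pr\!\left( \left| \frac{S_n}{n} - p \right| < \varepsilon \right) > 1 - \delta
```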

The demonstration “Simulated Coin Tossing Experiments and the Law of Large Numbers” by Ian McLeod, among others, explores this convergence.

Download this post as a Computable Document Format (CDF).

Download Bernoulli problems as a *Mathematica* notebook.

The liver is a vital organ, and currently there isn’t really a way to compensate for loss of liver function in the long term. The liver performs a wide range of functions, including detoxification, protein synthesis, and secretion of compounds necessary for digestion, just to mention a few. In the US and Europe, up to 15% of all acute liver failure cases are due to drug-induced liver injury, and the risk of injuring the liver is of major concern in testing new drug candidates. So in order to safely monitor the impact of a new drug candidate on the liver, researchers at the pharmaceutical company AstraZeneca have recently published a method for evaluating liver function that combines magnetic resonance imaging (MRI) and mathematical modeling—potentially allowing for early identification of any reduced liver function in humans.

Last year, Wolfram MathCore and AstraZeneca worked together on a project where we investigated some modifications of AstraZeneca’s modeling framework. We presented the promising results at the ISMRM-ESMRMB Joint Annual Meeting, which is the major international magnetic resonance conference. In this blog post, I’ll show how the Wolfram Language was used to calculate liver function and how more complex models of liver function can be implemented in Wolfram *SystemModeler*.

**A quick introduction to the method**

You might be wondering what actually happens within the liver during the examination, and how a mathematical model can describe it. It all starts after the injection of the MRI contrast agent into the blood, where it spreads and ultimately reaches the liver. Inside the liver (see the figure below), the blood vessel walls are highly permeable, like a coffee filter, allowing for a rapid diffusion of the agent into the extracellular space. The MRI contrast agent accumulates in the liver cells, and finally is excreted into the bile. The accumulation and efflux require that the cells are healthy, have enough energy, and are not overloaded with other work. If the cells are compromised, the transfer rates of the agent will be reduced. A reduced liver function can thus be observed via the calculated transfer rates in the model.

Okay, now that you have some background on the basics of how liver function can be estimated, let’s move on to the fun part of computation and modeling. I will start by showing examples of the types of data we use.

**Extracting data**

Data is extracted from the images in regions of interest (ROIs) within the liver as well as the spleen. The latter is used as a good and stable surrogate for measuring the amount of contrast agent within the blood directly, since the splenic cells do not accumulate any contrast agent; this means that our measurement in the spleen is only influenced by the contrast agent in the spleen’s blood vessels. The ROIs can be of any geometry. In *Mathematica*, I draw and modify ROIs with a custom interactive interface. Of course, you could also select the entire liver or other distinct parts in images using some of the automated algorithms implemented in *Mathematica*.

Below is an example of the types of images that we used. These two images were acquired about five minutes after the injection of the contrast agent, which is the reason that the liver is so bright (compared to, for instance, the muscles that can be seen on the sides of the images). The images are captured on a coronal imaging plane, which means that the images are what you see when the subject is lying down on its back and you are looking down on the subject from above. Images a) and b) are at different heights from the table, where b) is further from the table; there you can also see a portion of the spleen.

If you are familiar with human anatomy, especially in medical imaging, you might have noticed that the images don’t look like the inside of a human. Well, in that case you are right: the images show the inside of a rat; rats were the subjects used in the study performed at AstraZeneca.

**Data**

AstraZeneca has gathered quite a lot of high-quality data on its rats, using the approach mentioned above, and I will not show all of that here. Rather, I’ve been inspired by early TV chefs and prepared some of the data. I will now exemplify the method with three subjects: i) a rat with normal liver function, ii) a rat with slightly reduced liver function, and iii) a rat with severely reduced liver function. The data covers 60 minutes, where the first four minutes are baseline (prior to the injection of the contrast agent) and are used for the post-processing, so those values should by definition be equal to zero.

As I mentioned previously, data is extracted from the image series in two different regions. One of these two regions is the liver, and after some post-processing of this data, we get the mean contrast agent concentrations within the liver cells (I will name this data set `cHep` in the code from here on). You can see what these concentration time series look like for all three subjects in the figure below. I will use this data for model fitting.

The second region from which data is extracted is the spleen, and after the post-processing, we get the mean contrast agent concentration within the extracellular space (I will name this data set `cES`). This data tells us how much contrast agent is available for accumulation in the liver, and it will be used as input in the models. You can see what this data looks like for all three subjects in the figure below.

In order to use the measured extracellular concentrations (`cES`) in the model, the values need to be continuous. So let’s go ahead and generate an interpolating function (`intES`) for each subject:
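In code, this might look like the following, with `cES` taken to be a list of {time, concentration} pairs for one subject:

```wolfram
(* Piecewise-linear interpolation of the measured extracellular data *)
intES = Interpolation[cES, InterpolationOrder -> 1];
```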

And we can check the agreement with the data. Here I’ll just show the normal case, remembering that we just set the first four points to be identical to zero.

**Defining the model**

The model is defined using an ordinary differential equation, where we solve for the concentration of the contrast agent within the liver cells. The uptake comes from the extracellular space (step 3 in the figure), governed by the kinetic parameter *k*_{1}. The transfer of this agent into the bile is described using Michaelis–Menten kinetics (step 4 in the figure) using the kinetic parameters *V*_{max} and *K*_{m}.

With the initial condition for the concentration in liver cells being:

In the project, a simplification of the above model was investigated, specifically, the efflux into the bile was described with a linear rate equation. Since Michaelis–Menten kinetics are approximately linear in low substrate concentrations, this simplification can be valid if the concentrations of the contrast agent are low enough.

Now it’s time to solve the two models using `ParametricNDSolve`. Since we have the interpolating functions (`intES`), specific for each subject, inside the model, we need to compute a solution specifically for each subject:
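For the Michaelis–Menten variant, a sketch might look like this (symbol names follow the post; `intES` is the subject-specific interpolating function):

```wolfram
(* cHep: contrast agent concentration in liver cells; uptake k1 from the
   extracellular space, Michaelis-Menten efflux into the bile; the baseline
   concentration prior to injection is zero *)
sol = ParametricNDSolveValue[
   {cHep'[t] == k1 intES[t] - Vmax cHep[t]/(Km + cHep[t]),
    cHep[0] == 0},
   cHep, {t, 0, 60}, {k1, Vmax, Km}];
```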

**Fitting the model to data**

In order to fit the model to the data, we need a target or objective function to guide our optimization algorithm in the correct direction. In this case, I’ve used the Euclidean norm, as a measure of the goodness of model fit:
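A sketch of such a target function, assuming `sol` is the parametric model solution, `times` the measurement time points, and `cHepData` the measured liver concentrations:

```wolfram
(* Euclidean norm of the residuals between model prediction and data *)
objective[k1_?NumericQ, Vmax_?NumericQ, Km_?NumericQ] :=
  Norm[(sol[k1, Vmax, Km] /@ times) - cHepData]
```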

Whenever I use a global optimization algorithm for estimating the parameters, which takes a fair bit of time to complete, I like to see where the algorithm is moving in the parameter space. This way, I can see if it struggles, or maybe finds a local minimum it can’t get out of, or anything else that might be fun and educational to observe. For this purpose, the monitoring functionality is well suited:

In order to improve the optimization, we also include a list of reasonable parameter boundaries and starting guesses, which cover a wide range of scenarios.

Completing the code for the above `Block`, with the necessary inputs to `NMinimize`, we get the following compact piece of code that helps us with the answer to: How good is the liver function?
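A simplified variant of that code, where the bounds are illustrative placeholders, `objective` is the Euclidean-norm target function, and the `Block` wrapper is omitted for brevity:

```wolfram
steps = {};  (* trace of the algorithm's path through parameter space *)
fit = NMinimize[
   {objective[k1, Vmax, Km], 0 < k1 < 10, 0 < Vmax < 10, 0 < Km < 10},
   {k1, Vmax, Km},
   Method -> "DifferentialEvolution",
   StepMonitor :> AppendTo[steps, {k1, Vmax, Km}]]
```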

If you’re interested in the different global optimization schemes available in *Mathematica*, there is a great tutorial available here.

Once again the TV-chef magic kicks in, and we have a bunch of optimal parameter values already prepared for the three subjects (for both models):

And we combine the parameter values with the parametric solutions of the models to calculate the model predictions for both models and all three cases:

**Results**

Below you can see the predictions made by the model with the fitted parameter values compared to the data on our three subjects, ranging from normal to severely reduced liver function.

As you can see in the figures, there is a clearly reduced concentration of contrast agent in the last case. This reduction can be appreciated quantitatively in the table underneath the figures, where the uptake rate is almost a factor of 20 lower in the last case. It’s noteworthy that the uptake rate is for all practical purposes identical for both model variants in the three cases, indicating that the use of a linear description of the efflux of contrast into the bile instead of Michaelis–Menten kinetics might be valid. Also, the models are able to predict the data very well; of course, you wouldn’t be reading this unless that was the case (data from humans is much noisier, for various reasons).

In the animation below, I’ve correlated the model predictions for the rat with normal liver function with the acquired images, so that you might better appreciate how the numbers relate to the images. As you might remember from the beginning of this post, the liver is the large organ at the top of the images.

**On the horizon**

In the original paper by Jose Ulloa et al., where the first model and the data come from, the model parameters were able to distinguish between the different groups with strong significance. In this project we found that the uptake rate was in practice identical for both model variants, and that the simplified model was also good at distinguishing between the different groups.

These methods that AstraZeneca has developed were evaluated on rats, and the work continues at AstraZeneca, and in other pharmaceutical companies, on refining and ultimately utilizing these methods for investigating liver function in pre-clinical and clinical trials, as well as in the clinic. We are all very excited about these results, and as you read this, both AstraZeneca and Wolfram MathCore are involved in new projects dedicated to evaluating these methodologies further, even applying them to patients suffering from liver disease.

**Modeling liver function in Wolfram SystemModeler**

In the above calculations, I used *Mathematica* exclusively; however, these models can just as easily be implemented in *SystemModeler* by using the BioChem library, as shown by the figure below. In this particular case, the model contains so few states that the model implementation is just as fast programmatically in *Mathematica*, but if this were a larger or hierarchical model, *SystemModeler* would be my first choice. It’s also worth noting that if I had implemented the model using *SystemModeler*, the code for fitting the parameters would have been basically the same. In principle, I would only need to modify the target function.

Wolfram MathCore collaborates with researchers at Linköping University’s Center for Medical Image Science and Visualization (CMIV) on research aimed toward a comprehensive non-invasive diagnostic MRI-based toolset for patients suffering from liver disease. The collaboration has for example led to the development of a mathematical model for estimating liver function in humans based on, in principle, the same kind of MRI data we have shown in this post. The underlying assumptions for this model and the one used above are very similar. The figure below shows this model implemented in *SystemModeler* using the BioChem library, and more details on this model can be found on our example pages.

If you want to try the tools I’ve used for yourself, you can get a trial of both *Mathematica* and *SystemModeler* and get cracking.

At Wolfram MathCore, we have done numerous consultancy projects for a wide range of customers, from machine dynamics and 3D mechanical systems to thermodynamics and, of course, life science. The results from another life science project we worked on together with MedImmune (a subsidiary of AstraZeneca) were recently published in a daughter journal of *Nature*. So, if you need to solve tricky problems or want to get your modeling and simulation project up and running quickly with our tools, don’t hesitate to contact us at Wolfram MathCore!

In particular, I’ve been curious about using the Wolfram Language as a way to drive my telescope mount, for the purpose of automating an observing session. There is precedent for this because some amateurs use their computerized telescopes to hunt down transient phenomena like supernovas. Software already exists for performing many of the tasks that astronomers engage in—locating objects, managing data, and performing image processing. However, it would be quite cool to automate all the different tasks associated with an observing session from one notebook.

*Mathematica* is highly useful because it can perform many of these operations in a unified manner. For example, *Mathematica* incorporates a vast amount of useful astronomical data, including the celestial coordinates of hundreds of thousands of stars, nebulae, galaxies, asteroids, and planets. In addition to this, *Mathematica*’s image processing and data handling functionality are extremely useful when processing astronomical data.

Previously I’ve done some work interfacing with telescope mounts using an existing library of functions called ASCOM. Although ASCOM is powerful and can drive many devices associated with astronomy, like domes and filter wheels, it is limited because it only works on PCs and needs to be pre-installed on your computer. I wanted to be able to drive my telescope directly from *Mathematica* running on any platform, and without any special setup.

**Telescope Serial Communication Protocols**

I did some research and determined that many telescope mounts obey one of two serial protocols for their control: the Meade LX200 protocol and the Celestron NexStar protocol.

The LX200 protocol is used by Meade telescopes like the LX200 series as well as the ETX series. The LX200 protocol is also used by many non-Meade telescope mounts, like those produced by Losmandy and Astro-Physics.

The NexStar protocol is used by Celestron telescopes and mounts as well as those manufactured by its parent company, Synta, including the Orion Atlas/Sirius family of computerized mounts.

The full details of these protocols can be found in the Meade Telescope Serial Command Protocol PDF and the NexStar Communication Protocol PDF.

A notable exception is the Paramount series of telescope mounts from Software Bisque, which use the RTS2 (Remote Telescope System) protocol for remote control of robotic observatories. The RTS2 standard describes communication across a TCP/IP link and isn’t serial-port based. Support for RTS2 will have to be a future project.

Since *Mathematica* 10 added direct serial-port support, it’s possible to implement these protocols directly in top-level Wolfram Language code and have the same code drive different mounts from *Mathematica* running on different platforms, including Linux, Mac, Windows, and Raspberry Pi.

**Example: Slewing the Scope**

Here’s an example of opening a connection to a telescope mount obeying the LX200 protocol, setting the target and then slewing to that target.

Open the serial port (“/dev/ttyUSB0”) connected to the telescope:

First we need a simple utility for issuing a command, waiting for a given amount of time (usually a few seconds), and then reading off the single-character response.
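A minimal version of such a utility (assuming `port` is the DeviceObject returned by `DeviceOpen`):

```wolfram
(* Write a command string, wait, then read back the single-byte reply *)
scopeCommand[port_, cmd_String, wait_: 2] :=
 (DeviceWrite[port, cmd];
  Pause[wait];
  FromCharacterCode[DeviceRead[port]])
```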

These are functions for setting the target right ascension and declination in the LX200 protocol. Here, the right ascension (RA) is specified by a string in the form of HH:MM:SS, and the declination (Dec) by a string in the form of DD:MM:SS.

Now that we have the basics out of the way, in order to slew to a target at coordinates specified by RA and Dec strings, setting the target and then issuing the slew command are combined.
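A sketch of that combined function; the command strings (`:Sr`, `:Sd`, `:MS`) follow the LX200 protocol document, and `scopeCommand` is the write-then-read utility described earlier:

```wolfram
(* Set the target RA ("HH:MM:SS") and Dec ("DD:MM:SS"), then slew to it *)
slewToTarget[port_, ra_String, dec_String] :=
 (scopeCommand[port, ":Sr " <> ra <> "#"];
  scopeCommand[port, ":Sd " <> dec <> "#"];
  scopeCommand[port, ":MS#"])
```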

We can also pass in real values as the coordinates, and then convert them to correctly formatted strings for the above function.
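One way to do the conversion (the helper name and rounding behavior are my own choices):

```wolfram
(* Decimal hours or degrees to a sexagesimal string, e.g. 16.695 -> "16:41:42" *)
toSexagesimal[x_] :=
 With[{s = Round[Abs[x] 3600]},
  StringJoin[If[x < 0, "-", ""],
   IntegerString[Quotient[s, 3600], 10, 2], ":",
   IntegerString[Mod[Quotient[s, 60], 60], 10, 2], ":",
   IntegerString[Mod[s, 60], 10, 2]]]
```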

Now we can point the scope to the great globular cluster in Hercules (Messier 13). Here is an image:

Slew the scope to the Ring Nebula (Messier 57):

And slew the scope to Saturn:

When the observing session is complete, we can close down the serial connection to the scope.

Please be aware that before trying this on your own scope, you should have limits set up with the mount so that the scope doesn’t accidentally crash into things when slewing around. And of course, no astronomical telescope should be operated during the daytime without a proper solar filter in place.

The previous example works with *Mathematica* 10 on all supported platforms. The only thing that needs to change is the name of the serial port. For example, on a Windows machine, the port may be called “COM8” or such.

**Telescope Control with Raspberry Pi**

One interesting platform for telescope control is the Raspberry Pi. This is an inexpensive ($25–$35), low-power-consumption, credit-card-sized computer that runs Linux and is tailor-made for all manner of hackery. Best of all, it comes with a free copy of *Mathematica* included with the operating system.

Since the Pi is just a Linux box, the Wolfram Language code for serial-port telescope control works on that too. In fact, since the Pi can easily be wirelessly networked, it is possible to connect to it from inside my house, thus solving the number one problem faced by amateur astronomers, namely, how to keep warm when it’s cold outside.

The Pi doesn’t have any direct RS-232 ports in hardware, but an inexpensive USB-to-serial adapter provides a plug-n-play port at /dev/ttyUSB0. In this picture, you can see the small wireless network adapter in the USB socket next to the much larger, blue USB-to-serial adapter.

The serial cable connects to the serial port on my computerized equatorial mount. The mount I used was a Losmandy G11; however, any other mount could be used as long as it has a serial input port and obeys the LX200 protocol.

**Astrophotography with the Pi**

Once I had the Pi controlling the telescope, I wondered if I could use it to take pictures through the scope as well. The Raspberry Pi has an inexpensive camera available for $25, which can take reasonably high-resolution images with a wide variety of exposures.

This isn’t as good as a dedicated astronomical camera, because it lacks the active cooling needed to take low-noise images of deep sky objects, but it would be appropriate for capturing images of bright objects like planets, the Moon, or (with proper filtering) the Sun.

It was fairly easy to find the mechanical dimensions of the camera board on the internet, design a telescope adapter…

…and then build the adapter using my lathe and a few pennies worth of acetal resin (DuPont Delrin®) I had in my scrap box. The normal lens on the Pi camera was unscrewed and removed to expose the CCD chip directly because the telescope itself forms the image.

Note that this is a pretty fancy adapter, and one nearly as good could have been made out of 1 1/4-inch plumbing parts or an old film canister; this is a place where many people have exercised considerable ingenuity. I bolted the adapter to the side of the Pi case using some 2-56 screws and insulating stand-offs cut from old spray-bottle tubing.

This is how the PiCam looks plugged into the eyepiece port on the back of my telescope, and also plugged into the serial port of my telescope’s mount. In this picture, the PiCam is the transparent plastic box at the center. The other camera with the gray cable at the top is the guiding camera I use when taking long-exposure astrophotographs.

**Remotely Connecting to the PiCam**

The Pi is a Linux box, and it can run vncserver to export its desktop. You can then run a VNC client package, like the free TightVNC, on any other computer that is networked to the Pi. This is a screen shot taken from my Windows PC of the TightVNC application displaying the PiCam’s desktop. Here, the PiCam is running *Mathematica* and has imported a shot of the Moon’s limb from the camera module attached to the telescope via the adapter described above.

It’s hard to read in the above screen shot, but here is the line I used to import the image from the Pi’s camera module directly into *Mathematica*:
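Reconstructed from the description below, it was along these lines (the exact flag values are my assumptions):

```wolfram
(* raspistill: 1024x1024 frame, 1000 microsecond shutter, 10 s delay,
   JPEG written to stdout and read by Import via "!" *)
img = Import["!raspistill -w 1024 -h 1024 -ss 1000 -t 10000 -o -", "JPG"]
```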

This command invokes the Pi’s raspistill camera utility and captures a 1024 x 1024 image exposed at 1,000 microseconds after a 10-second delay, and then brings the resulting JPEG file into *Mathematica*.

One problem that I haven’t solved is how to easily focus the telescope remotely, because the PiCam’s preview image doesn’t work over the vnc connection. One interesting possibility would be to have *Mathematica* take a series of exposures while changing the focus via a servo attached to the focus knob of the telescope. An even better solution would be to have *Mathematica* use image processing functions to determine when the best focus has been achieved.

Pharmacodynamics and Systems Pharmacology in the Netherlands. The conference focuses on the use of mathematical modeling in pharmacology and pharmaceutical R&D, and this year, the main topic was the emerging concept of systems pharmacology.

In general terms, systems pharmacology can be seen as the combination of pharmacometrics and systems biology, with one of its key principles being the integration of biological data and mathematical models describing several different levels of biological complexity—spanning from the molecular or cellular level to that of a whole organism or population. Usually, such integration of data and models is referred to as multilevel, or multiscale, modeling, and has the important benefit of allowing us to translate information on disease and drug effects from the biochemical level—where the effects originate—to changes on the whole body or population level, which are more important from a clinical and pharmacological point of view.

In this blog post, I thought we would take a closer look at what a systems pharmacology approach might look like. Specifically, I’ll focus on some of the practical aspects of building complex, multilevel biological models, and how these can be dealt with using Wolfram *SystemModeler*.

**Systems pharmacology and type 2 diabetes**

To provide you with a relevant example, I’ll base this post on a research project—led by my former mentor Dr. Gunnar Cedersund at Linköping University, Sweden—that I myself have been much involved in over the last couple of years. In this project, we’re seeking to develop a multilevel modeling framework for the integration of knowledge, data, and models related to type 2 diabetes mellitus (T2DM), a disease that currently affects over a quarter of a billion people worldwide, whose primary cause(s) remain unknown, and for which there is no cure.

To get a clearer picture of what the project is all about, we’ll start by looking at one of the more important mathematical models on T2DM published during the last decade (REF1). This model, developed by Professor Claudio Cobelli and his team at the University of Padova, Italy, describes the production, distribution, and removal of glucose and insulin—two of the most important biological substances related to T2DM—in the human body. The diagram view of the model, as implemented in *SystemModeler*, is shown below:

The model includes a number of important system characteristics, such as gastrointestinal glucose absorption as a result of food intake, glucose-stimulated insulin secretion by the pancreatic *β*-cells, and insulin-controlled glucose production and uptake by the liver and peripheral tissues. All in all, these different processes make up an intricate feedback system, which, if everything is functioning optimally, keeps plasma glucose (blood sugar) levels within safe limits.

To get to know this system a little bit better, we’ll use *Mathematica* and its link to *SystemModeler* to run a few simulations of the model and study what happens to glucose and insulin if you were to eat a meal containing glucose. Specifically, we’ll look at the difference between a well-functioning system and the case of T2DM, a difference that can be simulated by changing some of the parameters in the model.

The normally functioning system is shown with solid lines and the T2DM results with dashed lines. In both scenarios, the concentration of glucose in plasma starts increasing immediately after glucose ingestion. In the normal case, however, this increase quickly triggers an increased release of insulin, leading to decreased glucose release by the liver and increased glucose uptake in peripheral tissue. Glucose levels therefore peak at about an hour after eating and then return to normal as the body utilizes and stores the glucose for future use.

In the T2DM case, the results are evidently quite different. First of all, glucose levels are higher already in the fasting state before eating. Furthermore, the absolute increase in glucose is more than twice as large as in the non-diabetic case, causing glucose to peak at very high concentrations. By studying the insulin, glucose production, and glucose uptake figures, it is possible to conclude that the cause of this behavior is twofold: First, the pancreatic insulin response to increasing glucose concentrations is impaired and much slower in the diabetic case. Second, even though insulin peaks at a higher concentration, glucose production remains higher, and glucose uptake lower, than in the normal case—a phenomenon known as insulin resistance.

**Understanding insulin resistance requires an intracellular perspective**

There is no doubt that the model I’ve used so far is a great example of how mathematical modeling can be used to investigate and increase our understanding of biological systems and diseases such as diabetes. In fact, a type 1 diabetes version of this model has actually been accepted by the Food and Drug Administration as a substitute for animal trials in certain insulin studies.

As with all models, however, this one also has its limitations.

In this particular case, one important limitation is that the model lacks details of lower-level systems. Specifically, if you consider the model from a hierarchical point of view, you can see that it focuses only on the top levels of the physiological hierarchy. Therefore, the model can’t be used to investigate questions such as how and why malfunctions like insulin resistance develop in the first place, or whether there are drug targets that could potentially restore system behavior. To answer such questions, we need to extend the model with more details of the cellular and intracellular levels, from which the malfunctions actually originate. Remember, this type of multilevel modeling is one of the key concepts of systems pharmacology. It is also what the project I’m describing is all about.

Fortunately for us (and for this story), an increasing number of intracellular models related to glucose and/or insulin are being developed, and below you can see the *SystemModeler* diagram of one of these models (REF2, REF3). The model focuses on the intracellular behavior of fat cells when stimulated by insulin, and how such stimulation affects glucose uptake.

The green circles in this diagram represent key proteins in what is known as the insulin signaling pathway, a complex biological network responsible for transducing insulin signals from outside the cell to an appropriate intracellular response. In this case, the model describes how insulin activates the different proteins, finally causing an increased glucose uptake into the cell.

So, how can we build a multilevel model that links these intracellular details to what we previously saw happening at the whole-body level?

First of all, such linking has to involve making sure that the intracellular submodel is compatible with the behavior of the overall whole-body system. Specifically, the intracellular model needs to be able to describe both the behavior of its own constituents and the constraints posed by higher-level systems. In biology, this can sometimes be a difficult task, since intracellular systems are often studied outside of their natural environment, where their behavior might differ significantly from that in the human body.

Assuming, however, that the behavior of the detailed submodel has already been validated in the context of the overall system, multilevel modeling still comes with a number of challenges. Of these, an essential one is the practical issue of building your models in a way that makes them easy to communicate, maintain, and extend—an aspect where having the appropriate software tools becomes very important.

**Multilevel modeling—Readability, maintainability, and reusability**

In most cases, building small models of just a few variables is quite easily achieved by just writing down the equations as lines of text. When building larger models, however—potentially including hundreds of equations at different levels of complexity—things get a bit more complicated. Just keeping track of the equations, finding errors, and making updates becomes a rather tedious task. Furthermore, when you need to communicate models to others, the challenge becomes even greater. This is particularly true in the fields of biology and pharmacology, since some (read: many) biologists, doctors, and so on, don’t like mathematics.

To deal with these aspects of working with and communicating more complex models, tools that increase model readability—for instance, by providing a graphical layer to the model—can be of great assistance. If the tool also allows for a multilevel, hierarchical representation of the model, and lets you design reusable and individually testable components and submodels, then things become even more convenient.

Looking at our previous two glucose/insulin models as implemented in *SystemModeler*, we note that they are already given in a graphical form. But how can you use *SystemModeler* to connect these models together in a multilevel fashion? And how do you make the linking of the models as intuitive as possible for users of the models?

If we start by looking at the whole-body model, the part describing glucose uptake and utilization can be found in the peripheral tissue submodel:

Let’s take a closer look at this submodel:

What you see is the diagram of the submodel, showing the glucose uptake and utilization reaction and how this reaction is modified by plasma insulin.

Even though *SystemModeler* is designed to let you work with graphical representations of your models, you can easily access (and modify) the underlying mathematics. For instance, here is the mathematical (Modelica) code describing the dynamics of the reaction above:

Note that this is an empirical model of glucose uptake, lacking details of what is actually happening at lower physiological levels.

So, how can you conveniently replace this empirical model with the more detailed model of insulin signaling?

Since *SystemModeler* is a component-based modeling environment, this is pretty easy. As long as the new model has the same external interface as the one you want to replace, you can simply swap the old reaction model for the new one by using, for instance, drag and drop. However, it is also possible to make the process of replacing submodels or components a bit more user-friendly, especially for someone completely new to the model.

When developing models, *SystemModeler* allows you to specify predefined choices for any of the components used in a model. That is, if you have several descriptions of the same process—for instance, models valid under different conditions, using different parameter values, or including more or fewer details—you can include this information directly in your model. In this way, new users can easily see which models are available for a certain reaction or process.

Here’s an example of what such a choice might look like for the two different models of glucose uptake:

Right-clicking the component and selecting the “Redeclare” menu gives you a list of the two pre-defined choices, and you can easily switch between the two. In this case, selecting the insulin signaling model will update the reaction component and add the more detailed glucose uptake model to the whole-body model:

And by running a new simulation of the model, you can now study the behavior of the two original models together, and investigate what effect changes on the intracellular level would have on the overall system.

The insulin signaling curves below show the dynamics of the proteins corresponding to the green circles in the detailed insulin signaling model.

For me, features like this are key to an efficient modeling workflow. Here, we’ve only looked at two different models. However, new, detailed models of other subsystems and organs, such as the liver, *β*-cells, muscles, and brain, are under development, further increasing the need for a well-designed hierarchical modeling environment.

Interestingly, the incorporation of new submodels also highlights some other benefits of *SystemModeler* when it comes to multilevel modeling in biology and pharmacology. One of these is the multidomain aspect of the *SystemModeler* environment. Here, we’ve focused on biochemical models, but if you look at physiological systems from a wider perspective, these systems are by nature multidomain systems, including, for example, bioelectrical, biomechanical, and biochemical subsystems. Therefore, as physiological models become more and more complex, tools supporting only one of these domains may become less useful.

Furthermore, connecting many models—usually developed by equally many research groups and scientists—increases the need for good model documentation. In *SystemModeler*, you can add documentation directly linked to your different models and components, eliminating the need for separate model documentation and making documentation instantly accessible.

Systems pharmacology is emerging as an important new field of research to increase our understanding of biological systems, diseases, and treatments. Thanks to advancements in both experimental and theoretical methods, the development of new, increasingly complex mathematical models is accelerating. This is true not only for T2DM, but for many other biological systems and diseases as well.

Admittedly, developing models of biological systems is not an easy task, and requires many iterations between experiments, model design, and analysis. However, by using sophisticated modeling tools, the pace at which these models can be developed, communicated, and used could be further increased.

I’ve highlighted some of the features *SystemModeler* can offer in this space. However, we should also not forget the integration between *SystemModeler*, *Mathematica*, and the Wolfram Language, allowing for endless kinds of model analyses, programmatic control of simulations, and so on.

If you want more information on how *SystemModeler* and *Mathematica* can be used to model biological and pharmacological systems, feel free to download a trial (*SystemModeler*, *Mathematica*), check out our *SystemModeler* features web page, or contact us for more examples.

Download a brief MP4 demonstration of the Glucose-Insulin System.

REF1: C. Dalla Man, R. A. Rizza, and C. Cobelli, “Meal Simulation Model of the Glucose-Insulin System,” *IEEE Transactions on Biomedical Engineering*, 54(10), 2007, pp. 1740–1749.

REF2: E. Nyman, C. Brännmark, R. Palmér, et al., “A Hierarchical Whole-Body Modeling Approach Elucidates the Link between in Vitro Insulin Signaling and in Vivo Glucose Homeostasis,” *The Journal of Biological Chemistry*, 286(29), 2011, pp. 26028–26041.

REF3: C. Brännmark, E. Nyman, S. Fagerholm, et al., “Insulin Signaling in Type 2 Diabetes: Experimental and Modeling Analyses Reveal Mechanisms of Insulin Resistance in Human Adipocytes,” *The Journal of Biological Chemistry*, 288(14), 2013, pp. 9867–9880.

The Wolfram|Alpha applications are universal apps, and utilize Windows’ distinct style while bringing to Windows users some of the features people have come to expect from Wolfram|Alpha: a custom keyboard for easily entering queries, a large selection of examples to explore Wolfram|Alpha’s vast knowledgebase, history to view your recent queries, favorites so you can easily answer your favorite questions, the ability to pin specific queries to the start menu, and more.

We’re also happy to announce the release of several of our Course Assistant Apps on Windows 8.1 devices:

- Algebra: Windows Phone Store or Windows Store
- Calculus: Windows Phone Store or Windows Store
- Multivariable Calculus: Windows Phone Store or Windows Store
- Linear Algebra: Windows Phone Store or Windows Store
- Pre-Algebra: Windows Phone Store or Windows Store
- Precalculus: Windows Phone Store or Windows Store
- Statistics: Windows Phone Store or Windows Store

These apps also feature our custom keyboards for the quick entry of your homework problems. View step-by-step solutions to learn how to solve complex math queries, plot 2D or 3D functions, explore topics applicable to your high school and college math courses, and much more.

If you need some help getting into the holiday spirit, check out these examples:

To win, tweet your submissions to @WolframTaP by 11:59pm PDT on Thursday, January 1. So that you don’t waste valuable code space, we don’t require a hashtag with your submissions. However, we do encourage you to share your code with your friends by retweeting your results with hashtag #HolidayWL.


*Machine gun with a squirrel on top*

Let’s start smaller than a human, with a gray squirrel from the original story. Put this squirrel on a machine gun, fire it downward at the full automatic setting, and see what happens. I’ll be using Wolfram *SystemModeler* to model the dynamics of this system.

*Model of a machine gun*

The image above shows the model of a machine gun. It contains bullet and gun components, each modeled as a mass influenced by gravity. They are easily constructed by combining built-in mechanical components:

*Mass influenced by the Earth’s gravitational force*

The magazine component is a little more advanced because it ejects the mass of the bullet and the bullet casing as each shot is fired. It does this by taking the initial mass of the full magazine and subtracting the mass of a cartridge multiplied by the number of shots fired, which is given by the shot counter component.

Combining these with a simple model of a squirrel, a sensor for the position above ground, and a crash detector that stops the simulation when the craft hits the ground, I now have a complete model.

To get a good simulation, I need to populate the model with parameters for the different components. I will use a gray squirrel, which typically weighs around 0.5 kg (around 1.1 pounds).

Then I need some data for our machine gun. I’ll use the ubiquitous AK-47 assault rifle. Here is some basic data about this rifle:

The thrust generated by the gun can be calculated from the mass of the bullet, the velocity of the bullet when leaving the muzzle, and how often the gun is fired:

I can then estimate the percentage of each firing interval that is actually used to propel the bullet through the barrel. I’ll assume that the average speed in the barrel is half the final speed:

The force during this short time can then be calculated using the thrust:

Now I have all the parameters I need to make our squirrel fly on a machine gun:

Now we simulate the squirrel on the machine gun with a single bullet in the gun:

Looking at the height over time, I conclude that the squirrel reached a height of about 9 centimeters (3.5 inches) and had a flight time of only 0.27 seconds.

To put it another way:

That didn’t get the squirrel very far above the ground. The obvious solution to this? Fire more bullets from the gun. A standard magazine has 30 rounds:

This gives a flight time of almost 5.8 seconds, and the squirrel reached the dizzying height of 17.6 meters (58 feet). Well, it would be dizzying for humans; for squirrels, it’s probably not so scary.

Now we’re getting somewhere:

I have shown that a squirrel can fly on a machine gun. Let’s move on to a human, going directly for the standard magazine size with 30 bullets:

One gun is not enough to lift a human very far. I need more guns. Let’s do a parameter sweep with the number of guns varying from 1 to 80:

This shows some interesting patterns. The effect from 50 guns and above can be easily explained. More guns means more power, which means higher flight. The simulations with 15 and 32 guns are a little more interesting, though. Let’s look a little closer at the 15 guns scenario. The red dots show the firing interval, meaning the guns shoot one bullet each every 0.1 seconds:

You can see that the craft manages to take off slightly, starts to fall down again, gets off another shot, but then falls farther than the height it had gained. You can also look at the velocity over time:

For the first shot, the craft starts at zero velocity, standing still on the ground. It gains velocity sharply, but before the next shot is fired, the velocity falls below zero. This means that during one firing cycle, there is a net loss in velocity, so the craft eventually falls back down even though there are bullets left in the gun. It could then start over from a standstill on the ground, doing tiny jumps up and down.

The scenario with 32 guns exhibits yet another behavior. The start looks similar to the behavior with 15 guns, where it gains some altitude, but then falls back down because it loses net velocity during each firing cycle. But then at around 2.5 seconds it starts to gain altitude, until all the ammunition is spent at 3 seconds.

This can be explained if you look at the mass of the magazine over time:

You can see that at each shot, the magazine loses weight because it ejects a bullet and a bullet casing. After a while, this makes the whole craft light enough to gain altitude. This indicates there is some limit to how many bullets you can carry for each machine gun and still be able to fly, which is another interesting parameter you can vary. Let’s try to fly with the following magazine sizes for an AK-47, assuming I create my own custom magazines:

Because more guns means more power, I will use a large number of guns, 1,000:

When using 1,000 guns, it turns out it is not a good idea to bring 165 bullets for each gun:

This is because if you bring too many bullets, the craft becomes too heavy to gain any altitude. Now that I have found a reasonable (if there can be anything reasonable about trying to fly with machine guns) number of bullets to bring along, let’s see the achieved heights when varying the number of guns. I would expect that with more guns, we will gain more height and flight time.

Here is the maximum height achieved with the different number of guns:

It turns out that increasing the number of guns drastically (from 1,500 to 50 million) only gives a marginal increase in the top height achieved. This is because as the number of guns increases, the part of the human carried by each gun decreases, until each gun only carries its own weight plus very little additional mass. This makes the total craft approach the same maximum height as a single gun without any extra weight, and adding more guns will give no more advantage.

In closing, the best machine gun jetpack you can build with AK-47s consists of at least about 5,000 machine guns loaded with 145 bullets each.

*How high you can fly using machine guns*

Download this post as a Computable Document Format (CDF) file, along with its accompanying models.

For many years, Wolfram Research has promoted and supported initiatives that encourage computation, programming, and STEM education, and we are always thrilled when efforts are taken by others to do the same. Code.org, in conjunction with Computer Science Education Week, is sponsoring an event to encourage educators and organizations across the country to dedicate a single hour to coding. This hour gives kids (and adults, too!) a taste of what it means to study computer science—and how it can actually be a creative, fun, and fulfilling process. Millions of students participated in the Hour of Code in past years, and instructors are looking for more engaging activities for their students to try. Enter the Wolfram Language.

Built into the Wolfram Language is the technology from Wolfram|Alpha that enables natural language input—and lets students create code just by writing English.

In addition to natural language understanding, the Wolfram Language also has lots of built-in functions that let you show core computation concepts with tiny amounts of code (often just one or two functions!).

With our newly released cloud products, you can get started for free!

To support the Hour of Code, Wolfram is putting together a workshop for instructors and parents to learn more about programming activities in the Wolfram Language. The workshop takes place December 4, 4–5pm EST. Register for the free event here. During the workshop, we will introduce the basics of the Wolfram Language and walk through several resources to get students coding, as well as demo our upcoming Wolfram Programming Lab offering.

Learning and experimenting with programming in the Wolfram Language doesn’t have to stop with the Hour of Code. Have students create a tweet-length program with Wolfram’s Tweet-a-Program. Compose a tweet-length Wolfram Language program and tweet it to @WolframTaP. Our Twitter bot will run your program in the Wolfram Cloud and tweet back the result.

Learn more about the Wolfram Language with the Wolfram Language Code Gallery. Covering a variety of fields, programming styles, and project sizes, the Wolfram Language Code Gallery shows examples of what can be done with the knowledge-based Wolfram Language—including deployment on the web or elsewhere.

There are other training materials and resources for learning the Wolfram Language. Find numerous free and on-demand courses available on our training site. The Wolfram Demonstrations Project is an open-source collection of close to 10,000 interactive apps that can be used as learning examples.

As sponsors of organizations like Computer-Based Math™, which is working toward building a completely new math curriculum with computer-based computation at its heart, and the *Mathematica* Summer Camp, where high school students with limited programming experience learn to code using *Mathematica*, we are acutely aware of how important programming is in schools today.

Congrats on getting your student or child involved with the Hour of Code, and we look forward to seeing what they create!
