As more technology is folded into medical environments all over the world, Wolfram’s European branch has taken on work with the United Kingdom’s National Health Service (NHS) in an effort to partially automate the process of cancer diagnosis. The task is to use machine learning to avoid checking thousands of similar-looking images of people’s insides by hand for signs of cancer.

In the modern age, we have computers to take a lot of intellectual drudgery off our hands, but not all of it. Everyone knows what it’s like to have to do something that’s really important to get right, but also really time-consuming. Sometimes the work can be split up among many people, but often it has to be consistently and thoroughly accomplished by one expert in particular. With image analysis for signs of cancer, if you also happen to be the most qualified person to do the job, you cannot ask someone else to take over—nor can you go on autopilot. Even if you’re very motivated by the importance of your task, the boredom will get you eventually, making it more and more difficult to maintain the level of quality that the job warrants. It’s just human nature.

Famously, by 1873 the amateur mathematician William Shanks (1812–1882) had calculated *π* to an unprecedented 707 decimal places (calculating mathematical constants was a hobby of his). Unfortunately, when a mechanical calculator was used to check his results 71 years later, it turned out that only the first 527 were correct. If even someone as highly motivated as Shanks can make mistakes in repetitive tasks, anyone can.

Computing *π* is something we can safely let a computer handle because it will always outperform a human. However, some jobs can only be automated with machine learning algorithms, which cannot guarantee correct results. So we’re back to the dilemma we started with: what do we do with tasks that are both very important and very tedious?
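Indeed, a dozen lines of exact integer arithmetic reproduce, and far exceed, Shanks's 527 correct digits in well under a second. Here is a sketch in Python based on Machin's formula, the same arctangent identity Shanks computed with by hand (the function names and the number of guard digits are my own choices):

```python
def arctan_inv(x, digits):
    """arctan(1/x) scaled by 10**(digits + 10), via the Gregory series."""
    scale = 10 ** (digits + 10)      # 10 guard digits absorb truncation error
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def pi_digits(digits):
    """First `digits` decimal digits of pi (decimal point omitted), by Machin's formula."""
    scaled_pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return str(scaled_pi)[:digits]

print(pi_digits(15))   # → 314159265358979
```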

In a Wolfram Technical Services project with the NHS and their service provider CorporateHealth International (funded by Innovate UK), we are exploring a way to review videos of the inside of the body to check for signs of bowel cancer. These videos are made by a pill camera that travels through your digestive system and continuously sends images to a recorder you wear on your body. The procedure, as explained in this video about the HI-CAP Project and another video about data science in endoscopy, is significantly easier, cheaper and more comfortable than going to the hospital to have a surgeon poke around your insides with an endoscope. For this reason and others, it has the potential to save many lives by detecting tumors early, when they can still be treated easily.

The ease of gathering the data does not directly translate to ease of analysis. Each video consists of thousands of frames, and some polyps or tumors will only appear on a single frame and may not even stand out from the background all that much. This means that a small army of nurses—employed by CorporateHealth International—is currently needed to analyze every single frame of each video, which is a laborious process, as you can imagine.

To alleviate this workload, we are working with the Computer Vision group at the University of Barcelona, where a neural network is being developed for exactly this task of polyp identification. The network is currently implemented in TensorFlow, but we plan to port it to the Wolfram neural network framework (via an intermediate format such as ONNX) to make it part of a larger data-processing pipeline for pill camera videos.

It is not enough to simply train a network and test it on a validation set before it can be put into practice. If the people who actually have to review the videos (and therefore bear responsibility for that analysis) are not convinced of the quality of the computer’s results, they will double-check everything by hand regardless, or even just return to the tools they are currently using. You can’t blame them for wanting to be thorough.

For this reason, we are experimenting with different ways to present computer results to nurses, allowing corrections where necessary. This means playing around with the order in which the frames are presented (e.g., chronological vs. ordering by classification); how the computer classification is presented (a number, a class, a heat map on the image, etc.); and what kind of actions the nurse can take to correct the result so it can then be fixed in the next training round of the AI.

The goal is to use the Wolfram dynamic interactivity language to build a tool that allows users to slowly build experience in such a way that they start trusting AI results more and more—in particular, the parts of the video where a computer indicates no risk factors. If a few frames are unjustly highlighted as polyps because it’s a little overcautious, it’s not much work to correct the result manually. On the other hand, if the AI tells the user that 99% of the video is free of polyps and the user doesn’t trust that verdict, they will still check the entire video and the addition of an AI to the process will not have saved much time at all.

In complex tasks like polyp detection, computers cannot provide completely authoritative computations like the digits of *π*; their role is closer to that of a second opinion from another specialist. Unlike other specialists, though, we cannot directly communicate with a computer and ask it why it made a certain decision. The computer is a sort of “silent expert,” if you will. While the technology is promising, it is still a work in progress with questions yet to be explored. The best we can do is to interrogate the internals of the neural network to try and understand how it works, making it important to think carefully about how this silent expert is incorporated into a decision-making process that ultimately affects people’s lives.

Get full access to the latest Wolfram Language functionality with a Mathematica 12.1 or Wolfram|One trial.

Version 12.1 of the Wolfram Language introduces the long-awaited `Video` object. The `Video` object is completely (and only) out-of-core; it can link to an extensive list of video containers with almost any codec. Most importantly, it is integrated with the complete stacks for image and audio processing, machine learning and neural nets, statistics and visualization, and many more capabilities. This already makes the Wolfram Language a powerful video computation platform, but there are still more features to explore.

A video file typically has a video and an audio track. Here is a `Video` object linked to a video file:

```wolfram
Video["ExampleData/Caminandes.mp4"]
```

In Version 12.1, by default, the `Video` object is displayed as a small thumbnail and can be played in an external player. There are other appearances to enable in-notebook players, like the `Video` object with a basic player:

```wolfram
Video["ExampleData/Caminandes.mp4", Appearance -> "Basic"]
```

Now you can inspect the `Video` object:

```wolfram
Duration[Video["ExampleData/Caminandes.mp4"]]
```

```wolfram
Information[Video["ExampleData/Caminandes.mp4"]]
```

Most video containers support multiple video, audio and subtitle tracks. Having multiple audio or subtitle tracks in a single file is more common than having more than one video track.

This is an example of a `Video` object linking to a file with multiple audio and subtitle tracks:

```wolfram
Information[Video["ExampleData/bullfinch.mkv"]]
```

There are several parts of a video you may be interested in extracting. Use `VideoFrameList` and `VideoExtractFrames` to extract specific video frames. You can also use `VideoFrameList` to sample frames uniformly or randomly from the video:

```wolfram
VideoFrameList[Video["ExampleData/Caminandes.mp4"], 3]
```

Use this function to create a thumbnail grid (a group of smaller images that summarizes the whole video):

```wolfram
VideoFrameList[Video["ExampleData/Caminandes.mp4"], 12] // ImageCollage
```

You can also trim a segment of a video:

```wolfram
VideoTrim[Video["ExampleData/Caminandes.mp4"], {30, 60}]
```

Or extract only the audio track from a video to analyze it:

```wolfram
Audio[Video["ExampleData/Caminandes.mp4"]]
```

```wolfram
Spectrogram[%]
```

In Version 12.1, we have introduced `VideoTimeSeries`, which performs any computation on the frames of a video file, either one frame at a time or on a list of frames all at once. This is a powerful tool capable of analyses like those in the examples below.

Compute the mean color of each frame over time:

```wolfram
VideoTimeSeries[Mean, Video["ExampleData/Caminandes.mp4"]] //
 ListLinePlot[#, PlotStyle -> {Red, Green, Blue}] &
```

Count the number of objects (cars, for example) detected in each frame of a video:

```wolfram
v = Video["http://exampledata.wolfram.com/cars.avi"];
```

```wolfram
ts = VideoTimeSeries[Point[ImagePosition[#, Entity["Word", "car"]]] &, v]
```

Plot the number of objects (again, using cars as an example) detected in each frame:

```wolfram
TimeSeriesMap[Length @@ # &, ts] // ListLinePlot
```

Highlight the position of all detected objects (cars) on a sample frame:

```wolfram
HighlightImage[VideoExtractFrames[v, 1], {AbsolutePointSize[3], Flatten@Values[ts]}]
```

We can also use the multiframe version of the function to perform any analysis that requires multiple frames.

By looking at consecutive frames of a Pixabay video and computing the difference between them, we can find the transition times between its four views and then use those times to extract one frame per scene:

```wolfram
v = Video["Musician.mp4"]
```

```wolfram
diffs = VideoTimeSeries[ImageDistance @@ # &, v, Quantity[2, "Frames"], Quantity[1, "Frames"]]
```

```wolfram
ListLinePlot[diffs, PlotRange -> All]
```

```wolfram
times = FindPeaks[diffs, Automatic, Automatic, 150]["Times"]
```

```wolfram
VideoExtractFrames[v, Prepend[times, 0]]
```

The Wolfram Language already includes a large variety of image and audio processing functions. `VideoFrameMap` applies any such function to one frame or a list of video frames at a time, then writes the filtered frames to a new video file. Let’s use the bullfinch video:

```wolfram
v = Video["ExampleData/bullfinch.mkv"];
VideoFrameList[v, 3]
```

We can start with a color negation as a simple “Hello, World!” example:

```wolfram
VideoFrameMap[ColorNegate, v] // VideoFrameList[#, 3] &
```

Or posterize frames to create a cartoonish effect:

```wolfram
f = With[{tmp = ColorQuantize[#, 16, Dithering -> False]}, tmp - EdgeDetect[tmp]] &;
```

```wolfram
VideoFrameMap[f, v] // VideoFrameList[#, 3] &
```

Use a neural net to perform semantic segmentation on the previously used video of cars:

```wolfram
v = Video["http://exampledata.wolfram.com/cars.avi"];
```

```wolfram
segment[img_] := Block[{net, encData, dec, mean, var},
  net = NetModel["Dilated ResNet-38 Trained on Cityscapes Data"];
  encData = Normal@NetExtract[net, "input_0"];
  dec = NetExtract[net, "Output"];
  {mean, var} = Lookup[encData, {"MeanImage", "VarianceImage"}];
  Colorize@NetReplacePart[net,
     {"input_0" -> NetEncoder[{"Image", ImageDimensions@img,
         "MeanImage" -> mean, "VarianceImage" -> var}],
      "Output" -> dec}][img]]
```

```wolfram
VideoFrameList[VideoFrameMap[segment, v], 3]
```

Next is a video stabilization example, which is a vastly simplified version of this Version 12.0 product example. The input video is another pick from Pixabay:

```wolfram
v = Video["soap_bubble.mp4"]
```

Here is a mask over the ground, to make sure the motion of the shaking soap bubble does not affect our stabilization algorithm:

```wolfram
mask = CloudGet["https://wolfr.am/Mt580rl0"];
```

Next is a routine that finds corresponding points and a geometric transformation between every two consecutive frames; each transformation is iteratively composed with the previous ones to stabilize every frame all the way back to the initial one:

```wolfram
f = Identity;
VideoFrameMap[
 Module[{tmp},
   tmp = Last@FindGeometricTransform[##, TransformationClass -> "Rigid"] & @@
     ImageCorrespondingPoints[Sequence @@ #,
      MaxFeatures -> 25, Method -> "ORB", Masking -> mask];
   f = Composition[tmp, f];
   ImagePerspectiveTransformation[#[[2]], f, DataRange -> Full, Padding -> "Fixed"]] &,
 v, Quantity[2, "Frames"], Quantity[1, "Frames"]];
```

Let’s switch the topic to generation of video. `Manipulate` has been a core way of creating animations in the Wolfram Language for over a decade. In Version 12.1, `Manipulate` expressions can easily be converted to video.

This is a `Manipulate` from the Wolfram Demonstrations Project:

```wolfram
m = ResourceData["Demonstrations Project: Day and Night World Clock"]
```

And a video generated from it:

```wolfram
Video[m]
```

A video can also be generated from a `Manipulate` and a `Sound` or `Audio` object:

```wolfram
Export["file.mp4",
  {"Animation" -> m, "Audio" -> ExampleData[{"Audio", "PianoScale"}]},
  "Rules"] // Video
```

The Wolfram Language by default uses the operating system as well as a limited version of FFmpeg to decode and encode a large number of multimedia containers and codecs. `$VideoEncoders`, `$VideoDecoders`, `$AudioEncoders` and so on list the supported encoders and decoders.

Codec support can be expanded even further by installing FFmpeg (Version 4.0.0 or higher). This is the number of decoders and the list of MP4 video decoders on macOS with FFmpeg installed:

```wolfram
Length /@ $VideoDecoders
```

```wolfram
$VideoDecoders["MP4"][[All, 1]]
```

Video computation in the Wolfram Language is only at its beginning stages. The new capabilities featured here are only part of an already powerful collection of video basics, and we are actively designing and developing updates to existing functions and additional capabilities for future versions, with machine learning and neural net integration at the top of the list. Let us know what you think in the comments—bugs, suggestions and feature requests are always welcome.

After working our way through chemical reactions, solutions and structure and bonding, we close out our step-by-step chemistry series with quantum chemistry. Quantum chemistry is the application of quantum mechanics to atoms and molecules in order to understand their properties.

Have you ever wondered why the periodic table is structured the way it is or why chemical bonds form in the first place? The answers to those questions and many more come from quantum chemistry. Wolfram|Alpha and its step-by-step chemistry offerings won’t make the wave-particle duality any less weird, but they will help you connect chemical properties to the underlying quantum mechanical behavior.

The step-by-step solutions provide stepwise guides that can be viewed one step at a time or all at once while working through a problem. Read on for example problems covering orbital diagrams, frequency and wavelength conversions, and mass-energy equivalence.

One fundamental aspect of chemistry is understanding where electrons live in atoms. Building orbital diagrams provides a good way to visualize this information. The step-by-step solution provides a general framework for solving this class of problems in the Plan step. Details of how to represent the information graphically, along with explanations of core electrons, are provided. An explanation of how many electrons a given orbital set can hold is available via the “Show intermediate steps” button.

Build the orbital diagram for elemental iron.

For this class of problem, just enter “orbital diagram for elemental iron”.
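For a rough cross-check away from Wolfram|Alpha, the bookkeeping behind the Plan step can be sketched in a few lines of Python. This illustrates only the simple n + l (aufbau) filling rule, not Wolfram|Alpha's actual method, and that rule is known to misorder a few elements such as Cr and Cu:

```python
# Subshells in aufbau order: sort by n + l, breaking ties by n
subshells = sorted(
    ((n, l) for n in range(1, 6) for l in range(n)),
    key=lambda s: (s[0] + s[1], s[0]),
)

def configuration(z):
    """Ground-state electron configuration of a neutral atom with z electrons
    (illustrative aufbau sketch only)."""
    out = []
    for n, l in subshells:
        if z <= 0:
            break
        cap = 2 * (2 * l + 1)          # each subshell holds 2(2l + 1) electrons
        e = min(z, cap)
        out.append(f"{n}{'spdfg'[l]}{e}")
        z -= e
    return " ".join(out)

print(configuration(26))   # iron → 1s2 2s2 2p6 3s2 3p6 4s2 3d6
```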

Electromagnetic radiation is central to many techniques in analytical chemistry. Converting frequency and wavelength is a critical skill for understanding theoretical models and interpreting experimental spectra. The photon wavelength calculator provides instructions for interconversion of the frequency and wavelength of electromagnetic radiation.

A sodium streetlight gives off yellow light with a wavelength of 598 nm. What is the frequency of this light?

The calculator can be fed known information directly via “photon wavelength lambda=598 nm”.
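Behind the calculator, the conversion is just ν = c/λ. A quick Python check of the expected answer:

```python
c = 299_792_458          # speed of light, m/s (exact by definition)
wavelength = 598e-9      # 598 nm, in meters
frequency = c / wavelength
print(f"{frequency:.3e} Hz")   # → 5.013e+14 Hz
```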

The nuclear binding energy is useful when tracking energy changes in nuclear reactions. Converting between mass and energy is a key step in computing nuclear binding energies. The relativistic energy calculator provides instructions for converting between mass and energy.

The mass defect for a He nucleus is 0.0304 u. What is the binding energy for this nuclide in joules per nucleus and MeV per nucleus?

The calculator can be fed known information directly via “relativistic energy m=0.0304 u”.
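The conversion itself is E = Δm c². A quick Python check of the expected answer (rounded CODATA constants; this is not the calculator's internal code):

```python
u = 1.66053906660e-27    # atomic mass unit, kg
c = 299_792_458          # speed of light, m/s
MeV = 1.602176634e-13    # joules per MeV

dm = 0.0304 * u          # mass defect, kg
E = dm * c**2            # binding energy, J
print(f"{E:.3e} J = {E / MeV:.1f} MeV")   # → 4.537e-12 J = 28.3 MeV
```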

Test your problem-solving skills by using the Wolfram|Alpha tools described to solve these word problems on quantum chemistry. Answers will be provided at the end of this post!

- Use an orbital diagram to predict the electron configuration of the P^{3–} anion.
- The Trinity test released 5.5 × 10^{26} MeV. What mass is equivalent to this energy?

Here are the answers to last week’s challenge problems on structure and bonding.

Recall that oxidation state and oxidation number are the same. Additionally, recall that Wolfram|Alpha computes all oxidation numbers in a molecule. Note that “hydrogen oxidation state lithium aluminum hydride” actually returns results for both hydrogen (H_{2}) and lithium aluminum hydride.

Wolfram|Alpha determines the hybridization for all elements except hydrogen (it only has one orbital and therefore cannot hybridize) in a molecule. So you would just need to determine that S is the central atom.

Wolfram|Alpha generates orbital diagrams for neutral atoms in their ground state. However, the neutral atom diagram can be used to figure out where additional electrons will go or which electrons might be removed the easiest. In this case, three extra electrons need to be added to make the trianion.

The electron configuration for the phosphorus trianion is 3s^{2}3p^{6}.

The mass-energy equivalence calculator can be used to solve this, but now the energy must be passed in rather than the mass.
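Rearranging E = mc² to m = E/c², a quick Python check with the energy figure from the problem shows that the Trinity test converted roughly a gram of mass:

```python
c = 299_792_458          # speed of light, m/s
MeV = 1.602176634e-13    # joules per MeV

E = 5.5e26 * MeV         # energy released, J
m = E / c**2             # equivalent mass, kg
print(f"{m * 1000:.2f} g")   # → 0.98 g
```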

We hope you’ve enjoyed reading our step-by-step chemistry series, and that our review of chemical reactions, solutions and structure and bonding, along with today’s post on quantum chemistry, have been useful in your studies. New step-by-step solution offerings for chemistry are always rolling out; equilibrium constant expressions, rate-of-reaction expressions, electron configurations, valence electrons, reaction thermochemistry and solution pH are just some of the areas on the to-do list. So stay tuned and check back frequently!

We’re back this week with more chemistry, to explore molecular structure and bonding with Wolfram|Alpha and its step-by-step chemistry offerings. Read more on chemical reactions and solutions from previous weeks, and join us next week for our final installment on quantum chemistry!

Structure and bonding in chemistry refer to where the atoms in a molecule are and what holds those atoms together. Molecules are held together by chemical bonds between the atoms comprising the molecule. Understanding the interplay between molecular structure and the electrons involved in bonding is what facilitates the design of new molecules, the control of chemical reactions and a better understanding of the molecules around us.

To master structure- and bonding-related calculations, the step-by-step solutions provide stepwise guides that can be viewed one step at a time or all at once. Read on for example problems covering Lewis structures, oxidation numbers and orbital hybridization.

Molecular species are not visible to the naked eye, so being able to represent them in a pictorial form is fundamental to communicating chemical information. One of the most common depictions is the Lewis structure. The step-by-step solution (introduced in 2013) walks you through counting the valence electrons, assigning them to each atom and determining the required number of bonds.

What is the Lewis structure of nitrogen dioxide, NO_{2}?

In this case, you can simply enter your query, “What is the Lewis structure of NO2”.

Redox reactions are a huge class of chemical reactions involving the reduction of one reactant and the oxidation of another. In order to identify the reducing and oxidizing agents, the oxidation numbers for each element in a compound must be computed. The step-by-step solution walks you through partitioning bonding electrons and accounting for the electronegativity of each element.

Assign oxidation numbers to all of the elements in Na_{2}SO_{4}.

For this type of problem, you can ask for “Na2SO4 oxidation numbers”.

Atomic orbitals of similar energy and the same symmetry can mix to form hybrid orbitals. These hybrid orbitals directly affect the three-dimensional arrangement of atoms in a molecule. The step-by-step solution explains how to determine orbital hybridization from the structure diagram and steric numbers.

What is the hybridization on each atom in succinylacetone?

Finding the hybridization is easy when you enter “succinylacetone hybridization”.

Test your problem-solving skills by using the Wolfram|Alpha tools described to solve these word problems on structure and bonding. Answers will be provided in the next blog post in this series.

- What is the oxidation state of hydrogen in lithium aluminum hydride?
- What is the orbital hybridization of the central atom in SF_{6}?

Here are the answers to last week’s challenge problems on chemical solutions.

The volume-to-mass conversions need to be done in two separate Wolfram|Alpha queries.

Then pass the results into a mass fraction query.

First, look up the cryoscopic constant for ethylene glycol.

Next, plug the retrieved information into the freezing-point depression calculator.

Join us next week for our final installment on quantum chemistry. And as always, if you have suggestions for other step-by-step content (in chemistry or other subjects), please let us know! You can reach us by leaving a comment below or sending in feedback at the bottom of any Wolfram|Alpha query page.


My name is Tigran Ishkhanyan, and I am a special functions specialist in the Algorithms R&D department at Wolfram Research, working on general problems of the theory and advanced methods of special functions. I joined Wolfram at the beginning of 2018 when I was working on my PhD project in mathematical physics at the University of Burgundy, France, and at the Institute for Physical Research, Armenia.

My PhD project had two major directions: improvement of the theory of Heun functions and their application in quantum mechanics, specifically in problems of quantum control in two-level systems and in relativistic/nonrelativistic wave equations. I came up with the idea of implementing Heun functions in the Wolfram Language when I found out that this functionality had not yet been introduced.

Every high-school student is familiar with simple functions such as `Exp`, `Log`, `Sin` and others, the so-called elementary functions. These functions are well studied and we know all their properties, but from time to time we are able to add to the Wolfram Language something completely new and insightful, like the `ComplexPlot3D` function, that can be useful for educational and scientific purposes.

For example, here is the familiar sinusoidal plot for `Sin`:

```wolfram
Plot[Sin[x], {x, -6 \[Pi], 6 \[Pi]}, PlotStyle -> Red]
```

And here is a `Plot` of the same function in the complex plane:

```wolfram
ComplexPlot3D[Sin[z], {z, -4 \[Pi] - 2 I, 4 \[Pi] + 2 I}, PlotLegends -> Automatic]
```

Special functions form another group of mathematical functions, coming after the elementary ones. They have been widely used in mathematical physics and related problems over the last few centuries. For example, the Bessel functions, which describe Fraunhofer diffraction and many other phenomena, are special functions. In particular, the oscillatory behavior of `BesselJ` makes it suitable for modeling the oscillations of drums:

```wolfram
Plot[Evaluate[Table[BesselJ[n, x], {n, 1, 3}]], {x, -10, 10}, Filling -> Axis]
```

In general, the Bessel-type functions, orthogonal polynomials and others are grouped in the class of hypergeometric functions: they are particular or limiting cases of different hypergeometric functions. The class of hypergeometric functions has a well-defined hierarchy, with the `Hypergeometric2F1` and `HypergeometricPFQ` functions standing at the top of this class. The systematic treatment of these functions was first given by Carl Friedrich Gauss.

From the mathematical point of view, the general theory of hypergeometric functions is well developed. These functions had a significant impact in science (please explore the documentation pages of the hypergeometric functions for examples of applications).

There is also a group of advanced special functions. The Mathieu, spheroidal, Lamé and Heun functions are more general than the `Hypergeometric2F1` function, so they are potent enough to solve more complex physical problems like the Schrödinger equation with a periodic potential:

```wolfram
sol = DSolveValue[-w''[z] + Cos[z] w[z] == ℰ w[z], w[z], z]
```

We have the Mathieu and spheroidal functions in the Wolfram Language, but what we didn’t yet have was the class of Heun functions (and, as a particular case, the Lamé, or ellipsoidal harmonic, functions). We have now implemented this missing group to more completely cover the area of named special functions, as most of them are either particular or limiting cases of Heun functions. Their rising popularity in the literature indicates that the Heun class is probably the next generation of special functions that will serve as a framework for future scientific developments. (For some nice references, please check the bibliography section of the Heun Project.)

There are two major directions of development for mathematical functions in the Wolfram Language: improved documentation for the functionality that is already in the system and implementation of new features, including new functions, methods and techniques of calculations.

In the first direction, we have recently standardized and significantly improved the documentation pages for the 250+ mathematical functions, based on a large collection of more than 5,000 examples, so that the documentation pages now look like small, well-structured handbooks.

In the direction of introducing new features, we have implemented powerful asymptotic tools like `Asymptotic`, `AsymptoticDSolveValue` and `AsymptoticIntegrate`. For Version 12.1, we have introduced 10 new Heun functions that are the most general special functions at the moment.

I will take a short detour and discuss the relation between mathematical functions and differential equations, since this provides the foundation for my approach to the Heun and other special functions.

Many classical elementary and special functions are particular solutions of differential equations. Indeed, many of these functions were first introduced in an attempt to solve differential equations that arose in physics, astronomy and other fields. Thus, they may be viewed as being generated by the associated differential equations.

For example, the exponential function is generated by a simple first-order differential equation:

```wolfram
DSolveValue[{w'[z] == w[z], w[0] == 1}, w[z], z]
```

Similarly, the following linear second-order differential equation generates the Legendre polynomials:

```wolfram
DSolveValue[w''[z] + (2 z)/(z^2 - 1) w'[z] - (n (n + 1))/(z^2 - 1) w[z] == 0, w[z], z]
```

I am a big fan of the idea of working directly with the differential equations instead of their particular solutions; this approach is much more beneficial, as differential equations are considered to be large data structures, and we are able to mine a lot of additional information about mathematical functions from their generating differential equations.

Now, the classification of linear differential equations is tightly connected with the structure of their singularities, or singular points, which may be either regular or irregular: these are the points in the complex plane where the coefficients of the differential equation diverge.

For the famous Bessel differential equation:

```wolfram
BesselEq = w''[z] + 1/z w'[z] + (z^2 - n^2)/z^2 w[z];
```

… that defines the Bessel functions:

```wolfram
DSolveValue[BesselEq == 0, w[z], z]
```

… the point z = 0 is a regular singular point.

We may generate the solution of a linear differential equation at regular singular points using the Frobenius method, i.e. the power-series method that generates infinite-term expansions with coefficients that obey recurrence relations uniquely defined by the differential equation. The powerful `AsymptoticDSolveValue` function gives exactly these Frobenius solutions:

```wolfram
AsymptoticDSolveValue[BesselEq == 0, w[z], {z, 0, 4}]
```
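The recurrence behind this expansion can also be run by hand. Here is a Python sketch of the Frobenius recurrence for the Bessel equation, checked against the known series of `BesselJ` rescaled to a leading coefficient of 1 (the order n = 2 is a sample value of my own choosing):

```python
from fractions import Fraction
from math import factorial, prod

n = 2  # Bessel function order (sample value)

# Frobenius recurrence for z^2 w'' + z w' + (z^2 - n^2) w = 0
# with w = z^n Sum_k c_k z^k:  c_k = -c_{k-2} / (k (k + 2 n))
c = {0: Fraction(1), 1: Fraction(0)}
for k in range(2, 11):
    c[k] = -c[k - 2] / (k * (k + 2 * n))

# Compare with J_n(z) = Sum_m (-1)^m / (m! (m + n)!) (z/2)^(2 m + n),
# rescaled so that the leading coefficient is 1
for m in range(5):
    direct = Fraction((-1) ** m,
                      4 ** m * factorial(m) * prod(range(n + 1, n + m + 1)))
    assert c[2 * m] == direct
print("Frobenius recurrence matches the Bessel series")
```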

Here the first Frobenius solution (the one that is regular at the singular point) is called `BesselJ`, while the second (singular) one is called `BesselY`. Interestingly, this is a rather common situation in the theory of special functions. There are exceptions to this rule, but usually special functions are Frobenius solutions of their generating equations at some regular singular point. Consider the Gauss hypergeometric equation, the most general second-order differential equation with three regular singular points, located at 0, 1 and ∞:

```wolfram
HypergeometricEq = w''[z] + (c/z + (1 + a + b - c)/(z - 1)) w'[z] + (a b)/(z (z - 1)) w[z];
```

One of these Frobenius solutions (the regular one) is called `Hypergeometric2F1` and is one of the most famous functions in physics:

```wolfram
DSolveValue[HypergeometricEq == 0, w[z], z]
```

Naturally, the second solution in this output (i.e. the singular one with the pre-factor power function) is the second Frobenius solution of the Gauss hypergeometric equation.

The `Hypergeometric2F1` function is an infinite series; the coefficients of this series obey a two-term recurrence relation of the form c_{n + 1} = ((a + n)(b + n))/((c + n)(n + 1)) c_n:

```wolfram
Series[Hypergeometric2F1[a, b, c, x], {x, 0, 3}]
```

… and there is an exact closed-form expression for the *n*th coefficient of the expansion. This is a common feature of all the hypergeometric functions.
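This two-term structure is easy to verify with exact rational arithmetic; here is a Python sketch with sample parameter values of my own choosing:

```python
from fractions import Fraction
from math import factorial

def poch(x, n):
    """Pochhammer symbol (x)_n as an exact rational."""
    r = Fraction(1)
    for k in range(n):
        r *= x + k
    return r

# sample parameters
a, b, c = Fraction(1, 2), Fraction(2), Fraction(3)

# direct series coefficients of 2F1: (a)_n (b)_n / ((c)_n n!)
coeff = [poch(a, n) * poch(b, n) / (poch(c, n) * factorial(n)) for n in range(8)]

# the same coefficients from the two-term recurrence
for n in range(7):
    assert coeff[n + 1] == coeff[n] * (a + n) * (b + n) / ((c + n) * (n + 1))
print("two-term recurrence verified")
```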

But an important remark is that for advanced special functions (like the Heun functions), the coefficients of the Frobenius expansions obey recurrence relations with at least three terms. There are no general closed-form expressions for these coefficients, so we do not know the explicit forms of the functions and are forced to work with their generating equations, which have one more singular point. This additional regular singular point leads to a significant complication of the solutions.

At last, after this brief diversion into the theory of special functions, we are ready to proceed and present the Heun functions.

Heun’s general differential equation is a second-order linear ordinary differential equation with four regular singular points, located at 0, 1, a and ∞ in the complex plane:

```wolfram
HeunEq = w''[z] +
   (\[Gamma]/z + \[Delta]/(z - 1) + (1 + \[Alpha] + \[Beta] - \[Gamma] - \[Delta])/(z - a)) w'[z] +
   (\[Alpha] \[Beta] z - q)/(z (z - 1) (z - a)) w[z];
```

The general Heun equation is a direct generalization of the Gauss hypergeometric equation, with just one additional regular singular point located at z = a (which may be complex). The equation was first written down in 1889 by the German mathematician Karl Heun.

Only one book, a single chapter in the Digital Library of Mathematical Functions and around three hundred articles cover the various properties and applications of these general special functions. The theory of Heun functions is poorly developed, and many important questions are still open but are being actively investigated.

The general Heun equation has six parameters. Four of them (α, β, γ and δ) determine the characteristic exponents of the Frobenius solutions at the different singular points:

AsymptoticDSolveValue[HeunEq == 0, w[z], z -> ∞] // FullSimplify

The parameter a gives the location of the third finite regular singular point, while the parameter q—referred to as an accessory or spectral parameter—is an extremely important parameter that is not available in the case of hypergeometric functions.

In analogy with the hypergeometric equation, the regular Frobenius solution of the general Heun equation at the regular singular point at the origin is called `HeunG`. It has the value 1 at the origin and branch-cut discontinuities in the complex plane running from 1 to ∞ and from a to ∞:

DSolveValue[HeunEq == 0, w[z], z]

The following shows a plot of the Heun functions for a range of values of the parameter q:

{a, \[Alpha], \[Beta], \[Gamma], \[Delta]} = {4 + I, -0.6 + 0.9 I, -0.7 I, -0.18 - 0.03 I, 0.3 + 0.6 I};

Plot[Evaluate[Table[Abs[HeunG[a, q, \[Alpha], \[Beta], \[Gamma], \[Delta], z]], {q, -20, -3, 1}]], {z, -3/10, 9/10}, PlotStyle -> Table[{Hue[i/20], Thickness[0.002]}, {i, 20}], PlotRange -> All, Frame -> True, Axes -> False]

`HeunG` simplifies to `Hypergeometric2F1` for certain sets of parameters:

A small but important remark: although closed forms of the Heun functions are unknown, various features of these functions can be revealed from their differential equations. For example, the transformation group of the `HeunG` function has 192 members (in total, there are 192 different local solutions of the general Heun equation, each written in terms of a single `HeunG` function).

Unlike the hypergeometric functions whose derivatives are hypergeometric functions with shifted parameters, the derivatives of the Heun functions are special functions of a more complex class solving more complex differential equations. These derivatives were implemented as separate functions in Version 12.1. The derivative of `HeunG` is `HeunGPrime`:

D[HeunG[a, q, \[Alpha], \[Beta], \[Gamma], \[Delta], z], z]

This pair of functions can be used to calculate the higher derivatives of `HeunG` using the differential equation to eliminate derivatives of order higher than one:

D[HeunG[a, q, \[Alpha], \[Beta], \[Gamma], \[Delta], z], {z, 2}] // Simplify

Another feature is that indefinite integrals of Heun functions cannot be expressed in terms of elementary or other special functions:

Integrate[HeunG[a, q, \[Alpha], \[Beta], \[Gamma], \[Delta], z], z]

Like the `Hypergeometric2F1` function, `HeunG` has confluent cases when one or more of the regular singular points in the general Heun equation coalesce, generating equations with a different structure of singularities. We recall that `Hypergeometric2F1` has one confluent case: the `Hypergeometric1F1` function. `HeunG` has four confluent modifications called `HeunC`, `HeunD`, `HeunB` and `HeunT` solving the single-, double-, bi- and tri-confluent Heun equations, respectively.

`HeunC` is of particular importance, as it generalizes the `MathieuC` and `MathieuS` functions, as well as others such as the `BesselI` and `Hypergeometric2F1` functions:

A noteworthy example is that `HeunC` solves the generalized spheroidal equation in its general form, without specialization of the parameters:

sol = DSolveValue[(1 - z^2) w''[z] - 2 z w'[z] + (\[Lambda] + \[Gamma]^2 (1 - z^2) - m^2/(1 - z^2)) w[z] == 0, w[z], z, Assumptions -> {\[Gamma] > 0, m > 0}]

Plot[Abs[sol /. {m -> 4/3, \[Gamma] -> 7/2} /. {C[1] -> 1/3, C[2] -> 1/3} /. \[Lambda] -> {-2, -1, 0, 1, 2}] // Evaluate, {z, -3/4, 3/4}]

`HeunD` is the standard series solution of the double-confluent Heun equation at the ordinary point z = 1:

Plot3D[Abs[HeunD[q, 0.2 + I, -0.6 + 0.9 I, -0.7 I, 0.3 + 0.6 I, z]], {q, -20, 2}, {z, 1/2, 2}, ColorFunction -> Function[{q, z, HD}, Hue[HD]], PlotRange -> All]

The `HeunB` function solves the bi-confluent Heun equation:

sol = DSolve[y''[z] + (\[Gamma]/z + \[Delta] + \[Epsilon] z) y'[z] + (\[Alpha] z - q)/z y[z] == 0, y[z], z]

It has the following series approximations around the origin:

terms = Normal@Table[Series[HeunB[1/31, 9/10, 1/10, 1/10, 3/2, z], {z, 0, m}], {m, 1, 5, 2}]

Here is a plot of the approximations:

Plot[{HeunB[1/31, 9/10, 1/10, 1/10, 3/2, z], terms}, {z, -6, 3}, PlotRange -> {-4, 8}, PlotLegends -> {"HeunB[q, \[Alpha], \[Gamma], \[Delta], \[Epsilon], z]", "1st approximation", "2nd approximation", "3rd approximation"}]

`HeunB` is truly useful, as many problems of classical and quantum physics are solved with this function. For example, the whole family of doubly anharmonic oscillator potentials (in fact, an arbitrary polynomial potential up to sixth order):

V[x_] := \[Mu] x^2 + \[Lambda] x^4 + \[Eta] x^6
Plot[V[x] /. {\[Mu] -> -7, \[Lambda] -> -5, \[Eta] -> 1}, {x, -3, 3}]

… is solved in terms of the `HeunB` function:

DSolve[-w''[z] + V[z] w[z] == ℰ w[z], w[z], z]

… while the problem of normalizable bound states is still unsolved.

The last confluent Heun function, `HeunT`, which may be considered a generalization of the Airy functions, is the solution of the tri-confluent Heun equation:

DSolve[y''[z] + (\[Gamma] + \[Delta] z + \[Epsilon] z^2) y'[z] + (\[Alpha] z - q) y[z] == 0, y[z], z]

`HeunT` solves the classical anharmonic oscillator problem (in fact, the quartic potential):

sol = DSolve[u''[z] + (Subscript[\[Lambda], 1] + Subscript[\[Lambda], 2] z^2 + Subscript[\[Lambda], 4] z^4) u[z] == 0, u[z], z]

We are able to simulate the dynamics of the oscillator using `HeunT` functions:

{Subscript[\[Lambda], 1], Subscript[\[Lambda], 2], Subscript[\[Lambda], 4]} = {1, 1/2, 1/4};
Plot[{u[z] /. sol /. {C[1] -> 1, C[2] -> 1}}, {z, 0, 9/2}]

Surprisingly (or not?), the “primes” of the Heun functions act as independent functions and have important applications in science.

The Wolfram Language also has the `MeijerG` superfunction, with a powerful tool set and a wide variety of features:

MeijerG[{{}, {}}, {{v}, {-v}}, z]

Unfortunately, `MeijerG` representations of special functions are limited to the hypergeometric class and do not apply to the Heun case (nor to the Mathieu and spheroidal cases).

These and many other interesting examples of the properties and applications of the Heun functions can be found in the documentation pages.

Heun functions have a range of applications in contemporary physics and are powerful enough to generate solutions for a significant set of unsolved problems from quantum mechanics, the theory of black holes, conformal field theory and others. They are being applied to real physical problems at a rapid rate: according to arXiv, the number of publications related to the theory of Heun functions during the last decade is triple the total of all such publications up to 2010.

Specifically, the powerful apparatus of the Heun functions allows derivation of new infinite classes of integrable potentials for relativistic and nonrelativistic wave equations used in different problems of quantum control and engineering (please see the recent paper by A. M. Ishkhanyan for different examples).

Heun functions appear in the theory of Kerr–de Sitter black holes and may be used for analysis in more complex geometries (the papers by R. S. Borissov and P. P. Fiziev and H. Suzuki, E. Takasugi and H. Umetsu discuss these problems).

The relationship between the Heun class of equations and Painlevé transcendents leads to new results for the two-dimensional conformal field theory based on the analysis of the solutions of Heun equations (see the papers of B. C. da Cunha and J. P. Cavalcante and F. Atai and E. Langmann).

The aforementioned examples, as well as others, show that the Heun functions are important and popular tools for solving a wide variety of problems in contemporary physics.

At Wolfram, we are constantly searching for fresh ideas and methods that make the Wolfram Language one of the most powerful and user-friendly tools for scientists working in different areas of contemporary science.

From time to time, the mathematical toolset has to be updated to meet new problems and challenges. Twentieth-century quantum mechanics is closely related to the hypergeometric class of functions, but the set of problems solvable with these special functions is largely exhausted, so a new generation of functions is needed. This is why for Version 12.1 of the Wolfram Language, we implemented the Heun functions and plan to continually improve the coverage of advanced special functions to meet more complex scientific challenges in the future.

Last week, we kicked off a four-part series on Wolfram|Alpha’s step-by-step chemistry offerings with chemical reactions. Future posts will cover chemical structure and bonding along with quantum chemistry. We continue this week with chemical solutions, another foundational component of all chemistry classes.

From the blood in your veins to the oceans covering the planet, solutions are everywhere! Understanding their chemical properties is essential to sustaining life, creating new materials and treating illness. As such, disciplines ranging from biology to material science to the health professions all must be comfortable doing solution-related computations.

To help master such calculations, the step-by-step results provide guides that can be viewed one step at a time or all at once. Read on for example problems covering solute concentration, solution preparation, p*K*_{a} and colligative properties.

Analysis of chemical solutions begins with determining the concentration of the components in said solution. Step-by-step results are available for computing the amount fraction, mass fraction and molality, in addition to molarity. In all cases, a general framework for solving these concentration problems is provided via the Plan step. Details of which formula to use and how to compute the necessary information are highlighted.

A 355 mL soft-drink sample contains 0.133 mol of sucrose. What is the molar concentration of sucrose in the sample?

For this problem, enter “molarity of 0.133 mol sucrose in 355 mL water”.
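The arithmetic behind this query is simple enough to sketch in a few lines of Python (an illustration only; Wolfram|Alpha performs the unit handling itself):

```python
def molarity(moles_solute, volume_mL):
    """Molar concentration (mol/L) = moles of solute / liters of solution."""
    return moles_solute / (volume_mL / 1000.0)

# 0.133 mol of sucrose in a 355 mL sample:
print(round(molarity(0.133, 355), 3))  # 0.375 (mol/L)
```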

Chemical laboratories around the world generate solutions with a desired concentration on a daily basis. Step-by-step results are available for preparing solutions by employing stock solutions of higher concentration and the dilution formula or using the definition of molarity. In both cases, a general framework for solving these types of problems is provided via the Plan step. Details of which formula to use and how to compute the necessary information are highlighted.

How many milliliters of concentrated HCl are required to make 250 mL of a 2.00 M HCl solution?

To find out how many milliliters you need, enter “prepare 250 mL of 2.00 M HCl from concentrated HCl” since Wolfram|Alpha knows the molarity of concentrated HCl.
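A quick Python sketch of the dilution formula behind this query; note that the 12.1 M concentration assumed here for concentrated HCl is a typical literature value (for roughly 37% w/w HCl), not one taken from Wolfram|Alpha:

```python
def stock_volume_mL(c_stock, c_target, v_target_mL):
    """Dilution formula C1*V1 = C2*V2, solved for the stock volume V1."""
    return c_target * v_target_mL / c_stock

# Assuming concentrated HCl is about 12.1 M:
print(round(stock_volume_mL(12.1, 2.00, 250), 1))  # 41.3 (mL)
```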

Understanding how tightly a proton is bound to another atom is central to understanding and predicting chemical reactions involving acids and bases. The experimental value containing this information is the acidity constant, *K*_{a}. Acidity constants can span many orders of magnitude, so it is easier to look at the p*K*_{a} value. The p*K*_{a} calculator provides easy interconversion of the p*K*_{a} and the acidity constant.

Hydrocyanic acid has an acidity constant value of 6.2 × 10^{–10}. What is the corresponding p*K*_{a} value?

The calculator can be fed known information directly via “pKa Ka=6.2×10^-10”.
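The interconversion is a single logarithm; a minimal Python sketch (the function name is hypothetical):

```python
from math import log10

def pKa_from_Ka(Ka):
    """pKa = -log10(Ka); the inverse direction is Ka = 10**(-pKa)."""
    return -log10(Ka)

# Hydrocyanic acid, Ka = 6.2e-10:
print(round(pKa_from_Ka(6.2e-10), 2))  # 9.21
```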

The physical properties of a solution differ from the physical properties of the pure solute and solvent composing a solution. Solution properties that primarily depend on the number of solute particles rather than the chemical nature of those particles are known as colligative properties. Step-by-step results are available to compute boiling-point elevations and freezing-point depressions along with the van 't Hoff factor employed in all colligative property formulas.

What is the boiling point of a 0.33 m solution containing a nonvolatile solute in toluene (*K*_{b} = 3.4 K kg/mol), assuming ideal solution behavior?

The necessary calculator can be fed known information directly via “boiling point elevation m=0.33 molal, i=1, Kb=3.4 K kg/mol”.
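A Python sketch of the underlying colligative-property arithmetic; the boiling point of pure toluene used below (about 110.6 °C) is an assumed literature value, not part of the query:

```python
def bp_elevation(Kb, molality, i=1):
    """Colligative boiling-point elevation: dTb = i * Kb * m."""
    return i * Kb * molality

dT = bp_elevation(3.4, 0.33)   # elevation in K for the toluene problem
# Pure toluene boils at about 110.6 degrees C, so the solution boils near:
print(round(110.6 + dT, 1))    # 111.7
```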

Test your chemical solution problem-solving skills by using the Wolfram|Alpha tools described to solve these word problems. Answers will be provided in the next blog post in this series.

- A good ratio for the salt bath used in old-fashioned ice-cream makers is five cups ice to one cup salt. What is the mass fraction of the resulting mixture?
- What is the molality of ethylene glycol for a solution that freezes at –5.00 °C?

Here are the answers to last week’s challenge problems on chemical reactions.

One might be tempted to do two separate Wolfram|Alpha queries, but recall that the atom count and molecular mass are both computed during a single mass composition calculation.

The molecular mass is 151.165 u. Hydrogen has the largest number of atoms but contributes the smallest total mass. This is a result of the atomic mass of hydrogen being so small.

Similar to the first problem, one might first think it is necessary to submit multiple Wolfram|Alpha queries. But in fact, both answers can be computed at once. Note that Wolfram|Alpha figures out the chemical formulas, does the chemical conversions and balances the chemical equation automatically.

The limiting reagent is oxygen and the theoretical yield of the product is 11.36 grams.

See you next week, when we continue with structure and bonding. If you have suggestions for other step-by-step content (in chemistry or other subjects), please let us know! You can reach us by leaving a comment below or sending in feedback at the bottom of any Wolfram|Alpha query page.

]]>*Mathematica 12 has powerful functionality for solving partial differential equations (PDEs) both symbolically and numerically. This article focuses on, among other things, the finite element method (FEM)–based solver for nonlinear PDEs that has been newly implemented in Version 12. After briefly reviewing basic syntax of the Wolfram Language for PDEs, including how to designate Dirichlet and Neumann boundary conditions, we will delineate how Mathematica 12 finds the solution of a given nonlinear problem with FEM. We then show some examples in physics and chemistry, such as the Gray–Scott model and the time-dependent Navier–Stokes equation. More information can be found in the Wolfram Language tutorial “Finite Element Programming,” on which most of this article is based.*

Mathematica, the flagship product of Wolfram Research, is powered by the Wolfram Language, which has more than 5,000 built-in functions. For ordinary and partial differential equations, the foundation of mathematical modeling and analysis, it provides powerful solvers that work both symbolically and numerically. Recently, the numerical solving capability based on the finite element method (FEM) has been greatly strengthened, making it possible to solve partial differential equations (PDEs) over arbitrary regions and to compute eigenvalues and eigenfunctions. This article introduces, with examples, the workflow for applying FEM to realistic problems, focusing on the solution of nonlinear PDEs in the latest version, Version 12. The details of the workflow for solving nonlinear PDEs with the finite element method, including all the code, are publicly available; see the tutorial “Finite Element Programming” in the Wolfram documentation for Mathematica.

The Wolfram Language provides two functions (commands) for solving differential equations numerically: `NDSolve` and `NDSolveValue`. The two differ only slightly in output format; the internal processing is identical. Below we write `NDSolve` in the text because it is shorter, and use `NDSolveValue` in the code examples because its output is easier to handle. To use FEM in Mathematica, first load the package:

Needs["NDSolve`FEM`"]

After that, one simply gives `NDSolve` the PDE, the region and the initial and boundary conditions. For example, consider the Poisson equation –∇²u = 1 on the unit disk, with the boundary condition u = 0 on the part of the boundary where x ≥ 0:

eqn = -Inactive[Div][Inactive[Grad][u[x, y], {x, y}], {x, y}] == 1;
Subscript[\[CapitalOmega], D] = Disk[];
Subscript[\[CapitalGamma], D] = DirichletCondition[u[x, y] == 0, x >= 0];
usol = NDSolveValue[{eqn, Subscript[\[CapitalGamma], D]}, u, {x, y} \[Element] Subscript[\[CapitalOmega], D]]

This yields the solution, which can then be plotted:

Plot3D[usol[x, y], {x, y} \[Element] Subscript[\[CapitalOmega], D]]

The partial differential equations to which the finite element method inside `NDSolve` can currently be applied must have the following form:

m ∂²u/∂t² + d ∂u/∂t + ∇·(−c ∇u − α u + γ) + β·∇u + a u = f    (1)

Here, when the dependent variable u to be solved for is a scalar function on ℝⁿ, the coefficients m, d, a and f are scalars, α, γ and β are n-dimensional vectors, and c is an n×n matrix. The coefficients c, a, f, α, γ and β may depend on t, x ∈ ℝⁿ, u, ∇u and so on, whereas m and d may depend only on x. For a coupled system in several dependent variables u ∈ ℝᵈ, γ and f in Eq. (1) become d-dimensional vectors and the other coefficients become matrices whose entries are vectors. The differential operators act componentwise when combined with the coefficient matrices and the vector u; for example, ∇(u₁, …, u_d)ᵀ = (∇u₁, …, ∇u_d)ᵀ, and so on.

Many PDEs appearing in the natural sciences and in engineering applications are special cases of Eq. (1). For example, the wave equation is obtained by setting all coefficients except m and c to zero, and the Navier–Stokes equations describing the velocity and pressure fields u = (v, p)ᵀ ∈ ℝ⁴ of an incompressible fluid can also be expressed in the form of Eq. (1) by a suitable choice of the coefficients. In what follows we first consider time-independent problems and apply FEM in the spatial dimensions. Time-dependent problems are discussed briefly at the end of Section 3, with examples in Sections 4.3 and 4.4.

What matters is that, for FEM to be judged applicable, the equation given to `NDSolve` must be recognized, including the dependencies of each coefficient, as having the form of Eq. (1) (the “coefficient form”). As a simple example, consider the equation entered below, which corresponds to Eq. (1) with c = –∇u, f = –4 and all other coefficients zero. If one enters the PDE as

Div[{{Derivative[1][u][x]}}.Grad[u[x], {x}], {x}] + 4 == 0

the PDE is evaluated before `NDSolve` begins processing, and as a result the first term is turned into 2u′(x)u′′(x). This is no longer the coefficient form of Eq. (1), so it cannot be solved with FEM (u′′(x) would be recognized as a coefficient of u′(x), making a coefficient depend on a second derivative). To pass the equation to `NDSolve` in the form of Eq. (1), use `Inactive` or `Inactivate`:

Inactive[Div][{{Derivative[1][u][x]}}.Inactive[Grad][u[x], {x}], {x}] + 4 == 0

This keeps the evaluation of the ∇ operators on hold.

Regions of arbitrary dimension and arbitrary shape can be specified. Simple shapes, as in the Poisson example above, can be built by combining primitives such as `Disk` and `Polygon`; regions described by equations and inequalities can use `ParametricRegion`, `ImplicitRegion` and similar constructs. It is even possible to convert an image, for example one created from a photograph, into region data usable by `NDSolve` with `ImageMesh`.

A Dirichlet boundary condition, which directly prescribes the value of the function on the boundary ∂Ω, is simply specified together with the PDE:

NDSolveValue[{PDE(s) for f[x, y], DirichletCondition[f[x, y] == bc, predicate]}, f, {x, y} \[Element] \[CapitalOmega]]

Here bc is a function giving the value on the boundary, and `predicate` specifies the part of the boundary on which f(x, y) = bc must hold. Setting `predicate` to `True` selects all of ∂Ω.

A generalized Neumann boundary condition (a Robin boundary condition) is specified with `NeumannValue`. A Robin condition prescribes the component of the flux through the boundary along the outward normal, in the form

n·(c ∇u + α u − γ) = g − q u,

where n is the outward unit normal vector on ∂Ω and the right-hand side g − qu is the value supplied by the user. Note, however, that `NeumannValue` is specified differently from `DirichletCondition`. The reason lies in the finite element approximation, where the PDE is multiplied by a test function ϕ and integrated over the region Ω to obtain the weak form. Multiplying the divergence term ∇·(−c∇u − αu + γ) of Eq. (1) by ϕ and integrating gives

∫_Ω ϕ ∇·(−c∇u − αu + γ) dΩ = ∫_Ω ∇ϕ·(c∇u + αu − γ) dΩ − ∫_{∂Ω} ϕ n·(c∇u + αu − γ) dΓ.

The integrand of the boundary integral over ∂Ω is exactly what the Robin condition prescribes, so by replacing this term with the integral of g − qu, `NDSolve` can treat the boundary condition correctly.

For example, to solve the Laplace equation –∇²u = 0 on the unit disk with the Dirichlet condition u(x, y) = 0 on the part of the boundary where x ≤ 0 and the Neumann condition n·∇u = xy² where x ≥ 1/2:

\[CapitalOmega] = Disk[];
Subscript[\[CapitalGamma], D] = DirichletCondition[u[x, y] == 0, x <= 0];
usol = NDSolveValue[{-Div[Grad[u[x, y], {x, y}], {x, y}] == NeumannValue[x*y^2, x >= 1/2], Subscript[\[CapitalGamma], D]}, u, {x, y} \[Element] \[CapitalOmega]]

(In this case the PDE is unambiguously recognized as being in coefficient form, so there is no need to make the differential operators `Inactive`, although `Inactive[Div]` and `Inactive[Grad]` would of course also work.)

The minus sign in front of `Div` makes c in Eq. (1) equal to 1, so the Neumann condition n·c∇u = n·∇u = xy² can be entered into `NeumannValue` as is. Plotting the solution gives the following:

Plot3D[usol[x, y], {x, y} \[Element] \[CapitalOmega]]

Keep in mind that, because `NeumannValue` is defined through the form of Eq. (4) based on the PDE in Eq. (1), the Neumann condition sometimes has to be adjusted “by hand.” For example, suppose that for the Poisson equation –∇²u + 1/5 = 0 we impose the Neumann condition n·∇u = xy² (for x ≥ 1/2) together with the Dirichlet condition u(x, y) = 0 (for x ≤ 0), but give `NDSolve` the PDE in the form –5∇²u + 1 = 0. Then `NDSolve` recognizes c = 5 in Eq. (1), so the `NeumannValue` corresponding to n·c∇u must be 5xy². This may seem obvious once pointed out, but note that, unlike a Dirichlet condition, a Neumann (Robin) condition is not specified independently of the PDE. Both cases, –∇²u + 1/5 = 0 and –5∇²u + 1 = 0, are shown below.

For the input equation –∇²u + 1/5 == 0:

\[CapitalOmega] = Disk[];
Subscript[\[CapitalGamma], D] = DirichletCondition[u[x, y] == 0, x <= 0];

usol = NDSolveValue[{-Div[Grad[u[x, y], {x, y}], {x, y}] + 1/5 == NeumannValue[x y^2, x >= 1/2], Subscript[\[CapitalGamma], D]}, u, {x, y} \[Element] \[CapitalOmega]]

Plot3D[usol[x, y], {x, y} \[Element] \[CapitalOmega]]

For the input equation –5∇²u + 1 == 0:

usol = NDSolveValue[{-5 Div[Grad[u[x, y], {x, y}], {x, y}] + 1 == NeumannValue[5 x y^2, x >= 1/2], Subscript[\[CapitalGamma], D]}, u, {x, y} \[Element] \[CapitalOmega]]

Plot3D[usol[x, y], {x, y} \[Element] \[CapitalOmega]]

The same applies to a Robin condition such as 3u + n·∇u = xy²: if the left-hand side of the PDE inside `NDSolve` is –∇²u + 1/5, the right-hand side should be `5*NeumannValue[1/2*x*y^2 - 3/2*u[x, y], x >= 1/2]`.

Applying FEM requires generating a mesh inside the target region; we will not go deeply into this here. Briefly, for those interested: for two-dimensional regions a tool called Triangle is used, and for three-dimensional regions a tool called TetGen. Triangle performs Delaunay triangulations, constrained Delaunay triangulations and conforming Delaunay triangulations, while TetGen can generate tetrahedral meshes of three-dimensional regions, such as constrained Delaunay tetrahedralizations and boundary conforming Delaunay meshes. The Wolfram Language uses these automatically as needed, but users can also customize them flexibly. For details, see the explanations here for Triangle and here for TetGen.

For a linear PDE, one passes from the weak form through discretization to a system of linear equations, which is then solved; this same machinery is also used when solving nonlinear PDEs. The basic flow is:

- Linearize the nonlinear PDE in the neighborhood of a seed (a candidate solution)
- Discretize and solve the linearized equation
- If the difference between the seed and the obtained solution is within the allowed tolerance, stop
- Otherwise, take the obtained solution as the new seed and return to the linearization step

In other words, the process parallels solving a nonlinear algebraic equation with the Newton–Raphson method. The details are described in the tutorial in the Wolfram documentation mentioned above; a brief summary follows.

First, removing the time-derivative terms from Eq. (1) leaves the stationary equation

∇·(−c∇u − αu + γ) + β·∇u + au − f = 0,

and by collecting the flux terms into Γ(u, ∇u) = −c∇u − αu + γ and the remaining terms into F(u, ∇u) = f − β·∇u − au, this takes the simple form

∇·Γ(u, ∇u) − F(u, ∇u) = 0.

This nonlinear PDE is then linearized. As when numerically solving a nonlinear equation in one variable, we start from a suitable seed function u₀ and approach asymptotically the true solution u* satisfying ∇·Γ(u*) − F(u*) = 0. Writing the difference between u* and u₀ as r = u* − u₀,

expanding Γ and F in Taylor series around u₀ and neglecting the higher-order O(r²) terms gives

∇·(Γ(u₀) + Γ′(u₀) r) − (F(u₀) + F′(u₀) r) = 0.

The derivatives Γ′ and F′ are obtained by computing ∂Γ/∂∇u, ∂Γ/∂u, ∂F/∂∇u, ∂F/∂u and so on. Evaluating these at u₀, Eq. (9) becomes a system of linear equations for u at each discretized point (node). Combining it with the initial and boundary conditions closes the system, from which r is obtained.

By default the seed u₀ is taken to be u(x) = 0, ∀x ∈ Ω, but it can be specified as an option to `NDSolve`, for example `InitialSeeding→{u[x,y]==x+Exp[-Abs[y]]}`. Since the asymptotic, linearization-based solution process may fall into an unintended local solution, choosing a good seed from the background of the problem can be valuable. Also, when computing the residual r from Eq. (13), evaluating the Jacobian ∇·Γ′(u₀) − F′(u₀) appearing on the left-hand side is expensive and dominates the overall computation time. For this reason, when applying nonlinear FEM the Wolfram Language does not use the Newton–Raphson method itself but an affine covariant Newton method, combined with Broyden’s method, which reuses the Jacobian from previous steps whenever acceptable, greatly reducing the number of Jacobian evaluations.

For time integration, after discretizing in the spatial dimensions to obtain a system of equations (matrices), this system can be regarded as a system of ordinary differential equations in time, to which various methods can be applied: the method of lines or, in some cases, FEM applied in the time direction as well.

A magnetic field arises around an electric current. For a motor-like configuration, we compute the magnetic field distribution produced when a current flows through a coil, in particular for a nonlinear material in which the permeability of the ferromagnets making up the stator and rotor depends on the magnetic field. The basic model equations are the relation between the magnetic field and the vector potential, and Ampère’s law. Combining them into one and adopting the Coulomb gauge gives Eq. (10).

We assume that the current has a component only in the z direction and, to simplify the problem, that the x and y components of the vector potential are constant and the permeability is uniform in the z direction. Then only the z component of Eq. (10) is meaningful, and we obtain a PDE, Eq. (11), for the scalar quantity u = A_z.

For the permeability μ(B), we used an expression fitted to the measured data shown below.

ListPlot[BHData, PlotLabel -> "Measured magnetic susceptibility"]

Clear[a1, a2, b1, b2, c1, c2];
model = a1 Exp[-(x - b1)^2/(c1^2)] + a2/(1 + Exp[(x - b2)^2/(c2^2)])^2;
fitData = FindFit[BHData, model, {a1, b1, c1, a2, b2, c2}, x];
fit = Function[x, Evaluate[model /. fitData]]

Show[ListPlot[BHData], Plot[fit[x], {x, 0, 3}], PlotLabel -> "Fitted curve for magnetic susceptibility"]

The following figure schematically shows a cross-section of the motor. Under the assumption that current flows perpendicular to the screen through the yellow and orange parts, we compute the magnetic field strength distribution from the nonlinear PDE, Eq. (11).

mesh["Wireframe"["MeshElementStyle" -> (Directive[FaceForm[#], EdgeForm[]] & /@ {Blue, Red, Gray, Orange, LightOrange, Yellow})]]

Setting the permeability and specifying the elements carrying current. In `NDSolve`, a Dirichlet boundary condition is imposed so that the magnetic field vanishes outside the motor stator.

B2Norm = Sqrt[Total[Grad[u[x, y], {x, y}]^2] + $MachineEpsilon]; (* norm of grad u *)
\[Mu]Air = 4 \[Pi]*10^-7;
\[Nu] = Piecewise[{{-1/fit[B2Norm], ElementMarker == 2 || ElementMarker == 3}}, -1/\[Mu]Air]*IdentityMatrix[2];

jz = Piecewise[{{10, ElementMarker == 4}, {-10, ElementMarker == 6}}, 0];

usol = NDSolveValue[{Inactive[Div][(\[Nu] . Inactive[Grad][u[x, y], {x, y}]), {x, y}] == jz, DirichletCondition[u[x, y] == 0, x^2 + y^2 >= 0.95]}, u, {x, y} \[Element] mesh]

Displaying the result together with a wireframe of the motor structure:

{minsol, maxsol} = MinMax[usol["ValuesOnGrid"]];
Show[ContourPlot[usol[x, y], {x, y} \[Element] mesh, PlotRange -> All, ColorFunction -> "TemperatureMap", Contours -> Range[minsol, maxsol, (maxsol - minsol)/15]], ToBoundaryMesh[mesh]["Wireframe"], VectorPlot[Evaluate[{{0, 1}, {-1, 0}}.Grad[usol[x, y], {x, y}]], {x, y} \[Element] mesh, StreamPoints -> Coarse]]

The Navier–Stokes equations describing an incompressible fluid in a steady state,

ρ (u·∇)u − ν∇²u + ∇p = 0,  ∇·u = 0,

are essentially nonlinear because of the convective term in the first equation (here the density is ρ = 1 and the external force field is zero). Since u is a vector, in two dimensions the first equation is written out componentwise, according to whether the differential operators act on the velocity component u or v:

\[Nu] =.;
navierstokes = {\[Rho]*{{u[x, y], v[x, y]}}.Inactive[Grad][u[x, y], {x, y}] + Inactive[Div][{{-\[Nu], 0}, {0, -\[Nu]}}.Inactive[Grad][u[x, y], {x, y}], {x, y}] + Derivative[1, 0][p][x, y], \[Rho]*{{u[x, y], v[x, y]}}.Inactive[Grad][v[x, y], {x, y}] + Inactive[Div][{{-\[Nu], 0}, {0, -\[Nu]}}.Inactive[Grad][v[x, y], {x, y}], {x, y}] + Derivative[0, 1][p][x, y], Derivative[0, 1][v][x, y] + Derivative[1, 0][u][x, y]};

bcs = {DirichletCondition[u[x, y] == 2, y == 1], DirichletCondition[u[x, y] == 0, y != 1], DirichletCondition[v[x, y] == 0, True], DirichletCondition[p[x, y] == 0, x == 0 && y == 0]};

op = navierstokes /. {\[Rho] -> 1, \[Nu] -> 1/1000};
{uVel, vVel, pressure} = NDSolveValue[{op == {0, 0, 0}, bcs}, {u, v, p}, {x, y} \[Element] Rectangle[{0, 0}, {1, 1}], Method -> {"FiniteElement", "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1}, "MeshOptions" -> {"MaxCellMeasure" -> 0.0001}}];

Let us visualize the resulting velocity field.

Show[StreamPlot[{uVel[x, y], vVel[x, y]}, {x, 0, 1}, {y, 0, 1}, Axes -> None, Frame -> None, StreamPoints -> {Automatic, Scaled[0.02]}], ToBoundaryMesh[uVel["ElementMesh"]]["Wireframe"]]

The pressure distribution looks like this:

Plot3D[pressure[x, y], {x, 0, 1}, {y, 0, 1}, PlotRange -> {-0.5, 1.5}, PlotPoints -> 80, Boxed -> True]

The next example computes a reaction–diffusion system: the Gray–Scott model, a system of coupled nonlinear PDEs modeling the concentration changes of several chemical species under chemical reaction and diffusion. A feed chemical U is continuously introduced from outside into a reaction vessel filled with another substance V; through the autocatalytic reaction

U + 2V → 3V,  V → P,

U is converted into the final product P, which is removed from the system. The model states that the time evolution of the concentrations u and v of U and V is described by

∂u/∂t = D_u ∇²u − u v² + f (1 − u),
∂v/∂t = D_v ∇²v + u v² − (f + k) v,

where D_u and D_v are the diffusion coefficients and f and k are rate parameters.

eqn = {D[u[t, x, y], t] + Inactive[Plus][Inactive[Div][{{-c1, 0}, {0, -c1}}.Inactive[Grad][u[t, x, y], {x, y}], {x, y}], (v[t, x, y]^2 + f)*u[t, x, y]] == f, D[v[t, x, y], t] + Inactive[Plus][Inactive[Div][{{-c2, 0}, {0, -c2}}.Inactive[Grad][v[t, x, y], {x, y}], {x, y}], (-u[t, x, y]*v[t, x, y] + f + k)*v[t, x, y]] == 0} //. {c1 -> 2.*10^-5, c2 -> c1/4, f -> 0.04, k -> 0.06};

ics = {u[0, x, y] == 1/2, v[0, x, y] == If[x^2 + y^2 <= 0.025, 1., 0.]};
bcs = {DirichletCondition[u[t, x, y] == 0, True], DirichletCondition[v[t, x, y] == 0, True]};

{ufun, vfun} = NDSolveValue[{eqn, bcs, ics}, {u, v}, {x, y} \[Element] Disk[], {t, 0, 3000}, Method -> {"TimeIntegration" -> {"IDA", "MaxDifferenceOrder" -> 2}, "PDEDiscretization" -> {"MethodOfLines", "DifferentiateBoundaryConditions" -> True, "SpatialDiscretization" -> {"FiniteElement", "MeshOptions" -> {"MaxCellMeasure" -> 0.002}}}}];

{vmin, vmax} = MinMax[vfun["ValuesOnGrid"]];
frames = Table[Rasterize[ContourPlot[vfun[t, x, y], {x, -1, 1}, {y, -1, 1}, Contours -> Range[vmin, vmax, (vmax - vmin)/4], PlotRange -> All], RasterSize -> Large], {t, 100, 2000, 20}];

ListAnimate[frames]

Time-dependent incompressible flow is described by Eq. (12) with a time-derivative term added. We compute the velocity distribution for fluid flowing through the space between two infinite parallel plates, with an infinitely long cylinder placed perpendicular to the flow. Taking the plane perpendicular to the plates and the cylinder as the xy plane, the unknowns are the velocity (u, v) and the pressure p. The Wolfram Language code is as follows.

The Navier–Stokes equations:

transientnavierstokes = {\[Rho]*{{u[t, x, y], v[t, x, y]}}.Inactive[Grad][u[t, x, y], {x, y}] + Inactive[Div][{{-\[Mu], 0}, {0, -\[Mu]}}.Inactive[Grad][u[t, x, y], {x, y}], {x, y}] + Derivative[0, 1, 0][p][t, x, y] + \[Rho]*Derivative[1, 0, 0][u][t, x, y], \[Rho]*{{u[t, x, y], v[t, x, y]}}.Inactive[Grad][v[t, x, y], {x, y}] + Inactive[Div][{{-\[Mu], 0}, {0, -\[Mu]}}.Inactive[Grad][v[t, x, y], {x, y}], {x, y}] + Derivative[0, 0, 1][p][t, x, y] + \[Rho]*Derivative[1, 0, 0][v][t, x, y], Derivative[0, 0, 1][v][t, x, y] + Derivative[0, 1, 0][u][t, x, y]};

Setting the size of the flow domain and the velocity profile at the inlet. To avoid the flow velocity jumping discontinuously from zero to a nonzero value at some instant, we define a `rampFunction` that provides a smooth change of velocity. From the domain size, flow speed and related quantities, the Reynolds number of this flow is about 200.

rules = {length -> 2.2, height -> 0.41};
\[CapitalOmega] = RegionDifference[Rectangle[{0, 0}, {length, height}], Disk[{1/5, 1/5}, 1/20]] /. rules;
rmf = RegionMember[\[CapitalOmega]];
rampFunction[min_, max_, c_, r_] := Function[t, (min*Exp[c*r] + max*Exp[r*t])/(Exp[c*r] + Exp[r*t])]
ramp = rampFunction[0, 1, 4, 5];
GraphicsRow[{Show[BoundaryDiscretizeRegion[\[CapitalOmega]], VectorPlot[{4*1.5*y*(0.41 - y)/0.41^2, 0}, {x, 0, 2.2}, {y, 0, 0.41}, VectorScale -> Small, VectorStyle -> Red, VectorMarkers -> Placed["Arrow", "Start"], VectorPoints -> Table[{0, y}, {y, 0.05, 0.35, 0.075}], ImageSize -> Large]], Plot[ramp[t], {t, -1, 10}, PlotRange -> All, ImageSize -> Large, AspectRatio -> 1/5]}]

Boundary and initial conditions:

op = transientnavierstokes /. {\[Mu] -> 10^-3, \[Rho] -> 1};
bcs = {DirichletCondition[u[t, x, y] == ramp[t]*4*1.5*y*(height - y)/height^2, x == 0], DirichletCondition[u[t, x, y] == 0, 0 < x < length], DirichletCondition[v[t, x, y] == 0, 0 <= x < length], DirichletCondition[p[t, x, y] == 0, x == length]} /. rules;
ic = {u[0, x, y] == 0, v[0, x, y] == 0, p[0, x, y] == 0};

Computing the evolution of the velocity distribution from *t* = 0 to 10 with `NDSolve`, while monitoring *t*. This takes about six minutes on a typical PC (3.1 GHz Intel Core i5, 16 GB RAM).

Dynamic["time: " <> ToString[CForm[currentTime]]]
AbsoluteTiming[{xVel, yVel, pressure} = NDSolveValue[{op == {0, 0, 0}, bcs, ic}, {u, v, p}, {x, y} \[Element] \[CapitalOmega], {t, 0, 10}, Method -> {"TimeIntegration" -> {"IDA", "MaxDifferenceOrder" -> 2}, "PDEDiscretization" -> {"MethodOfLines", "DifferentiateBoundaryConditions" -> True, "SpatialDiscretization" -> {"FiniteElement", "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1}, "MeshOptions" -> {"MaxCellMeasure" -> 0.0002}}}}, EvaluationMonitor :> (currentTime = t;)];]

Coloring by the magnitude of the velocity at each point and creating an animation:

{minX, maxX} = MinMax[Sqrt[xVel["ValuesOnGrid"]^2 + yVel["ValuesOnGrid"]^2]];
mesh = xVel["ElementMesh"];
frames = Table[Rasterize[ContourPlot[Norm[{xVel[t, x, y], yVel[t, x, y]}], {x, 0, 2.2}, {y, 0, 0.41}, PlotRange -> All, AspectRatio -> Automatic, ColorFunction -> "TemperatureMap", Contours -> Range[minX, maxX, (maxX - minX)/7], Axes -> False, Frame -> None, RegionFunction -> Function[{x, y, z}, rmf[{x, y}]]], RasterSize -> 4*{360, 68}, ImageSize -> 2*{360, 68}], {t, 4, 10, 1/20}];

ListAnimate[frames]

The mesh generated for the region can be inspected as follows:

ToElementMesh[\[CapitalOmega], "MaxCellMeasure" -> 0.0002]["Wireframe"]

As we have seen, Mathematica 12 (Wolfram Language 12) greatly extends the range of applicability of the finite element method, making it possible to solve many nonlinear partial differential equations, including the Navier–Stokes equations. Thanks to the Wolfram Language's strength in symbolic computation, problems can be handled in a unified and efficient way with a high degree of generality, regardless of the form of the individual PDE. As mentioned above, the details of the FEM-related internal processing are publicly documented; application examples in many fields, including elasticity analysis, acoustics, and heat and vibration conduction analysis, are also explained in detail in the Mathematica tutorials, which we hope readers will find useful.

Please note: where possible, I have taken data from before the DHSC’s plan was published.

What is the computational thinking process? Simply put, it is a sequence of four steps that you can take in order to solve a problem. The aim is not just to obtain a solution, but to ensure that the right choices were made, the right tools were used and the right outcomes were achieved along the way. The steps are as follows: you define explicitly the problem you wish to solve, abstract it to a computational form, compute an answer, then interpret the result:

If, upon interpretation, the result does not meet your initial requirements, you then repeat the process—climbing the solution helix—until you are satisfied with the result:

In early March, the DHSC published a coronavirus action plan (“A Guide to What You Can Expect across the UK”). The plan has three main phases: contain, delay and mitigate, with each new phase superseding the last. There is also a background project ongoing. At the time of writing this post, the UK has progressed to the delay phase.

Problem: how can we prevent the infection from spreading?

The first phase makes the assumptions that cases are few and scattered (as indeed they were at the time) and that transmissions must be made through close contact (i.e. minimal airborne transmission). It also assumes that the virus has an incubation period of up to two weeks.

Once an infected person has been identified, the spread of infection can be monitored by means of contact tracing (finding out whom the person has been in contact with and, if necessary, isolating them). This can be modelled with a transmission model.

Here’s an example of a stochastic transmission model:

- “Feasibility of Controlling COVID-19 Outbreaks by Isolation of Cases and Contacts,” by Joel Hellewell, Sam Abbott, et al.

This model suggests that contact tracing and isolation are enough to contain a new outbreak, but that the likelihood of succeeding decreases if new cases aren’t detected and isolated quickly. This matches what the DHSC recommends:

- Detect and isolate cases as they emerge
- Repatriate British nationals and their dependants from affected areas overseas (and isolate, if infected)
- Enforce health measures at ports; ask for declaration from incoming planes or vessels that all passengers are well
- Give medical professionals and police the power to detain and direct individuals at risk
- Inform all medical professionals of the steps they should take if they identify an infected patient
- Inform the public of the steps they should take if they suspect they have become infected (wash hands, cough into elbow, etc.)
- Strategically stockpile medicines and protective equipment

If outbreaks aren’t contained and the basic reproduction number (the average number of people infected by a typical individual with the infection) is sufficiently high, widespread infection may be unavoidable. This brings us to phase two: delay.

Problems: How can we reduce pressure on the NHS? How can we delay the peak of infections?

The second phase assumes that isolating infected individuals hasn’t worked; the spread of the virus is inevitable. With infections set to rise, hospitals are going to see a far higher intake of patients with respiratory problems. If this occurs during the winter months—when such problems are already more common—it could overrun the system. Finding a vaccine is critical if the burden on the system is to be relieved entirely.

Modelling how the virus spreads is now critical. With little hard data to go on, it makes sense to look at how similar viruses such as seasonal influenza or SARS have been modelled in the past. Typical models include compartmental (e.g. SIR), stochastic, agent-based and network models.

(Please note: model types are not mutually exclusive.)

Here are a few examples from Wolfram Community:

- “Agent-Based Epidemic Simulation,” by Jon McLoone
- “Epidemic Simulation with a Polygon Container,” by Francisco Rodríguez
- “Epidemiological Models for Influenza and COVID-19,” by Robert Nachbar
- “Agent-Based Network Models for COVID-19,” by Christopher Wolfram

Here’s an excellent analysis of epidemics with SIR models from 3Blue1Brown:

- “Simulating an Epidemic,” by Grant Sanderson
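A minimal discrete-time SIR model is easy to write down. The Python sketch below (parameter values are mine, chosen only for illustration) shows the "flattening" effect: halving the contact rate markedly lowers the peak number of simultaneous infections:

```python
def sir_peak_infected(beta, gamma=0.1, n=1000.0, i0=1.0, days=500, dt=0.1):
    """Integrate the SIR equations with Euler steps; return the peak of I."""
    s, i, r = n - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt   # dS/dt = -beta*S*I/N
        new_recoveries = gamma * i * dt          # dR/dt = gamma*I
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

unmitigated = sir_peak_infected(beta=0.3)   # basic reproduction number 3
distanced = sir_peak_infected(beta=0.15)    # contact rate halved
print(f"peak infections: {unmitigated:.0f} vs {distanced:.0f}")
```

Plotting I over time for both runs gives the familiar pair of curves: same population, same recovery rate, but a far lower and later peak when contacts are reduced.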

Together, these models suggest that isolation strategies can reduce the peak infection rate—or “flatten the curve”—and that many small meetings are better than a few large ones. This matches what the DHSC recommends:

- Continue detecting and isolating cases as they emerge
- Enforce social distancing (close public places, encourage working from home, cancel large gatherings, advise against all but essential travel)
- Increase publicity of advice to individuals about protecting themselves and others

Many businesses depend on customer travel (e.g. the tourism industry) or have procedures that cannot be carried out from home (e.g. retail, dining). With social distancing enforced, a prolonged pandemic could lead to a significant proportion of the population being unemployed. This brings us to phase three: mitigation.

Problems: How can we save as many lives as possible? How can we ensure the country keeps running?

The third phase assumes that the virus is now widely established, or is continuing for longer than initially anticipated. The government must now decide on their priorities for the health of the country, both medically and economically.

Hospitals, for example, must work out how they can use their resources strategically to minimise the number of casualties. HM Revenue and Customs (HMRC) must decide how to tackle wide-scale unemployment.

Being up to date with the latest data is now essential: How do age and health correlate with susceptibility to the virus or its severity? Who is most at risk? Which industries will be most affected by unemployment? Which areas of the country?

Optimisation methods such as linear programming can be used to help assign medical equipment to the right places. Existing income schemes can be modified to work on a wider scale.
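To make the idea concrete, here is a deliberately tiny allocation problem in Python, solved by brute force. The numbers are invented; anything beyond toy scale would use a proper linear-programming solver, but the constraint structure is the same:

```python
from itertools import product

# Toy problem: ship ventilators from two depots to two hospitals
# at minimum transport cost (all numbers invented for illustration).
supply = [3, 4]            # units available at each depot
demand = [5, 2]            # units required at each hospital
cost = [[1, 4],            # cost[d][h]: shipping one unit, depot d -> hospital h
        [2, 1]]

def best_allocation():
    best, best_cost = None, float("inf")
    # Brute-force search over integer shipment plans; a real problem
    # would hand these same constraints to an LP solver.
    for x in product(range(6), repeat=4):
        ship = [[x[0], x[1]], [x[2], x[3]]]
        if any(sum(ship[d]) > supply[d] for d in range(2)):
            continue  # cannot ship more than a depot holds
        if any(ship[0][h] + ship[1][h] != demand[h] for h in range(2)):
            continue  # every hospital's demand must be met exactly
        c = sum(cost[d][h] * ship[d][h] for d in range(2) for h in range(2))
        if c < best_cost:
            best, best_cost = ship, c
    return best, best_cost

plan, total = best_allocation()
print(plan, total)
```

The objective (total cost) and the supply and demand constraints are all linear in the shipment quantities, which is exactly what makes the full-scale version a linear program.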

Here are some data visualisations comparing age and income with fatalities:

- “Study: Elderly Most at Risk from the Coronavirus,” by Niall McCarthy
- “COVID-19 Case-Fatality Ratio, Income and Age: Simple Visualization,” by Mads Bahrami

Here’s an example supply strategy by the US Department of Health and Human Services:

- “Strategies for Optimizing the Supply of Facemasks: COVID-19,” Centers for Disease Control and Prevention

These studies suggest that fatality rates rise with age and with falling median household income, implying that medical care should be concentrated on the elderly, and that a temporary income scheme may be needed. This matches what the DHSC recommends:

- Delay non-urgent care; direct emergency services to concentrate on most urgent cases only
- Support businesses facing short-term cash flow issues
- Support early discharge from hospitals; encourage home care
- Further increase publicity of advice to individuals about protecting themselves and others
- Draw on existing stockpiles of medicines, medical devices and clinical consumables, and employ a distribution strategy
- Call medical leavers and retirees back to duty
- Reduce focus on wide-scale measures (such as intensive contact tracing)

This post has demonstrated how the DHSC’s plan aligns with the computational thinking process. Most important of all, it has shown how making the right assumptions before tackling a problem is essential to getting a meaningful answer. For more information about how the process can be applied at the industry, government, education or individual level, go to computationalthinking.org.

If you’d like to try out the process for yourself, a number of high-school problem-solving modules are available at this Wolfram Community page.

If you’re studying chemistry or are in a discipline requiring chemistry prerequisite courses, then you know how expensive the required textbooks can be. To combat this, the chemical education community has developed open educational resources to provide free chemistry textbooks. However, although free textbooks keep cash in your wallet, they don’t include solution guides for all the homework problems.

Luckily, the Step-by-Step Solutions feature of Wolfram|Alpha has got your back! Whether you’re studying remotely or collaborating via video conferencing, Wolfram|Alpha helps you learn and apply the problem-solving frameworks for chemical word problems. The step-by-step solutions provide stepwise solution guides that can be viewed one step at a time or all at once. The guides not only hone efficient problem solving, but also facilitate digging deeper into concepts that might still be murky.

Over the next few weeks, we’ll be exploring some of the popular topics that middle-school, high-school and college students encounter in their chemistry courses and final exams: chemical reactions, structure and bonding, chemical solutions, and finally, quantum chemistry. Read on for example problems in chemical reactions and their step-by-step solutions!

A fundamental aspect of chemistry is balancing chemical equations. If chemical equations are the language in which chemical processes are expressed, then balancing chemical equations is the corresponding grammar. The step-by-step solution walks you through a robust algebraic approach to identifying the stoichiometric coefficients.

Write the balanced equation for the reaction of copper with nitric acid to produce copper nitrate, nitrogen dioxide and water.

For this class of problem, just enter “balance copper + nitric acid -> copper nitrate + nitrogen dioxide + water”.
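Under the hood, balancing reduces to solving a small linear system over the element counts. As a rough illustration of the algebraic approach (this brute-force search is my stand-in, not Wolfram|Alpha's actual algorithm), here it is in Python for this reaction:

```python
from itertools import product

# Element counts for each species in
#   a Cu + b HNO3 -> c Cu(NO3)2 + d NO2 + e H2O
species = [
    {"Cu": 1},                      # Cu
    {"H": 1, "N": 1, "O": 3},       # HNO3
    {"Cu": 1, "N": 2, "O": 6},      # Cu(NO3)2
    {"N": 1, "O": 2},               # NO2
    {"H": 2, "O": 1},               # H2O
]
elements = ["Cu", "H", "N", "O"]

def balanced(coeffs):
    a, b, c, d, e = coeffs
    signs = [a, b, -c, -d, -e]      # reactant counts minus product counts
    return all(
        sum(s * sp.get(el, 0) for s, sp in zip(signs, species)) == 0
        for el in elements
    )

# First balanced coefficient tuple in ascending lexicographic order
solution = next(x for x in product(range(1, 9), repeat=5) if balanced(x))
print(solution)
```

Each element contributes one linear balance equation; the balanced equation corresponds to the smallest positive integer solution of that system.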

After balancing the related chemical equations, the next step in planning a laboratory experiment is computing how much of each reactant must be measured out. To do this, one needs the molar mass for each reactant. Step-by-step solutions are available for the molecular mass and relative molecular mass in addition to the molar mass. In all cases, a general framework for solving these types of problems is provided via the Plan step. Details of which formula to use and how to gather the necessary information are provided.

Calculate the molar mass of silver sulfate, Ag_{2}SO_{4}.

In this case, just enter “molar mass silver sulfate”.
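The arithmetic behind the Plan step is a weighted sum of atomic masses. A quick Python check (atomic masses rounded; Wolfram|Alpha uses more precise values):

```python
# Approximate standard atomic masses in g/mol
ATOMIC_MASS = {"Ag": 107.87, "S": 32.06, "O": 16.00}

def molar_mass(formula):
    """formula: mapping of element symbol -> atom count."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

silver_sulfate = {"Ag": 2, "S": 1, "O": 4}
print(f"{molar_mass(silver_sulfate):.2f} g/mol")  # about 311.80 g/mol
```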

One way to analyze individual chemicals is to compute and compare the mass and atom percentages. The step-by-step solution provides a general framework for solving this class of problem in the Plan step. Details of the relevant equations, as well as how to compute the necessary intermediate values, are provided. Ways in which you can check your work during the calculations are also available via the “Show intermediate steps” buttons.

Antihemophilic factor is a coagulant with the formula C_{11794}H_{18314}N_{3220}O_{355}S_{83}. What is its percent composition?

For the answer, just enter “antihemophilic factor elemental composition”.
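The same bookkeeping gives mass percentages: each element's contribution to the molar mass divided by the total. A toy Python version, shown on the small silver sulfate formula from the previous example rather than the 11,794-carbon coagulant:

```python
ATOMIC_MASS = {"Ag": 107.87, "S": 32.06, "O": 16.00}  # approximate, g/mol

def mass_percents(formula):
    """Mass percent of each element in a formula (symbol -> atom count)."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return {el: 100 * ATOMIC_MASS[el] * n / total for el, n in formula.items()}

for element, pct in mass_percents({"Ag": 2, "S": 1, "O": 4}).items():
    print(f"{element}: {pct:.2f}%")
```

A handy sanity check, and one the step-by-step guides encourage, is that the percentages must sum to 100.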

Chemical conversions crop up in nearly every chemistry homework or research problem. As such, step-by-step solutions are available for converting among moles, mass, volume, molecules and atoms. Unit conversions and dimensional analysis details are provided.

How many atoms are in five milliliters of a 1.5 mM magnesium hydroxide solution?

To solve this, just enter “convert 5 mL of 1.5 mM magnesium hydroxide to atoms”.
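The chain of unit conversions for this particular problem is short enough to check by hand. A Python sketch of the dimensional analysis (Avogadro's number rounded):

```python
AVOGADRO = 6.022e23  # particles per mole

# 5 mL of a 1.5 mM (millimolar) magnesium hydroxide solution
volume_l = 5e-3              # 5 mL in litres
concentration = 1.5e-3       # mol per litre

moles = volume_l * concentration      # moles of dissolved Mg(OH)2
formula_units = moles * AVOGADRO      # Mg(OH)2 formula units
atoms = formula_units * 5             # 1 Mg + 2 O + 2 H per formula unit

print(f"{atoms:.3g} atoms")  # roughly 2.26e19
```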

After running a chemical reaction, one often wants to know how the reaction went by computing the reaction yields. Step-by-step solutions are available for computing the amount of reactants needed and the theoretical yield in addition to the percent yield. The use of stoichiometric factors to generate the desired values is explained in detail.

Upon reaction of 1.274 grams of copper sulfate with excess zinc metal, 0.392 grams of copper metal was obtained according to the following equation: CuSO_{4}(aq)+Zn(s)⟶Cu(s)+ZnSO_{4}(aq). What is the percent yield?

To find the percent yield, just append the mass values to the corresponding chemical species and ask for the stoichiometry, “1.274 g CuSO4 + Zn -> 0.392 g Cu + ZnSO4 stoichiometry”.
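The arithmetic for this particular problem can be sketched in Python (molar masses rounded; this is an illustration of the stoichiometric reasoning, not Wolfram|Alpha's internals):

```python
# Approximate molar masses in g/mol
M_CUSO4 = 63.55 + 32.06 + 4 * 16.00   # copper(II) sulfate
M_CU = 63.55                          # copper

mass_cuso4 = 1.274   # grams of the limiting reactant
mass_cu = 0.392      # grams of product actually recovered

# 1:1 stoichiometry: each mole of CuSO4 can yield one mole of Cu
moles_cuso4 = mass_cuso4 / M_CUSO4
theoretical_cu = moles_cuso4 * M_CU
percent_yield = 100 * mass_cu / theoretical_cu

print(f"theoretical yield {theoretical_cu:.3f} g, "
      f"percent yield {percent_yield:.1f}%")
```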

Test your chemical reaction problem-solving skills by using the Wolfram|Alpha tools described to solve these word problems. Answers will be provided in the next blog post in this series.

- Compute the molecular mass of acetaminophen. Is the element with the largest atom count also the element with the largest mass percent?
- What is the limiting reactant and theoretical yield when 24.8 grams of white phosphorus and 0.200 moles of oxygen react to form 10.0 grams of phosphorus pentoxide?

Whether you’re studying for upcoming final exams, puzzling out homework or just looking for a refresher, chemical reactions are one of many chemistry topics covered by the Wolfram|Alpha knowledgebase. Next week we’ll cover step-by-step solutions for chemical solutions, followed by structure and bonding, and then quantum chemistry. If you have suggestions for other step-by-step content (in chemistry or other subjects), please let us know! You can reach us by leaving a comment below or sending in feedback at the bottom of any Wolfram|Alpha query page.

Computational thinking is an increasingly relevant and important skill to develop. The ability to break down problems into their component parts, and to piece together a solution quickly and accurately, is important for a variety of careers and pursuits in the 21st century. Even more important, perhaps, is that this skill enables you to express ideas clearly enough so that even a computer can understand them.

My role at Wolfram Research focuses on building programs to explore and learn computational thinking and coding using Wolfram technologies, primarily for high-school and college students. After mentoring at the Wolfram High School Summer Camp in 2019, I worked to build the Wolfram Emerging Leaders Program, a semester-long deep dive into computational thinking for Summer Camp alumni.

The Wolfram High School Summer Camp is a two-week, project-based program aimed at talented high schoolers from around the world. The program usually takes place in person, but in response to the COVID-19 pandemic, 2020’s Summer Camp will be a fully digital experience. Wolfram employees are seasoned experts at working and teaching remotely, and we’re looking forward to using that expertise while running our summer programs! The first few days are devoted to getting to know the Wolfram Language. Through a combination of traditional classroom lessons, active-learning exercises and coding challenges, students quickly learn how to translate their ideas into something computable—the core skill of computational thinking.

Once students have grasped the fundamentals of the Wolfram Language and developed their computational thinking skills in a controlled environment, they are unleashed on an independent project for the duration of the camp. With the support of an expert mentor, students take their projects from a short description, worked out and agreed upon with Stephen Wolfram, to a finished product.

Our students come from a variety of educational backgrounds: some have taken computer science classes at school; some have learned to code on their own; some have never coded at all before applying to the camp. And for many students, it’s the first time they’ve had an opportunity to immerse themselves in a project for any length of time. Oftentimes, it’s also the first chance they’ve had to control project outcomes.

The students create amazingly high-quality projects at the intersection of their own passions and rapidly expanding Wolfram Language skills. In 2019, projects ranged from automatically identifying and displaying meter in Latin poetry, to tracking movement in a squash game, to tricking neural networks into identifying images incorrectly. The variety in projects was further evidence for me of the extraordinary talent pool the camp brings in.

The Wolfram Emerging Leaders Program, affectionately nicknamed WELP, takes a selection of the Summer Camp students and asks them to join us for 14 weeks to complete a remote group project. If the Wolfram High School Summer Camp is a crash course in computational thinking, then the Wolfram Emerging Leaders Program is a deep dive into project work, team management and long-term development.

The first thing we do is speak to all the students about their interests and split them into groups consisting of three to five students. The focus of the program is entirely on the students and getting them thinking, so they are given deliberately vague descriptions of why they are placed together.

WELP is broken into three stages, each roughly corresponding to IDEO’s design thinking steps: ideation, iteration and implementation. Design thinking frees students from the fear of having their ideas judged, creating a sense of openness that allows even late-in-the-game ideas to emerge as the strongest project goals. This contrasts with some traditional classroom settings, where students stick with less-thought-out ideas simply out of fear of “being wrong.”

The objective of the ideation stage is for students to come up with a core goal for their projects, and to figure out what they want to achieve. At the end of this phase, the groups should have a concise problem statement: a sentence or two describing the issues they want to solve. This generally starts by looking at problems they see in the real world and the capabilities of Wolfram technologies in addressing those.

After project statements are approved, the teams move on to iteration—generating solutions to the problem identified in the previous phase. This phase motivates students to use divergent thinking skills to come up with dozens of potential solutions, then to work their way down to a single solution that carries through to the end of the program.

In order to create a final project draft—or minimum viable product (MVP)—students are encouraged to do “quick and dirty” coding to create short-form, experimental solutions for several of their ideas.

The MVPs go a long way toward addressing the problem, but by the end of this phase, the groups have fully realized projects. By this point they are more sure of their ability to execute on their solutions than they had been in the ideation stage.

By the end of the 2019 program, four out of five groups had produced high-quality computational essays. The fifth group had created a series of science communication videos in which they explored a variety of topics in a livecoding format.

One group identified gerrymandering, the deliberate redistricting of voting zones to advantage one political party over another, as an issue they wanted to explore. By generating hundreds of randomly districted graphs, this group used several established methods of detecting gerrymandering to attempt to find a baseline for what maps could reasonably be achieved by randomness, and what should be looked at more closely as an attempt to gerrymander.
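One widely used metric for detecting gerrymandering (not necessarily the one this group used) is the efficiency gap, which compares each party's "wasted" votes: all votes cast for a losing candidate, plus winning votes beyond the threshold needed to win. A simplified Python sketch, using half the district total as the winning threshold where formal definitions use 50% plus one vote:

```python
def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b) pairs, one per district.
    Positive result means the map wastes more B votes, favouring party A."""
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in districts:
        threshold = (votes_a + votes_b) / 2  # simplified winning threshold
        total += votes_a + votes_b
        if votes_a > votes_b:
            wasted_a += votes_a - threshold  # surplus winning votes
            wasted_b += votes_b              # all losing votes
        else:
            wasted_b += votes_b - threshold
            wasted_a += votes_a
    return (wasted_b - wasted_a) / total

# A "packed and cracked" toy map: A wins two districts narrowly,
# while B's voters are packed into one
print(efficiency_gap([(60, 40), (60, 40), (30, 70)]))
```

Running such a metric over hundreds of randomly generated maps gives exactly the kind of baseline the group was after: values far outside the random range suggest a map worth a closer look.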

Another group decided that they wanted to learn new data science skills, working their way through the data science pipeline of gathering, cleaning, analyzing and predicting with data.

The students learned a lot from this project, not only about the theory behind data analysis and prediction but also about the challenges of applying theory to a real problem.

This group decided to study the spread of disease in a closed population. The traditional way of addressing this problem is setting up a series of coupled equations that model the number of susceptible, infected and recovered individuals in a population.

The group decided to use non-deterministic cellular automata to model their spread of disease. They also made the interesting leap of using parameters for distances, vaccination rates and other measures found in the real world.
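A non-deterministic (probabilistic) cellular automaton for disease spread can be surprisingly compact. This Python toy, with my own parameters and far simpler than the group's model, puts one infected cell at the centre of a grid; at each step, every infected cell infects each susceptible neighbour with some probability and recovers with another:

```python
import random

def simulate(size=21, p_infect=0.3, p_recover=0.1, steps=60, seed=0):
    """Return (susceptible, infected, recovered) counts after `steps`."""
    random.seed(seed)
    S, I, R = 0, 1, 2
    grid = [[S] * size for _ in range(size)]
    grid[size // 2][size // 2] = I   # one infected cell in the centre
    for _ in range(steps):
        nxt = [row[:] for row in grid]
        for y in range(size):
            for x in range(size):
                if grid[y][x] != I:
                    continue
                if random.random() < p_recover:
                    nxt[y][x] = R    # recover (still infectious this step)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx2 = y + dy, x + dx
                    if (0 <= ny < size and 0 <= nx2 < size
                            and grid[ny][nx2] == S
                            and random.random() < p_infect):
                        nxt[ny][nx2] = I
        grid = nxt
    flat = [cell for row in grid for cell in row]
    return flat.count(S), flat.count(I), flat.count(R)

print(simulate())
```

Swapping the fixed probabilities for real-world parameters, distances, vaccination rates and so on, is exactly the leap the group made.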

This group wrote some very nifty code to produce visualizations of how topics progress through a text. Originally intended to track lectures or podcasts, the group successfully extrapolated into more generalized text documents. The program takes a text and outputs a series of visualizations showing how the topics progress.

With this distinctive use of natural language processing, the students managed to create useful and interesting visualizations that allow the user to see the story play out.
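As a rough idea of the underlying approach (a simplified keyword-counting stand-in for the group's natural language processing), you can split a text into chunks and count topic-keyword hits per chunk, giving a matrix that can then be visualized as topic progression:

```python
def topic_progression(text, topics, chunk_size=50):
    """Count keyword hits per topic in consecutive word chunks.
    Returns one row per chunk, one column per (sorted) topic name."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    chunks = [words[i:i + chunk_size] for i in range(0, len(words), chunk_size)]
    names = sorted(topics)
    return [
        [sum(w in topics[name] for w in chunk) for name in names]
        for chunk in chunks
    ]

topics = {
    "animals": {"cat", "dog", "bird"},
    "food": {"apple", "bread", "cheese"},
}
text = "The cat chased the dog. " * 10 + "Then we ate bread and cheese. " * 10
matrix = topic_progression(text, topics, chunk_size=50)
print(matrix)  # topic intensity shifts from animals to food
```

Each row of the matrix is one slice of the document, so plotting the columns over the rows shows each topic rising and falling as the story plays out.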

I really enjoyed mentoring the 2019 WELP students as part of this program. Seeing them work hard to deliver a project was gratifying, and I was proud to help them further develop both personally and academically.

As demonstrated by the 2019 projects, students learn several important skills over the course of this program. First, their Wolfram Language coding skills and content knowledge improve dramatically, which is most noticeable from MVP to final project. Second, they gain further experience in working as part of a remote team—a skill that they will use with increasing regularity in the workplace and further education. In future years, when the Summer Camp is held in person again, WELP may be their first experience working as part of a remote team. Third, their computational thinking skills grant insights for deeper thinking and problem solving.

Several of the WELP students from 2019 will be attending the Summer Camp in 2020 as teaching assistants, so it will be interesting to see these new skills in action. They will be helping to deliver casual learning opportunities specifically designed to improve the computational thinking skills of a new camp generation. Want to see more from WELP students and other Wolfram-sponsored student programs? Browse Wolfram Community’s group for student leadership programming, including projects from the Wolfram Student Ambassadors Program and more.

Want to increase your programming and computational thinking skills? Apply to the Wolfram High School Summer Camp, a unique opportunity for entrepreneurial and technical high-school students to explore science and technology.
