We can guess if you’re reading the Wolfram Blog that you’re probably a Wolfram Language user, whether as a recreational programmer, a physics professor or a high-powered data scientist. And let’s be honest about another thing: if you’re using it to solve algebraic integrals or analyze SARS-CoV-2 genetic sequences or some other complex subject, you’re likely a big-brained person. I mean, you’re investigating the very nature of the universe in all its facets, right?
So what’s the problem? Well, there isn’t one—except maybe all those people (well-meaning friends, family members or even colleagues) who have no understanding of what you do. In their minds, it’s all just a big combination of ones and zeros with no place for the good, the true or the beautiful.
“Where’s the art?” they say. “Where’s the music? The poetry? How come your fancy-pants computational language can’t do anything with that?”
And they could very well be right… except for one critical factor: they’re not. If you’re talking with a liberal arts major—or worse, an angry mob of them, with metaphorical torches and pitchforks—you can find common ground by using the very foundations of classical learning contained in the trivium and the quadrivium.
You might think I’m wrong… but I’m not. The proof is in the evidence, so let’s take a close look at some Wolfram Community posts that exemplify classical liberal arts subjects.
The idea of the trivium—grammar, logic and rhetoric—dates back to Plato’s dialogues, but the term itself was not used until the eighth-century Carolingian Renaissance. (And for someone like my dad, who wrote his dissertation about a seventeenth-century Andrew Marvell poem, it’s pretty much the be-all and end-all of higher education.) Wolfram Community has posts, however, that directly relate to all three subject areas.
Wordle took the world by storm at the end of 2021 and remains popular, even after losing some of its luster when it was sold to the New York Times earlier this year. The premise is simple enough: you have six guesses to figure out the word of the day. A yellow letter means you guessed the right letter, but it’s in the wrong position. A green result means both the right letter and the right spot.
The result is a game that forces you to think about how words are constructed. For example, what’s the best word to use for your first guess? And things get tougher as you eliminate more letters with each guess. How many consonant pairs remain? How many vowels? How can you best utilize your remaining guesses?
Unfortunately, you can only play the official Wordle game once a day. Many people want to play more often than that, including David Reiss, so one slow Sunday evening he decided to create his own facsimile version: MWordle. Using only a couple hundred lines of code, Reiss demonstrated the power of the Wolfram Language as well as its flexibility for recreational computing.
I’d always thought of computer logic and rhetorical logic as two completely separate things. After all, computers ultimately use zeros and ones, while building a logically sound argument depends on kairos—appropriateness to audience—and based on that, deciding whether to use ethos, pathos or logos for your appeal. That is to say, rhetorical arguments seemed to be much gooier than computer logic.
Then I came across this Wolfram Community post about ternary logic tables. Instead of just true or false, there is a third logical condition, which can be described as a “strong logic of indeterminacy” (Kleene) or a “logic of paradox” (Priest). The only consideration I’d had about a condition other than true or false was in an episode of Futurama, when the robot Bender wakes up from a nightmare and moans, “Ones and zeros everywhere… and then I thought I saw a two.” Who’d a thunk, right?
Another interesting thing about this post is it was written when Taha Shaikh was a student at the 2017 Wolfram High School Summer Camp. That was another thing that I didn’t know about Wolfram when I started working here: our significant allocation of resources devoted to training the next generation of scientists. From Wolfram U open online courses to Daily Study Groups to summer camps, Wolfram is always working to help grow the number of people able to perform computational analysis.
As useful as it is to know how to construct a sound argument, it’s just as important—if not more so—to know how to critically analyze other rhetorical objects: a journal article, a news story, an advertisement. The lack of sound critical thinking is one of the greatest weaknesses in modern American discourse, because people are more concerned with how “things” make them feel—often without hard evidence as to why—than with understanding how those rhetorical objects are constructed.
One of the great strengths of computational analysis is the ability to identify and filter various types of information. Sure, an investigation may have to do with algebraic integrals, but as Aryan Dawer’s post shows, it also includes analysis of rhetorical and visual strategies in television advertisements. By combining dialogue and image data as it appears in the ad’s timeline, the resulting analysis reached interesting, empirical conclusions about how fear and happiness are used at different points to promote the purchase of products.
The comments at the bottom of his post also illustrate another strength of Wolfram Community: these computational explorations are starting points for wider discussions. One commenter pointed out that adding some statistical analysis could result in a journal-worthy article. Even better, he points to a similar example using computational analysis to examine gender bias in movies.
Another paradox with someone getting on their high horse about the trivium—à la “Who needs math when you know what rhetoric is and how it works?”—is that the quadrivium, the upper division of liberal arts, is all about numbers: arithmetic, geometry, music and astronomy.
Gosper, a longtime Wolfram Community member and contributor, first tells the readers of this post to watch the following Derek Muller video “How Imaginary Numbers Were Invented”:
To be honest, Gosper and Muller pretty much lost me after “Mathematics began as a way to quantify our world…”. But that’s likely because in the quadrivium, arithmetic is the study of numbers in the abstract, so I don’t have much to hold onto with the discussion of imaginary and real numbers.
Again, however, I think what’s key here is the way Wolfram Community isn’t an insular, circumscribed world cut off from outside discourse and discussion. Instead, Gosper uses his post to respond to Muller’s video and notes that “Derek wisely avoided the following wrinkle: [a] cubic can have all three roots real, yet inexpressible with real radicals!” Gosper then goes on to explore the issue laid out in his title: no way to express these real numbers with real radicals.
As a teacher, I always encouraged my students to frame their points and ideas within the larger discussions relating to their subjects. Otherwise, they risked reinventing the wheel or taking a position already easily proven (or disproven). The production of new knowledge is often a collaborative process, which Gosper shows twice: first by responding to Muller’s video and second by doing so on Wolfram Community.
Geometry is the study of numbers in space, which first leads me to think of using rulers and compasses to draw endless triangles, quadrilaterals and circles. Instead, what Hoffman does in his Wolfram Community post is first use the Wolfram Language to generate a variety of custom clock faces. Then, he combines and “melts” them to create his final image: an homage to Surrealist artist Salvador Dalí.
Behind Hoffman’s image itself, however, is the reason he made it: to enter Wolfram’s 2022 Computational Art Contest. Much as poetry slams—competitive poetry contests that began in the 1980s—reinvigorated moribund poetry readings, computational art contests like this one allow for the creative—and competitive!—expression of math beyond numbers on a page.
Music has the power to provoke strong emotional responses in listeners, but at its heart, it’s the study of numbers in time. That means the Wolfram Language has the power to create musical audio output, as in Chaib’s post, which includes 30 different musical scales.
Beyond scales alone, however, there are multiple examples at Wolfram Community where people have used the Wolfram Language to create their own musical compositions. One example is “Generating Music with Expressive Timing and Dynamics,” which algorithmically generates original music using handcrafted rules and a custom neural network, and also includes dynamics in loudness and human-like timing mistakes.
Another thing to note is a comment at the bottom of Chaib’s Wolfram Community post from Daniel Lichtblau, who works in Wolfram’s Research and Development division. More than just offering praise for Chaib’s code, Lichtblau suggests he add some documentation and submit it to the Wolfram Function Repository. And in 2019, that’s what he did, so now anyone can use the MusicalScaleSample function.
Astronomy incorporates the first three elements in the quadrivium by studying numbers in both time and space. And Jeff Bryant’s post about the appearance of Comet C/2020 F3 (NEOWISE) showcases what the Wolfram Language can do in multiple ways.
At the beginning of his Wolfram Community post, he used the Wolfram Language to import a picture of the comet he took at the Champaign-Urbana Astronomical Society’s Prairie Winds Observatory. Then, he used multiple resource functions to visualize the comet’s orbit when it was closest to Earth. Finally, he called Wolfram|Alpha for a “more earthly view” of the comet’s path using a star chart. Numbers in space and time, indeed!
But this capability is not limited to comets. As an example of how Wolfram Community members build on each other’s work, this post modifies Bryant’s code to produce a six-month star chart for the UK. And it’s not limited to the Earth, either. In another post, Bryant plots a close encounter between a comet and Mars.
For the sake of full disclosure, I’ll be the first to admit that I’m an anomaly compared to most of my Wolfram colleagues. Instead of a science background, I have a couple of degrees in English… and the last math class I took was as a freshman in college. So I have a lot more in common with liberal arts majors than big-brained scientists.
That said—and while I first thought I would be a stranger in a strange land when I arrived at Wolfram—that’s not turned out to be the case. Instead, after I discovered the full range of classical liberal arts subjects on display at Wolfram Community, I saw the connections to the world I’ve lived in for most of my life.
So I’m living proof there’s hope for those people who don’t understand what you’re doing with the Wolfram Language… you just have to meet them on their own terms.
This Wolfram U computational explorations course examines a range of disciplines not traditionally associated with coding. 
In many physics experiments, a voltage or current is desired that quickly rises to a particular value, stays there for a duration of time and then declines rapidly, giving the so-called flat-top profile or square wave.
This has multiple applications in many physics and electrical engineering–related systems, including radar, kicker magnets for accelerators and really any time a pulsed uniform voltage or current is needed. In my case, I needed this capability for a metal vapor vacuum arc plasma source that I’m using to study the properties of metallic plasmas in strong magnetic fields.
In this blog post, I’ll walk you through some pulse-forming network theory and show how I used the Wolfram Language—circuit theory, the interactive Manipulate function and data from an electronics vendor—to quickly and easily design a cost-effective pulse-forming network while exploring practical design options. This will also show off the Quantity function in the Wolfram Language, which has proven helpful and easy to use.
When people consider pulsed-power applications, the natural and easy solution that comes to mind is to connect a capacitor in series with the load, charge it and then discharge it. Assuming the capacitor’s inductance and series resistance are low, a large current can be created, but that current will rapidly (and exponentially) decay.
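As a quick sketch of that exponential decay, here is the standard RC discharge relation in Python; the component values are illustrative choices, not values taken from this project:

```python
import math

def rc_discharge_current(t, v0, r, c):
    """Current through a resistive load as a charged capacitor
    discharges into it: I(t) = (V0 / R) * exp(-t / (R * C))."""
    return (v0 / r) * math.exp(-t / (r * c))

# Illustrative values: 800 V across a 1.5-ohm load from a 500 uF capacitor.
v0, r, c = 800.0, 1.5, 500e-6
tau = r * c  # time constant of the discharge
peak = rc_discharge_current(0.0, v0, r, c)
print(peak)                                 # sharp initial peak of V0/R
print(rc_discharge_current(tau, v0, r, c))  # down to ~37% of the peak after one tau
```

The point of the sketch is the shape: no matter what values you pick for R and C, the curve is a sharp peak followed by exponential decay.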
What do we do to “flatten” the peak? Increasing the resistance or capacitance will stretch the previous figure horizontally or vertically, but no choice of resistance or capacitance will alter the exponential shape of the discharge curve.
It is worth pointing out that there is one (conceptually) simple solution here: use a very, very large capacitor that discharges only a minor fraction of its stored energy over the desired pulse width, with a switch to disconnect the circuit after the desired duration.
While this indeed would produce a very flat profile, it requires capacitances so large that only electric double-layer capacitors (also called supercapacitors) would work. Supercapacitors often have maximum voltages of around 2.7 V, requiring a large number of them in series to get up to a larger voltage. Placing capacitors in series adds the respective equivalent series resistance (ESR) of each capacitor as well as their inductances, often severely limiting the peak current.
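To see why this is impractical, here is the arithmetic in Python; the per-cell ESR is an assumed, illustrative figure, not a measured one:

```python
import math

# Rough feasibility check for the series-supercapacitor approach.
target_voltage = 800.0  # working voltage needed for this application
cell_voltage = 2.7      # typical supercapacitor maximum, per the text
cell_esr = 0.5e-3       # hypothetical 0.5 mOhm ESR per cell

n_series = math.ceil(target_voltage / cell_voltage)
total_esr = n_series * cell_esr  # series ESRs simply add

print(n_series)   # roughly 300 cells needed in series
print(total_esr)  # the stacked ESR alone throttles the peak current
```

Even with an optimistic per-cell ESR, stacking hundreds of cells adds enough series resistance (and inductance) to defeat the purpose of a high-current pulse.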
Semiconductor switches are also limited in their ability to interrupt flowing currents, although top-of-the-line transistors, like the IXTN660N04T4 (~$21), can switch around 700 A at approximately 40 V. That may work for some applications, like an electromagnet system that requires modest voltages but high currents, but for most applications this will be prohibitively expensive and still have poor performance.
Getting back to the question “How do we ‘flatten’ the peak of a capacitor’s discharge curve?”, the answer is simply to use inductors. An inductor is often just a wound coil of wire, and it resists changes in the current through it by storing electrical energy in the form of a magnetic field. This is commonly contrasted with a capacitor, which stores electrical energy in an electric field.
One of the most famous and important circuits of all time is the resistor–inductor–capacitor (RLC) circuit. If the resistance, inductance and capacitance are tuned properly, you can get resonant behavior in which the capacitor and inductor are alternately charging and discharging.
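For reference, the underdamped series RLC discharge has a simple closed form; here is a Python sketch with illustrative component values (not this project’s):

```python
import math

def rlc_underdamped_current(t, v0, r, l, c):
    """Current in a series RLC circuit discharging an initially charged
    capacitor (underdamped case): I(t) = (V0 / (w*L)) * exp(-a*t) * sin(w*t)."""
    a = r / (2 * l)              # damping rate
    w0 = 1 / math.sqrt(l * c)    # undamped resonant angular frequency
    w = math.sqrt(w0**2 - a**2)  # damped angular frequency
    return v0 / (w * l) * math.exp(-a * t) * math.sin(w * t)

# Illustrative, lightly damped values: the discharge rings sinusoidally.
v0, r, l, c = 800.0, 0.05, 250e-6, 500e-6
f0 = 1 / (2 * math.pi * math.sqrt(l * c))
print(round(f0))  # resonant frequency in Hz for these values
```

With low resistance, the capacitor and inductor trade energy back and forth at the resonant frequency, which is the sinusoidal behavior the next section builds on.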
Wolfram|Alpha has some powerful functionality that simulates an RLC circuit and computes its properties:
This sinusoidal charge–discharge curve is really important and very useful to what we are building up to. In mathematics, you may have heard of a Fourier series, the idea behind which is that any periodic function can be closely approximated by a series of superimposed sinusoidal functions.
The pulse shape we are trying to generate is a square wave, and therefore we can use the Fourier series deconstruction of a square wave (or at least the first N terms) to determine a number of RLC circuits in series that approximate a square wave when discharged:
One important note: using the Fourier series approximation to produce a perfect square wave has one serious downside, the Gibbs phenomenon, which more or less says that the edges of the approximation will have significant overshoot, and adding more terms does not improve this issue. Some bright physicist determined that using the Fourier series of a trapezoidal wave rather than a square wave suitably solves these issues.
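The Gibbs overshoot is easy to see numerically. This Python sketch sums the odd-harmonic Fourier series of a unit square wave; the sampling grid and term counts are arbitrary choices for illustration:

```python
import math

def square_wave_partial_sum(t, n_terms, period=1.0):
    """Partial Fourier sum of a unit square wave:
    (4/pi) * sum over odd k of sin(2*pi*k*t/period) / k."""
    s = 0.0
    for k in range(1, 2 * n_terms, 2):  # odd harmonics only
        s += math.sin(2 * math.pi * k * t / period) / k
    return 4.0 / math.pi * s

# Scan the first half-period for the maximum of the partial sum.
peak_10 = max(square_wave_partial_sum(t / 10000.0, 10) for t in range(1, 5000))
peak_100 = max(square_wave_partial_sum(t / 10000.0, 100) for t in range(1, 5000))
print(peak_10, peak_100)  # both overshoot well past 1.0 near the edges
```

The overshoot near the jump stays at roughly the same fraction of the step whether we use 10 terms or 100, which is exactly the Gibbs phenomenon described above.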
All right, so that’s probably enough theory. To recap, simply discharging a capacitor into a resistive load will give us a sharp rise with exponential decay, and adding an inductor will turn that discharge curve into a sinusoidal shape. Superimposing multiple sinusoidal discharges can create a pretty good approximation to an ideal “square” discharge curve.
How should we arrange these capacitors and inductors? As it turns out, there are a lot of different forms of pulse-forming networks, usually known by a letter. Many of these have a particular advantage; for example, the type D pulse-forming network uses capacitors of identical capacitance:
Given these variations, you’ll have to consider your application in great detail to decide which one is best, and that usually comes down to a decision of practicality (i.e. cost and ease of construction).
Depending on the voltage, current, rise time and pulse-width requirements of a particular application, difficulties can be found in a number of places. For fast rise times, the issue may be switching (a topic outside the scope of this blog post), or even the inductance of the capacitors. For high-voltage applications involving high charge transfer, finding suitable capacitors may be very difficult and expensive. It really depends on the application as well as available capacitor and switching technology. One last note: for most practical purposes, there are few benefits to using more than five sections.
For the remainder of the post, we’ll talk about pulse-forming networks in the context of a particular project: an upgrade I am making to a metal vapor vacuum arc plasma source I built to study various aspects of metal plasmas.
For this application, the switching is performed by the initiation of a vacuum arc by a Marx generator—if you’d like an article about Marx generators, mention it in the comments—so switching is not an issue. The main design considerations are that the rise time should be as fast as possible; that the current should exceed 100 A throughout the pulse; and that the homogeneity should be reasonably high. The voltage is modest—no more than 800 V—and many film capacitors exist that can satisfy the desired low ESR and equivalent series inductance (ESL) requirements.
The major design difficulty will be the proper arrangement and choice of capacitors and inductors (necessarily a tradeoff between complexity, cost and performance), and the major implementation difficulty will be in making the inductors. Commercially available inductors that can handle the expected peak currents—possibly in excess of a kiloampere—are prohibitively expensive, but making a large coil of thick wire will produce an inductor with high peak-current-handling capabilities on a budget, so long as the needed inductance is low. The desired rise time for this application is less than 500 ns, and the pulse width, counted as the period where the current exceeds 100 A, should be around 500 µs.
One final caveat: a little while ago, I picked up two giant power film capacitors in an auction. They are individually able to deliver a surge current of over 20,000 A, and for cost reasons, I’d like to use them rather than buy all-new capacitors.
As far as I am aware, there is only one report on the internet of someone making a DIY pulse-forming network, to construct an amateur radar assembly.
While the application considered here is not too complex, there are many circumstances, including very fast rise times, large charge transfer and high voltages, that make professional pulseforming network design challenging. That being said, there’s no reason why they are any more difficult to build than various other resonant circuits.
A word of caution: any of the things discussed here could be potentially dangerous if mishandled, so don’t play around with high voltage. The voltages discussed here are potentially lethal and should only be handled by competent, cautious and safetyaware people.
When considering the aforementioned requirements, the natural first choice is the type B pulse-forming network. It is typically the choice when you don’t want or need mutual inductance between the various inductors, and it has the neat feature of having two capacitors (the leftmost ones) with reasonably similar capacitances, where my two 250 µF capacitors could go:
This is where the first difficulty is encountered: capacitors typically come in standard capacitance values (250 µF, for example, not 263 µF). There are capacitors that have odd capacitances, but they are sufficiently unusual that we have to design around the available values:
This diagram shows the ideal ratios of the capacitors, and right off the bat it doesn’t look to be too bad of a fit. The middle capacitor would need to be 300 µF, followed by 350 µF and 800 µF, respectively, but that’s not too far off. If you wanted really precise values, you could use multiple capacitors in parallel to form an equivalent capacitor of some fractionally higher value, but for this project you’ll soon see why that introduces unreasonable complexity and cost.
So how would this circuit perform? Using the previous diagram along with known load parameters, I simulated the circuit in Falstad:
When charged and discharged, it produces the following discharge pattern:
While there is good homogeneity, there are four downsides to this arrangement:
Now we can get into the real “meat” of this blog: an interesting electrical engineering problem that is both theoretical and practical. Can the Wolfram Language help us make a costeffective decision?
To do this, let’s get some realworld data about the price and capability of existing capacitors. Then we can create a circuit simulator for these pulseforming networks to give us some key information about the performance, cost and complexity of a circuit. Then we can have it simulate all possible circuits (within reason) and give us the topranked ones.
Easier said than done, but it’s a nice computational approach and might give us an unexpected result. It would be too tedious to go through and try to figure this out manually, but it will likely take a modern processor seconds or minutes to step through all the permutations.
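As a toy version of that search, the following Python sketch enumerates combinations from a hypothetical, made-up capacitor catalog and ranks the ones meeting a capacitance target by cost; the actual analysis in this post uses the Wolfram Language and real DigiKey data:

```python
from itertools import combinations_with_replacement

# Hypothetical catalog: (part name, capacitance in uF, price in USD).
# Placeholder entries for illustration, not DigiKey data.
catalog = [
    ("cap_A", 500, 40.0),
    ("cap_B", 680, 55.0),
    ("cap_C", 1200, 60.76),
]

def rank_designs(target_uf, max_parts=3):
    """Enumerate every multiset of up to max_parts parallel capacitors,
    keep those meeting the capacitance target, rank cheapest first."""
    designs = []
    for n in range(1, max_parts + 1):
        for combo in combinations_with_replacement(catalog, n):
            total_c = sum(c for _, c, _ in combo)
            total_cost = sum(p for _, _, p in combo)
            if total_c >= target_uf:
                designs.append((total_cost, total_c, [name for name, _, _ in combo]))
    return sorted(designs)

best = rank_designs(1000)[0]
print(best)
```

Even this brute-force enumeration is instantaneous at catalog scale, which is why stepping through all the permutations is a reasonable strategy.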
I’ve found that the online electronics distributor DigiKey has some of the best search tools, and they allow you to download large tables of data about their products. I went to their page on film capacitors and filtered out those with a maximum working voltage under 800 V, as well as capacitors with a capacitance under 500 µF. I then downloaded a CSV file of the remaining 151 capacitors.
One of the best but rarely mentioned aspects of the Wolfram Language is that it’s great for scraping and homogenizing data. That’ll be handy as we import the raw capacitor data and clean it up:
Here are the fields and the first capacitor:
Let’s first filter out capacitors that aren’t in stock or that have a minimum order quantity greater than four:
Only 46 capacitors are left. What about price? If a capacitor is too expensive, we shouldn’t consider it:
Let’s consider capacitors with a price below $125 because we will probably need two or three of them:
As a final step to pare down the list of capacitors, let’s allow only one capacitor per capacitance class. The ESR of all of these options is quite low, so it’s not an important factor:
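Taken together, the filtering steps above can be sketched in plain Python; the rows and field names here are invented stand-ins for the DigiKey CSV, not actual catalog data:

```python
# Made-up rows standing in for the imported capacitor data.
rows = [
    {"part": "cap1", "stock": 12, "moq": 1,  "price": 60.76,  "uf": 1200},
    {"part": "cap2", "stock": 0,  "moq": 1,  "price": 80.00,  "uf": 1200},
    {"part": "cap3", "stock": 5,  "moq": 10, "price": 45.00,  "uf": 500},
    {"part": "cap4", "stock": 7,  "moq": 1,  "price": 200.00, "uf": 900},
    {"part": "cap5", "stock": 9,  "moq": 2,  "price": 110.00, "uf": 1200},
]

# In stock, minimum order quantity of four or fewer, priced under $125.
kept = [r for r in rows if r["stock"] > 0 and r["moq"] <= 4 and r["price"] < 125]

# One capacitor per capacitance class: keep the cheapest at each value.
cheapest = {}
for r in sorted(kept, key=lambda r: r["price"]):
    cheapest.setdefault(r["uf"], r)

print(sorted(c["part"] for c in cheapest.values()))
```

Each filter is a simple predicate, which is why the same pipeline pares 151 catalog entries down to a short list so quickly.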
Finally, let’s get rid of the data we don’t really care about and get our final dataset:
An interesting way to visualize the cost-effectiveness of these capacitors is to examine their capacitance per dollar. In this respect, one capacitor in particular has a large advantage: the B25690A0128K903 offers 1200 µF for only $60.76:
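The capacitance-per-dollar comparison is simple arithmetic. In this Python sketch, the B25690A0128K903 figures come from the text above, while the comparison part is made up for scale:

```python
# Capacitance (uF) and price (USD) per part.
parts = {
    "B25690A0128K903": (1200, 60.76),  # figures quoted in the post
    "hypothetical_cap": (500, 45.00),  # invented comparison point
}

uf_per_dollar = {name: uf / usd for name, (uf, usd) in parts.items()}
best = max(uf_per_dollar, key=uf_per_dollar.get)
print(best, round(uf_per_dollar[best], 2))  # ~19.75 uF per dollar
```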
In order to optimize for complexity and cost, we’re going to consider a two-section type B pulse-forming network, with one caveat: to get very fast rise times, the two 250 µF capacitors will be attached in series with the load along with a current-limiting resistor. The circuit diagram looks like this:
The load is on the left, connected via a current-limiting resistor (500 mΩ) to the two 250 µF capacitors I already have. On the right is the behemoth 1200 µF capacitor (which we established as the most cost effective), along with a 130 mΩ current-limiting resistor and a 250 µH inductor, which shapes the rise of the pulse. When discharged, this circuit produces the following output over one millisecond:
Is this a square wave? Not really, but it keeps the current within ±2.5% of a single value (510 A) for one millisecond; take a look at the current rise:
The current rises to the central value in under 180 ns. And this configuration costs all of $61… I think we have a winner.
The last thing we’ll need to do is design a custom inductor. The design we figured out calls for a 250 µH inductor, and we’d like it to be able to handle some very serious current (1 kA+) for 30 ms, in order to have some safety margin. Looking at an American wire gauge (AWG) chart, we can see that any thick copper wire above 8 gauge will do:
I have some short spare lengths of #0 AWG wire, so that’s what I’ll be using to make this custom pulse inductor.
At its core, an inductor is most commonly a coil of wire. There are publicly available formulas to calculate the inductance of such a coil, and we can use them to figure out how to make a 250 µH inductor. Wolfram|Alpha has a feature that allows you to calculate the inductance of a coil of wire:
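One such hand formula is Wheeler’s well-known approximation for a single-layer air-core coil. This Python sketch uses hypothetical coil dimensions, not the dimensions of the inductor actually built for this project:

```python
import math

def wheeler_inductance_uh(radius_in, length_in, turns):
    """Wheeler's approximation for a single-layer air-core solenoid.
    Radius and length in inches; result in microhenries.
    Accurate to about 1% when the coil is longer than ~0.8 radii."""
    return (radius_in**2 * turns**2) / (9 * radius_in + 10 * length_in)

def turns_for_inductance(target_uh, radius_in, length_in):
    """Invert Wheeler's formula for the required number of turns."""
    return math.sqrt(target_uh * (9 * radius_in + 10 * length_in)) / radius_in

# Hypothetical dimensions: a coil 2 inches in radius and 6 inches long.
n = turns_for_inductance(250.0, 2.0, 6.0)
print(round(n))  # roughly 70 turns for 250 uH at these dimensions
```

Playing with the radius, length and turn count in a formula like this is the same iteration the Wolfram|Alpha coil calculator automates.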
After some playing with it, we can get the parameters needed for a coil to supply the required inductance. The end result looks like this:
I hope you enjoyed this post. And as for my experiment to study the properties of metallic plasmas in strong magnetic fields that required a pulse-forming network? The results were primarily visual spectra of the metallic plasmas and measurements of their helical paths, which can have unusual instabilities and self-defocusing due to collisions. The pulse-forming network produced a reasonably homogeneous stream of plasma flow that reduced noise and uncertainty in the experiment.
I think pulse-forming network design is interesting because of the union of mathematical theory, physics and electrical engineering. The computational approach to finding optimal electrical components is powerful, and I’ve used it often in other scientific projects.
For those who are interested, here are links to two helpful sites I used for this project:
Visit Wolfram Community or the Wolfram Function Repository to embark on your own computational adventures! 
Based on the number of new built-in functions, the clear winner for the largest new framework in Version 12.3 is the one for trees. We’ve been able to handle trees as a special case of graphs for more than a decade (and of course all symbolic expressions in the Wolfram Language are ultimately represented as trees). But in Version 12.3 we’re introducing trees as first-class objects in the system.
The fundamental object is Tree:
Tree[a, {b, Tree[c, {d, e}], f, g}]
Tree takes two arguments: a “payload” (which can be any expression), and a list of subtrees. (And, yes, trees are by default rendered slightly green, in a nod to their botanical analogs.)
There are a variety of “*Tree” functions for constructing trees, and “Tree*” functions for converting trees to other things. RulesTree, for example, constructs a tree from a nested collection of rules:
RulesTree[a -> {b, c -> {d, e}, f, g}]
And TreeRules goes the other way, converting a tree to a nested collection of rules:
TreeRules[%]
ExpressionTree creates a tree from the structure of an expression:
ExpressionTree[Integrate[1/(x^2 - 1), x]]
In a sense, this is a direct representation of a FullForm expression, as shown, for example, in TreeForm. But there are also other ways to turn an expression into a tree. This one takes the nodes of the tree to contain full subexpressions—so that the expressions on a given level in the tree are essentially what a function like Map would consider to be the expressions at that level (with Heads → True):
ExpressionTree[Integrate[1/(x^2 - 1), x], "Subexpressions"]
Here’s another version, now effectively removing the redundancy of nested subexpressions, and treating heads of expressions just like other parts (in “Sexpression style”):
ExpressionTree[Integrate[1/(x^2 - 1), x], "Atoms"]
Why do we need Tree when we have Graph? The answer is that there are several special features of trees that are important. In a Graph, for example, every node has a name, and the names have to be unique. In a tree, nodes don’t have to be named, but they can have “payloads” that don’t have to be unique. In addition, in a graph, the edges at a given node don’t appear in any particular order; in a tree they do. Finally, a tree has a specific root node; a graph doesn’t necessarily have anything like this.
When we were designing Tree we at first thought we’d have to have separate symbolic representations of whole trees, subtrees and leaf nodes. But it turned out that we were able to make an elegant design with Tree alone. Nodes in a tree typically have the form Tree[payload, {child1, child2, …}], where each childi is a subtree. A node doesn’t necessarily have to have a payload, in which case it can just be given as Tree[{child1, child2, …}]. A leaf node is then Tree[expr, None] or Tree[None].
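As a rough analog in Python (not Wolfram Language), a node with an optional, non-unique payload and an ordered list of children might look like this; TreeNode and leaf_count are invented names for illustration:

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class TreeNode:
    """Rough Python analog of the Tree design described above:
    an optional, non-unique payload plus an ordered list of children."""
    payload: Optional[Any] = None
    children: List["TreeNode"] = field(default_factory=list)

    def leaf_count(self) -> int:
        """A node with no children is a leaf."""
        if not self.children:
            return 1
        return sum(child.leaf_count() for child in self.children)

# Analog of Tree[a, {Tree[b, None], Tree[c, {Tree[d, None], Tree[e, None]}]}]
t = TreeNode("a", [TreeNode("b"),
                   TreeNode("c", [TreeNode("d"), TreeNode("e")])])
print(t.leaf_count())  # 3 leaves: b, d, e
```

Note how the design differs from a generic graph: children are ordered, payloads need not be unique, and the root is distinguished simply by being the outermost node.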
One very nice feature of this design is that trees can immediately be constructed from subtrees just by nesting expressions:
Tree[{Tree[a, {Tree[b, None], Tree[c, {Tree[d, None], Tree[e, None]}]}], Tree[a, {Tree[b, None], Tree[c, {Tree[d, None], Tree[e, None]}]}], Tree[{CloudGet["http://wolfr.am/VAsaSro1"]}]}]
By the way, we can turn this into a generic Graph object with TreeGraph:
TreeGraph[%] 
Notice that since Graph doesn’t pay attention to ordering of nodes, some nodes have effectively been flipped in this rendering. The nodes have also had to be given distinct names in order to preserve the tree structure:
Graph[CloudGet["http://wolfr.am/VAsb0AqA"], VertexLabels -> Automatic]
If there’s a generic graph that happens to be a tree, GraphTree converts it to explicit Tree form:
GraphTree[KaryTree[20]] 
RandomTree produces a random tree of a given size:
RandomTree[20] 
One can also make trees from nesting functions: NestTree produces a tree by nestedly generating payloads of child nodes from payloads of parent nodes:
NestTree[{f[#], g[#]} &, x, 2] 
OK, so given a tree, what can we do with it? There are a variety of tree functions that are direct analogs of functions for generic expressions. For example, TreeDepth gives the depth of a tree:
TreeDepth[CloudGet["http://wolfr.am/VAsbf4XX"]] 
TreeLevel is directly analogous to Level. Here we’re getting subtrees that start at level 2 in the tree:
TreeLevel[CloudGet["http://wolfr.am/VAsbnJeT"], {2}] 
How do you get a particular subtree of a given tree? Basically it has a position, just as a subexpression would have a position in an ordinary expression:
TreeExtract[CloudGet["http://wolfr.am/VAsbnJeT"], {2, 2}] 
TreeSelect lets you select subtrees in a given tree:
TreeSelect[CloudGet["http://wolfr.am/VAsbnJeT"], TreeDepth[#] > 2 &] 
TreeData picks out payloads, by default for the roots of trees (TreeChildren picks out subtrees):
TreeData /@ % 
There are also TreeCases, TreeCount and TreePosition—which by default search for subtrees whose payloads match a specified pattern. One can do functional programming with trees just like with generic expressions. TreeMap maps a function over (the payloads in) a tree:
TreeMap[f, CloudGet["http://wolfr.am/VAsbCysJ"]] 
TreeFold does a slightly more complicated operation. Here f is effectively “accumulating data” by scanning the tree, with g being applied to the payload of each leaf (to “initialize the accumulation”):
TreeFold[{f, g}, CloudGet["http://wolfr.am/VAsbCysJ"]] 
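For readers who want the semantics of these two operations pinned down outside the Wolfram Language, here is a minimal Python sketch, representing a tree as a (payload, children) pair. The function names `tree_map` and `tree_fold` are our own illustrative stand-ins, not Wolfram Language functions:

```python
def tree_map(f, tree):
    """Apply f to every payload while keeping the tree's shape (like TreeMap)."""
    payload, children = tree
    return (f(payload), [tree_map(f, c) for c in children])

def tree_fold(f, g, tree):
    """Fold a tree bottom-up: g initializes each leaf payload, and f combines
    a node's payload with the already-folded results of its children."""
    payload, children = tree
    if not children:
        return g(payload)
    return f(payload, [tree_fold(f, g, c) for c in children])

# (payload, children) pairs standing in for Tree[payload, {children}].
t = ("a", [("b", []), ("c", [("d", []), ("e", [])])])
upper = tree_map(str.upper, t)
count = tree_fold(lambda p, kids: 1 + sum(kids), lambda p: 1, t)  # 5 nodes
```

Counting nodes with a fold is the classic smoke test: each leaf contributes 1 and each parent adds 1 to the sum of its children.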
There are lots of things that can be represented by trees. A classic example is family trees. Here's a case where there's built-in data we can use:
Entity["Person", "QueenElizabethII::f5243"][ EntityProperty["Person", "Children"]] 
This constructs a 2-level family tree:
NestTree[#[EntityProperty["Person", "Children"]] &, Entity["Person", "QueenElizabethII::f5243"], 2] 
By the way, our Tree system is very scalable, and can happily handle trees with millions of nodes. But in Version 12.3 we’re really just starting out; in subsequent versions there’ll be all sorts of other tree functionality, as well as applications to parse trees, XML trees, etc.
We introduced Tree as a basic construct in Version 12.3. In 13.0 we’re extending Tree and adding some enhancements. First of all, there are now options for tree layout and visualization.
For example, this lays out a tree radially (note that knowing it’s a tree rather than a general graph makes it possible to do much more systematic embeddings):
This adds options for styling elements, with one particular element specified by its tree position being singled out as blue:
One of the more sophisticated new “tree concepts” is TreeTraversalOrder. Imagine you’re going to “map across” a tree. In what order should you visit the nodes? Here’s the default behavior. Set up a tree:
Now show in which order the nodes are visited by TreeScan:
This explicitly labels the nodes in the order they are visited:
This order is by default depth-first. But now TreeTraversalOrder allows you to ask for other orderings. Here's breadth-first order:
Here’s a slightly more ornate ordering:
Why does this matter? “Traversal order” turns out to be related to deep questions about evaluation processes and what I now call multicomputation. In a sense a traversal order defines the “reference frame” through which an “observer” of the tree samples it. And, yes, that language sounds like physics, and for a good reason: this is all deeply related to a bunch of concepts about fundamental physics that arise in our Physics Project. And the parametrization of traversal order—apart from being useful for a bunch of existing algorithms—begins to open the door to connecting computational processes to ideas from physics, and new notions about what I’m calling multicomputation.
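The difference between the default and breadth-first orders is easy to make concrete outside the Wolfram Language. Here is a small Python sketch (our own illustrative code, not a Wolfram interface) that lists the payload visit order for each traversal of the same tree:

```python
from collections import deque

def depth_first(tree):
    """Visit a node, then fully explore each child subtree in turn."""
    payload, children = tree
    order = [payload]
    for child in children:
        order.extend(depth_first(child))
    return order

def breadth_first(tree):
    """Visit all nodes at one level before any node at the next level."""
    order, queue = [], deque([tree])
    while queue:
        payload, children = queue.popleft()
        order.append(payload)
        queue.extend(children)
    return order

# (payload, children) pairs; "b" has children "d" and "e".
t = ("a", [("b", [("d", []), ("e", [])]), ("c", [])])
depth_first(t)    # ['a', 'b', 'd', 'e', 'c']
breadth_first(t)  # ['a', 'b', 'c', 'd', 'e']
```

Note that the two orders agree on `a` and `b` but then diverge: depth-first finishes the whole `b` subtree before reaching `c`.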
One of the things we want to do with Wolfram Language is to make it as easy as possible to connect with pretty much any external system. And in modern times an important part of that is being able to conveniently handle cryptographic protocols. And ever since we started introducing cryptography directly into the Wolfram Language five years ago, I’ve been surprised at just how much the symbolic character of the Wolfram Language has allowed us to clarify and streamline things to do with cryptography.
A particularly dramatic example of this has been how we’ve been able to integrate blockchains into Wolfram Language (and Version 12.2 adds bloxberg with several more on the way). And in successive versions we’re handling different applications of cryptography. In Version 12.2 a major emphasis is symbolic capabilities for key management. Version 12.1 already introduced SystemCredential for dealing with local “keychain” key management (supporting, for example, “remember me” in authentication dialogs). In 12.2 we’re also dealing with PEM files.
If we import a PEM file containing a private key we get a nice, symbolic representation of the private key:
private = First[Import["ExampleData/privatesecp256k1.pem"]] 
Now we can derive a public key:
public = PublicKey[%] 
If we generate a digital signature for a message using the private key
GenerateDigitalSignature["Hello there", private] 
then this verifies the signature using the public key we’ve derived:
VerifyDigitalSignature[{"Hello there", %}, public] 
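The sign-with-private-key, verify-with-public-key pattern can be illustrated with a toy textbook-RSA signature in Python. This is purely a sketch of the concept (tiny fixed primes, no padding) and not the elliptic-curve scheme used above:

```python
import hashlib

# Toy "textbook RSA" keypair from tiny fixed primes -- an illustration only,
# never real security.
p, q = 61, 53
n = p * q
phi = (p - 1) * (q - 1)
e = 17                  # public exponent, coprime to phi
d = pow(e, -1, phi)     # private exponent (modular inverse, Python 3.8+)

def sign(message, d, n):
    """Sign the message hash with the private exponent."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message, signature, e, n):
    """Recover the hash with the public exponent and compare."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"Hello there", d, n)
ok = verify(b"Hello there", sig, e, n)  # True
```

The point is the asymmetry: anyone holding only the public pair (e, n) can check the signature, but producing it requires the private exponent d.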
An important part of modern security infrastructure is the concept of a security certificate—a digital construct that allows a third party to attest to the authenticity of a particular public key. In Version 12.2 we now have a symbolic representation for security certificates—providing what’s needed for programs to establish secure communication channels with outside entities in the same kind of way that https does:
Import["ExampleData/client.pem"] 
We first introduced blockchain functionality into Wolfram Language in Version 11.3 (2018), and in each successive version we’re adding more and more blockchain integration. Version 12.3 adds connectivity to the Tezos blockchain:
BlockchainBlockData[-1, BlockchainBase -> "Tezos"]
<|"BlockHash" -> "BKp9B8Z4zNpMDeSaFe2ZU6tywVUhady46Jji1oizxkRc4WGwpkf", "BlockNumber" -> 1460305, "PreviousBlockHash" -> "BL8qGr1awP9RdeCMRExVvyiVadHJvzo9AJGMTfwnWEZDh8BAZf1", "Protocol" -> "PtEdo2ZkT9oKpimTah6x2embF25oss54njMuPzkJTEi5RqfdZFA", "NextProtocol" -> "PtEdo2ZkT9oKpimTah6x2embF25oss54njMuPzkJTEi5RqfdZFA", "Timestamp" -> DateObject[{2021, 5, 6, 15, 23, 25.}, "Instant", "Gregorian", -4.], "ValidationPass" -> 4, "OperationsHash" -> "LLoatzgmfL7B8dz5tdJu6RncRTXmPSBHxNpwbAuKyNc4daJw21m9V", "Fitness" -> {"01", "00000000000c4851"}, "ContextHash" -> "CoVdfkgRAo5QFd5iqhoKX34s4KcaZSTgSee7TsPwmCFaLch7TSnu", "Priority" -> 0, "Nonce" -> "cbbfffdbf95d0300", "Signature" -> DigitalSignature[<|"Type" -> "EllipticCurve", "CurveName" -> "prime256v1", "R" -> ByteArray[{63, 206, 128, 26, 8, 98, 13, 127, 155, 77, 28, 109, 127, 131, 181, 72, 12, 233, 255, 113, 50, 41, 68, 60, 176, 134, 219, 28, 96, 233, 234, 145}], "S" -> ByteArray[{203, 169, 245, 106, 175, 234, 118, 156, 176, 232, 249, 67, 153, 193, 64, 177, 95, 75, 47, 32, 23, 90, 41, 184, 8, 242, 92, 126, 135, 75, 109, 18}], "SignatureType" -> "Deterministic", "HashingMethod" -> None|>], "ConsumedGas" -> 791549625, "Baker" -> "tz3RB4aoyjov4KEVRbuhvQ1CKJgBJMWhaeB8", "BlockReward" -> Quantity[40000000, "Mutez"], "BlockFees" -> Quantity[452892, "Mutez"], "TotalTransactions" -> 61, "TransactionList" -> {"op6VomseCtH7rizEpty4e1kATAP2wZ95Zc2TnciUHgHDxsjAks4", ...}, "TransactionListDetails" -> <|"Endorsement" -> {...}, "Reveal" -> {...}, "Delegation" -> {...}, "Transaction" -> {...}|>|>
In addition to doing blockchain transactions and blockchain analytics with Wolfram Language, we're also doing more and more with computational contracts—for which the full-scale computational language character of the Wolfram Language gives unique opportunities (an example being the creation of "oracles" based on our computational knowledge about the world).
In Version 12.1 we introduced ExternalStorageObject, initially supporting IPFS and Dropbox. In Version 12.3 we’ve added support for Amazon S3 (and, yes, you can store and retrieve a whole bucket of files at a time):
ExternalStorageUpload["ExampleData/spikey2.png", "wolframbucket", ExternalStorageBase -> "AmazonS3"]
A necessary step in all sorts of external interactions is authentication. And in Version 12.3 we’ve added support for OAuth 2.0 workflows. You create a SecuredAuthenticationKey:
SecuredAuthenticationKey[<|"Name" -> "Reddit", "OAuthType" -> "ThreeLegged", "OAuthVersion" -> "2.0", "ConsumerKey" -> "" (*Your key here*), "ConsumerSecret" -> "" (*Your key here*), "ResponseType" -> "code", "Scopes" -> {"read"}, "ScopeDelimiter" -> " ", "VerifierInputFunction" -> "WolframConnectorChannel", "AccessTokenURL" -> "https://www.reddit.com/api/v1/access_token", "UserAuthorizationURL" -> "https://www.reddit.com/api/v1/authorize", "AdditionalParameters" -> <|"AuthorizationRequest" -> {"duration" -> "permanent"}|>|>]
Then you can make a request using this key:
URLRead["https://oauth.reddit.com/api/search_subreddits", Authentication -> %]
You’ll get a browser window that asks you to log in with your account—and then you’ll be off and running.
For many common external services, we have "prepackaged" ServiceConnect connections. Often these require authentication. And for OAuth-based APIs (like Reddit or Twitter) we have our WolframConnector app that brokers the external part of the authentication. A new feature of Version 12.3 is that you can also use your own external app to broker that authentication, so you're not limited by the arrangements made with the external service for the WolframConnector app.
Under the hood for everything we're talking about here is cryptography. And in Version 12.3 we've added some new cryptographic capabilities; in particular we now have support for all elliptic curves in the NIST Digital Signature FIPS 186-4 standard, as well as for Edwards curves that will be part of FIPS 186-5.
We’ve packaged all of this to make it very easy to create blockchain wallets, sign transactions, and encode data for blockchains:
BlockchainKeyEncode[PublicKey[Association["Type" -> "EdwardsCurve", "CurveName" -> "ed25519", "PublicByteArray" -> ByteArray[{129, 57, 198, 230, 91, 48, 63, 133, 232, 63, 173, 17, 49, 237, 190, 143, 151, 108, 127, 202, 73, 93, 64, 14, 198, 177, 194, 15, 13, 79, 120, 246}], "PublicCurvePoint" -> {29933506027159013206695133492829803064919720100154179541174620076772068584535, 53585483370699320092407628906864478900713122887718404470323110058487397628289}]], "Address", BlockchainBase -> "Tezos"]
BlockchainAddressData[%, "DelegateData", BlockchainBase -> "Tezos"]
One of the things that’s happened in the world since the release of Version 12.3 is the mainstreaming of the idea of NFTs. We’ve actually had tools for several years for supporting NFTs—and tokens in general—on blockchains. But in Version 13.0 we’ve added more streamlined NFT tools, particularly in the context of our connection to the Cardano blockchain.
The basic idea of an NFT ("non-fungible token") is to have a unique token that can be transferred between users but not replicated. It's like a coin, but every NFT can be unique. The blockchain provides a permanent ledger of who owns what NFT. When you transfer an NFT, what you're doing is just adding something to the blockchain to record that transaction.
What can NFTs be used for? Lots of things. For example, we issued “NFT certificates” for people who “graduated” from our Summer School and Summer Camp this year. We also issued NFTs to record ownership for some cellular automaton artworks we created in a livestream. And in general NFTs can be used as permanent records for anything: ownership, credentials or just a commemoration of an achievement or event.
In a typical case, there’s a small “payload” for the NFT that goes directly on the blockchain. If there are larger assets—like images—these will get stored on some distributed storage system like IPFS, and the payload on the blockchain will contain a pointer to them.
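As a rough sketch of that split, here is what such a payload might look like, written as a Python dictionary. The nesting follows the CIP-25 Cardano metadata convention, and the policy ID and content hash below are placeholders, not real values:

```python
# A hypothetical NFT payload: the on-chain record is small, and the image
# asset lives in IPFS. "721" is the CIP-25 metadata label; "<policy-id>"
# and "<content-hash>" are placeholders for real identifiers.
nft_payload = {
    "721": {
        "<policy-id>": {
            "MyToken": {
                "name": "My Token",
                "image": "ipfs://<content-hash>",
            }
        }
    }
}
```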
Here’s an example that uses several of our blockchain functions—as well as the new connection to the Cardano blockchain—to retrieve from IPFS the image associated with an NFT that we minted a few weeks ago:
How can you mint such an NFT yourself? The Wolfram Language has the tools to do it. ResourceFunction["MintNFT"] in the Wolfram Function Repository provides one common workflow (specifically for the CIP-25 Cardano NFT standard)—and there'll be more coming.
The full story of blockchain below the “pure consumer” level is complicated and technical. But the Wolfram Language provides a uniquely streamlined way to handle it, based on symbolic representations of blockchain constructs, that can directly be manipulated using all the standard functions of the Wolfram Language. There are also many different blockchains, with different setups. But through lots of effort that we’ve made in the past few years, we’ve been able to create a uniform framework that interoperates between different blockchains while still allowing access to all of their special features. So now you just set a different BlockchainBase (Bitcoin, Ethereum, Cardano, Tezos, ARK, Bloxberg, …) and you’re ready to interact with a different blockchain.
Want to learn more about cryptography? Sign up for Wolfram U’s free Introduction to Cryptography course. 
Each issue of the Mathematical Association of America’s Math Horizons presents readers with puzzles to solve, and the April 2021 issue included the “Knightdoku” challenge created by David Nacin, a math professor at William Paterson University in Wayne, New Jersey.
In this puzzle, a simple Sudoku-like problem is described based on chess knights. Each cell in the 9×9 grid may contain a knight. The initial board configuration places a set of knights, each labeled with the number of knights that must be present in its neighborhood in the solution. The neighborhood of a knight is the set of cells reachable from the knight in the single L-shaped move that knights are allowed to make in chess.
In addition to the initial placement of knights, a valid solution must obey a Sudoku-like constraint. Specifically, each row, each column and each 3×3 block must have exactly three knights.
The puzzle included two board configurations to solve: a warmup board and a regular—that is, harder!—one. Here’s the warmup board:
And here’s the regular board:
The challenge for me was not just solving Nacin’s Knightdoku puzzle but to do so using the Wolfram Language, which offers a variety of ways to solve Sudoku puzzles.
Games like Sudoku are relatively straightforward to solve using Boolean constraint solvers. In essence, you boil the problem down to relations among a set of logical variables that represent possible board configurations.
For example, if we have two cells where we want to make one true and the other false, we can create four variables: two for the first cell (cell1false, cell1true) and two for the second cell (cell2false, cell2true). A valid configuration would satisfy the constraint (cell1false and cell2true) or (cell1true and cell2false). This logical expression ((cell1false&&cell2true)||(cell1true&&cell2false)) can be handed to a satisfiability solver to determine if a configuration exists that satisfies the logical constraints.
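This two-cell example is small enough to check mechanically. Here is a Python sketch (standard library only, with a brute-force enumerator standing in for a real SAT solver) that also includes the exactly-one-of-two encoding each cell needs:

```python
from itertools import product

def find_satisfying(constraint, names):
    """Brute-force stand-in for a SAT solver: try every True/False assignment."""
    for values in product([False, True], repeat=len(names)):
        assignment = dict(zip(names, values))
        if constraint(assignment):
            return assignment
    return None

names = ["cell1false", "cell1true", "cell2false", "cell2true"]

def constraint(v):
    # Each cell must be exactly one of marked-false / marked-true ...
    well_formed = (v["cell1false"] != v["cell1true"]) and \
                  (v["cell2false"] != v["cell2true"])
    # ... and one cell is true while the other is false.
    wanted = (v["cell1false"] and v["cell2true"]) or \
             (v["cell1true"] and v["cell2false"])
    return well_formed and wanted

solution = find_satisfying(constraint, names)
```

Brute force is exponential in the number of variables, of course; for the full 9×9 board a proper satisfiability solver is what makes the approach practical.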
First, we must create some helper functions for forming conjunctions and disjunctions from lists that will be useful later in building our logical expressions:
The initial board configuration is a list of triples {x,y,n}, where {x,y} is the position on the board (using 1-based indices) and n is the number of neighbors of the knight at {x,y} that contain a knight in the solution. A neighbor is defined as a cell reachable via a legal knight move.
First, we create a basic configuration for the warmup board:
Then we make the regular board configuration:
For convenience, we will also create some associations for use later in looking up these initial markings when we plot the solver results:
We need to encode the state of the board via logical variables, so we define a set of Boolean values for the possible states of each cell (has a knight, has no knight). We use the convention that s[[i,j,1]] means {i,j} has a knight, and that s[[i,j,2]] means there is no knight:
We’ll also create an association mapping coordinates to the two logical variables for that coordinate (this is mostly useful in debugging and looking at constraints):
The first logical constraint we establish is necessary to guarantee that a cell is either marked or unmarked. Having a cell that is neither marked nor unmarked, or both marked and unmarked, is invalid, so we exclude them:
Most of the code we write for constraints looks like this. In this case, the innermost tables set up a per-cell constraint. We then map AndList, the function we created earlier, over the table to form a conjunction from the columns of each row of the table, and then apply AndList one more time to conjoin the rows into one large logical expression.
For our initial configuration, we need to consider the cells that are reachable from each knight, obeying the boundaries of the board. We can write a simple function that enumerates the coordinates of the neighbors of a cell {x,y}:
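In Python, such an enumeration might look like this (a sketch with our own names, not the author's code): take the eight knight offsets and keep only the landing squares that stay on the 9×9 board.

```python
# The eight L-shaped knight offsets.
OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_neighbors(x, y, size=9):
    """Coordinates (1-based) reachable from (x, y) by one knight move,
    clipped to the board boundaries."""
    return [(x + dx, y + dy) for dx, dy in OFFSETS
            if 1 <= x + dx <= size and 1 <= y + dy <= size]

corner = knight_neighbors(1, 1)   # a corner cell has only two neighbors
center = knight_neighbors(5, 5)   # an interior cell has all eight
```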
For a given position and number of expected knight neighbors, generate all possible valid assignments. We achieve this by taking the set of neighbors and associating with each a value of 1 or 2. The order of 1 and 2 assignments is achieved by calculating all permutations of a sequence of 1s and 2s containing the appropriate number of 1s and 2s based on the expected number of neighbors. We include the knight in the center of the neighborhood as marked (s[[x,y,1]]):
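One way to sketch that generation in Python is to choose which n of the neighbor cells are marked, using combinations instead of explicit permutations of 1s and 2s (the names here are our own illustrative choices):

```python
from itertools import combinations

def neighborhood_assignments(neighbors, n):
    """All ways to mark exactly n of the given neighbor cells.
    Each assignment maps a cell to 1 (has a knight) or 2 (no knight)."""
    out = []
    for marked in combinations(neighbors, n):
        out.append({cell: (1 if cell in marked else 2) for cell in neighbors})
    return out

cells = [(2, 3), (3, 2), (3, 4)]
assignments = neighborhood_assignments(cells, 2)  # C(3, 2) = 3 ways
```

Each of these assignments becomes one disjunct in the per-neighborhood constraint: the neighborhood is satisfied if any one of them holds.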
Combining these is similar to what we did above, but with the addition of Or in the expression. Specifically, we need to And the neighbors together for each neighborhood, and join each possible neighborhood with an Or. Finally, we conjoin all of these And/Or expressions across all initial knight markings:
We also need to add the generic board constraints that are similar to Sudoku: exactly three knights per row, column and 3×3 block. These follow the same pattern as above: we create all permutations of marked/unmarked for each row, column and block, and join them with And and Or operators.
Add a constraint that exactly three per row can be set:
Similarly, set a constraint that exactly three per column can be set:
Also add a constraint for 3×3 boxes:
Now we’re ready to solve both puzzle boards.
We can solve the system using the satisfiability solver over the set of logical variables:
For visualization, we reshape the result to determine the assignment to each logical variable in the same shape as the board. The original knights are shown with a superscript indicating the number of neighbors each must have; the knights that were filled in by the solver are shown without one:
We can apply the same technique to the second, harder board provided by Nacin:
If you’re interested in other examples of the Wolfram Language applied to Sudoku puzzles, check out posts such as “Solving Sudoku as an Integer Programming Problem” and “Using Recursion and FindInstance to Solve Sudoku” by Wolfram Community members. You can also find more interesting puzzles created by David Nacin at Quadrata, his puzzle blog.
Matthew Sottile is a computer scientist who works in the Center for Applied Scientific Computing at the Lawrence Livermore National Laboratory. His research is in the areas of programming languages, compilers and formal methods. Sottile has been an active Wolfram Mathematica user since 1995 and uses it for generative art, solving puzzles and studying rewriting and proof systems.
Visit Wolfram Community or the Wolfram Function Repository to embark on your own computational adventures! 
Explore the contents of this article with a free Wolfram System Modeler trial. Bowling is a simple game that consists of a ball, 10 pins and a lane. You take the ball, come to the starting line, aim between pins 1 and 3 and throw the ball. You instinctively assume that the ball and the lane are perfect and expect the ball to go straight where you aimed.
However, we’ll almost always find that this intuition is wrong. Jan Brugård and I will try to explain this phenomenon and reveal the physics behind bowling in this blog post by using Wolfram System Modeler to explore different effects.
As a curious person with zero bowling experience, I started to model the game and noticed tons of parameters to decide, such as initial ball velocity, initial ball position, rotational speeds and more. I had no clue which were important and which were not. I found plenty of interesting material on the internet, but I wanted to keep my inexperienced, childlike spirit alive. So I decided to go bowling with my wife for the first time in my life. As the saying goes, better late than never.
We found a bowling alley near where we live. Yes, we were now onsite!
Right off the bat, we started playing. I chose a ball, came to the middle of the lane at the starting line, aimed at the head pin and threw the ball. It was not that fast, and it took the ball a bit more than two seconds to reach the pins. It went straight part of the way and then, to my disappointment—and against my intuition—it hooked 20 cm to the left. Why?
After our game, I went back to work and implemented a model that included the ball, the alley and the contact between them:
I designed it following the United States Bowling Congress (USBC) rules, including a lane length of 18.29 m.
I simulated the first version of my model and tried to replicate my firstever bowling throw. Let’s look at the resulting System Modeler animation:
As you see, contrary to my first attempt, the ball went straight the whole way, so why did my throw deviate halfway? Was it something with the ball, lane or me?
Let’s start with the ball. It may sound weird at first, but the ball is not perfect. You might think this is due to the holes, and yes, you are partly right. Then again, ball manufacturers add some counterweight to balance these holes. Yet the mass they add might also cause differences in the radius of gyration (i.e. gyradius), either intentionally or unintentionally. If any difference exists, we will likely see deviations in the bowling throws because of the tennis racket theorem (AKA the Dzhanibekov effect), as illustrated in this blog post.
Let’s add this imperfection to the ball and see what happens:
After checking the USBC rules again, I used the maximum radius of gyration difference allowed. It was tiny, only 0.2 mm. As you see, the ball does not go straight this time. It starts out that way, and then it hooks. This deviation is not, however, as large as the one I observed.
It is possible to see this effect more clearly by playing around with the radii of gyration:
It curves erratically. Getting such curves, however, requires designing a ball beyond the allowed limits. So there must be something else explaining the difference between my first throw and the model. After failing with my first throw in the bowling lane, I decided to shift my initial position about 20 cm to the right while keeping the ball speed roughly the same.
It went as before; however, it hooked more than the first throw and hit pin 2.
How does shifting the initial position affect the result in my model? The following code illustrates this:
It goes as expected, like the earlier throw. Contrary to my real-life experience, however, there is no difference in deviation. When I read more about bowling lanes, I learned that lanes have uneven friction on purpose.
Yes, it sounds confusing, but this increases the complexity and, thus, the competition. Lanes are even named for their different friction patterns, including “shark,” “bear” and “tiger.” In general, they have very low friction in the first two-thirds of the lane, but the friction increases almost tenfold in the last part. So what happens if I introduce this to my model?
Admittedly, the friction distribution varies a lot from lane to lane, which makes it hard for a rookie bowler to account for every detail. So I will model it as a variable: after 1.5 seconds, the ball will reach the dry part of the lane, like this:
I modeled the lane this way, threw my first shot, and this is what I got:
Yes, it curves a bit more than before, but still not as much as my throw. Perhaps it was just me making bad throws? But pros bend these balls even more dramatically in official tournaments. There had to be something I was missing. When I checked the possible reasons, I remembered how pros throw the ball: they spin it!
That also explained why I got a different trajectory even though I thought I had replicated every detail of my former throw, including position and arm swing (to match the ball velocity). While throwing the ball, I had also unintentionally spun it.
This angular velocity affects the trajectory too. I tried to count the number of spins around the y axis by watching the finger holes, but that turned out to be impossible for me because the ball skids and rotates in the direction of the alley. Since these values are hard to observe, I looked them up: the spinning speed is between 1–10 rad/s, depending on the bowler. In this case, I assumed it was 5 rad/s.
The following figure explains how the spinning ball and the surface work together. The ball first skids along the oiled part of the lane, losing a bit of energy, and starts curving as the friction increases. It then gets full traction on the dry part and, because of its rotational speed, displays mind-bending curves!
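The skid-then-hook behavior can be caricatured in a few lines of Python. This is a crude point-mass sketch, not the System Modeler model: every number here (friction coefficients, speed, the spin factor coupling side spin to lateral friction) is made up purely for illustration, with only the lane length and the two-thirds oil pattern taken from the text above.

```python
G = 9.81                # gravitational acceleration (m/s^2)
LANE = 18.29            # USBC lane length (m)
OIL_END = 2 * LANE / 3  # oiled portion: roughly the first two-thirds

def throw(speed, spin_factor, mu_oil=0.01, mu_dry=0.2, dt=1e-3):
    """Toy model of a hooking ball: friction decelerates the ball, and
    (because of side spin) also pushes it sideways, scaled by the local
    friction coefficient. All parameter values are hypothetical."""
    x = y = 0.0
    vx, vy = speed, 0.0
    while x < LANE:
        mu = mu_oil if x < OIL_END else mu_dry
        vx -= mu * G * dt                 # forward deceleration
        vy -= mu * G * spin_factor * dt   # lateral "hook" acceleration
        x += vx * dt
        y += vy * dt
    return y  # lateral displacement at the pins (negative = hooks left)

straight = throw(8.0, spin_factor=0.0)  # no spin: stays on line
hooked = throw(8.0, spin_factor=0.5)    # with spin: drifts, then hooks
print(straight, hooked)
```

Even this caricature reproduces the qualitative story: almost no lateral drift on the oil, then a sharp hook of tens of centimeters once the ball reaches the dry backend.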
After adding the spinning angular velocity, I finally got results similar to those in the field:
If you want to play around with the model, add any parameter that you think increases the accuracy.
Now we know that the ball curves in most cases, but not how to make it hook as we please. And why would we want it to hook so much in the first place?
The answer is probably the angle of attack. It is pretty hard to throw a strike. Most bowlers realize quickly that there is a narrow pocket—between pins 1 and 3 (for right-handers) or pins 1 and 2 (for left-handers)—through which to achieve a strike. Hooking the ball gives it a better angle into this pocket and increases the tolerance for error. When a ball is rolled straight, hitting the pocket must be precise. A hooked ball also hits the pins with more force, producing better carry-through.
A straight roll—even when it hits the pocket—will tend to just tap the pins if the ball is slightly right of the center pocket or has an inadequate entry angle. In other words, a hooked ball can achieve strikes with less precise hits.
There are many more things to consider, but let’s wrap up this story. We have evaluated the effect of parameters that I observed as a rookie on a bowling ball’s trajectory.
In the end, I think I can answer the question of why your intuition about bowling is wrong. As I mentioned earlier, neither the ball, nor the lane, nor I, nor (possibly) you is perfect. But hopefully, playing around with the following bowling game can give you insights that move you toward perfection:
Now that you’ve seen our efforts to explain how and why bowling balls roll the way they do, download the model and run it yourself using Wolfram System Modeler to replicate the strike.
Check out Wolfram’s System Modeler Modelica Library Store or sign up for a System Modeler free trial. 
Cryptography has been around since time immemorial, and in the modern technological age is an omnipresent, often invisible middleman that helps protect your data. As a field of study, it combines mathematics, computer science, physics and even linguistics. As a tool, it concerns informatics, business, finance, politics, human rights—any sector that deals with personal information or requires communication. In fact, it’s hard to imagine a sector that cryptography does not impact.
Today, I am happy to announce a new, free interactive course, Introduction to Cryptography, that will help students around the world get a grasp on the variety of topics this vast field offers. The Wolfram Language allows the course to deliver unique hands-on material and address questions such as “How can I secretly transmit information between two people?” and “How do cryptocurrencies operate without a central authority?”
I also invite you to start exploring the interactive course by clicking the following image.
Ever since writing was invented, people have been interested not only in using it to communicate but also in concealing the content of messages from those they do not trust. For centuries, the alchemy of private communication was an art known to few. It was certainly not as important in the everyday life of most people then as it is now, but it has always been an ever-present tool for gaining military or political advantage throughout the history of humanity.
Since World War II—and especially with the rise of the internet—cryptography has grown beyond encryption alone to include a group of special-purpose algorithms. These sustain the wider infrastructure of information security, such as user and message authentication and protection from illegitimate changes to messages and from eavesdropping. In the past 50 years, cryptography has become a science, the workings of which this course covers.
Students taking this course will receive an introduction to the fundamentals of cryptography within the larger context of information security. Much of the course focuses on indispensable cryptographic algorithms in wide use: hash functions, secret-key encryption, public-key cryptosystems and digital signatures. Introduction to Cryptography also discusses some more advanced applications of cryptography, such as blockchains and secure password storage.
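As a taste of what the first of those primitives looks like in practice, here is a hash function in Python (the course itself uses the Wolfram Language): hashing is deterministic, yet changing even one character of the input scrambles roughly half of the output bits.

```python
import hashlib

digest1 = hashlib.sha256(b"abc").hexdigest()
digest2 = hashlib.sha256(b"abd").hexdigest()

# Deterministic: SHA-256 of "abc" is a fixed, well-known test vector.
print(digest1)  # ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

# Avalanche effect: a tiny input change flips roughly half the 256 bits.
diff = bin(int(digest1, 16) ^ int(digest2, 16)).count("1")
print(diff, "of 256 bits differ")
```

This avalanche property is what makes hashes useful for message authentication and password storage alike.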
Here’s a sneak peek at some of the course topics (shown in the left-hand column):
The course consists of 25 video lessons, averaging about 10 minutes each, supplemented with roughly 150 pages of written material in interactive notebooks. Working at a steady pace, students should be able to finish watching all videos and complete the six quizzes in six to eight hours.
I intentionally tried to keep the lessons self-contained; however, a basic understanding of computer science, algebra and modular arithmetic will be helpful.
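For readers who want to check their modular-arithmetic footing before starting, here is the classic textbook RSA toy example in Python (with deliberately tiny, insecure primes; this is an arithmetic warm-up, not course material):

```python
# Textbook RSA with tiny primes -- for arithmetic practice only, never real use.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: modular inverse (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
print(ciphertext, recovered)       # recovered == 65
```

Everything in the scheme is just exponentiation mod n, which is exactly the kind of arithmetic the course builds on.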
The rest of this blog post will describe the different sections of the course in detail.
The course has 25 lessons, beginning with the “Historic Perspective” section. It consists of three lessons that present the history of and milestones in the field, all the way from Caesar in ancient times to the twentieth century, where cryptography has played a crucial role in war outcomes and affected human lives more than ever.
The core of the course is dedicated to the fundamental tools of cryptography: hashes, secret-key ciphers, public-key encryption and signature schemes. Each concept and cryptosystem is introduced within the context of the information security objectives it is meant to achieve, where it is used in real-world scenarios and applications, and its pros and cons in comparison to other algorithms.
Each of the 25 videos is supplemented by a detailed transcript notebook displayed on the right-hand side of the screen. You can copy and paste Wolfram Language input directly from the transcript notebook to the embedded scratch notebook to try the examples for yourself.
Most sections of the course end with a short, multiple-choice quiz with 10 questions. A student who reviews the section carefully should have no difficulty in doing well on the quiz.
Students will receive instant feedback about their responses to quiz questions, and they are encouraged to go back to a section’s lesson notebooks for reference and to review the material as many times as needed.
I strongly encourage students to watch all the lessons and attempt the quizzes in the recommended sequence because each course topic builds on earlier concepts and techniques. Plus, each new cryptosystem is presented with respect to the issues it solves compared to ones already discussed. You can earn a certificate of completion, pictured here, at the end of the course.
A course certificate is earned after watching all the lesson videos and passing all the quizzes. It demonstrates your understanding of the fundamentals of cryptography and will add value to your resume or social media profile.
In the modern world of digital communications and interconnected remote systems, an understanding of the fundamental concepts of cryptography is undeniably useful for students of computer science and engineering, as well as for professionals. I hope that Introduction to Cryptography will help you to achieve mastery of information security and allow you to store and transmit data in your business or applications more securely.
For students who wish to dive deeper into mathematical and technical details, almost every lesson notebook points to relevant literature on its topic. At the end of the course, I have also provided a list of textbooks that greatly aided my own path in studying cryptography and preparing this course.
I have enjoyed teaching the course and welcome any comments about it as well as suggestions for future courses.
I would like to thank Konstantin Kouptsov for being the technical editor-in-chief and Cassidy Hinkle, Abrita Chakravarty, Veronica Mullen, Tim Shedelbower, Joyce Tracewell and Amruta Behera for their dedicated work on various aspects (visuals, quizzes and videos) of the course.
I would also like to thank the professors at my alma mater, the Igor Sikorsky Kyiv Polytechnic Institute, for instilling a love for cryptography in me years ago. Those initial steps began the journey that ultimately led to the creation of this course.
Want more help? Register for one of Wolfram U’s Daily Study Groups. 
But what our Physics Project suggests is that underneath everything we physically experience there is a single very general abstract structure—that we call the ruliad—and that our physical laws arise in an inexorable way from the particular samples we take of this structure. We can think of the ruliad as the entangled limit of all possible computations—or in effect a representation of all possible formal processes. And this then leads us to the idea that perhaps the ruliad might underlie not only physics but also mathematics—and that everything in mathematics, like everything in physics, might just be the result of sampling the ruliad.
Math is big, and math is important. And for the Wolfram Language (which also means for Mathematica) we’re always pushing the frontiers of what’s computable in math.
One long-term story has to do with special functions. Back in Version 1.0 we already had 70 special functions. We covered univariate hypergeometric functions—adding the general pFq case in Version 3.0. Over the years we’ve gradually added a few other kinds of hypergeometric functions (as well as 250 other new kinds of special functions). Typical hypergeometric functions are solutions to differential equations with three regular singular points. But in Version 12.1 we’ve generalized that. And now we have Heun functions, which solve equations with four regular singular points. That might not sound like a big deal, but actually they’re quite a mathematical jungle—for example, with 192 known special cases. And they’re very much in vogue now, because they show up in the mathematics of black holes, quantum mechanics and conformal field theory. And, yes, Heun functions have a lot of arguments:
Series[HeunG[a, q, \[Alpha], \[Beta], \[Gamma], \[Delta], z], {z, 0, 3}] 
By the way, when we “support a special function” these days, there’s a lot we do. It’s not just a question of evaluating the function to arbitrary precision anywhere in the complex plane (though that’s often hard enough). We also need to be able to compute asymptotic approximations, simplifications, singularities, etc. And we have to make sure the function can get produced in the results of functions like Integrate, DSolve and Sum.
One of our consistent goals in dealing with superfunctions like DSolve is to make them “handbook complete”. To be sure that the algorithms we have—that are built to handle arbitrary cases—successfully cover as much as possible of the cases that appear anywhere in the literature, or in any handbook. Realistically, over the years, we’ve done very well on this. But in Version 12.1 we’ve made a new, big push, particularly for DSolve.
Here’s an example (oh, and, yes, it happens to need Heun functions):
DSolveValue[(d + c x + b x^2) y[x] + a x y'[x] + (1 + x^2) y''[x] == 0, y[x], x] 
There’s a famous book from the 1940s that’s usually just called Kamke, and that’s a huge collection of solutions to differential equations, some extremely exotic. Well, we’ll soon be able to do 100% of the (concrete) equations in this book (we’re still testing the last few…).
In Version 12.0 we introduced functions like ComplexPlot and ComplexPlot3D for plotting complex functions of complex variables. In Version 12.1 we now also have complex contour plotting. Here we’re getting two sets of contours—from the Abs and the Arg of a complex function:
ComplexContourPlot[AbsArg[(z^2 - I)/(z^3 + I)], {z, -3 - 3 I, 3 + 3 I}, Contours -> 30]
Also new in 12.1 is ComplexRegionPlot, which effectively solves equations and inequalities in the complex plane. Like here’s the (very much branch-cut-informed) solution to an equation whose analog would be trivial over the reals:
ComplexRegionPlot[Sqrt[z^(2 + 2 I)] == z^(1 + I), {z, 10}] 
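You can see why branch cuts make this nontrivial with a quick numeric spot check—here a Python sketch using principal branches, not the Wolfram Language computation: Sqrt[z^(2 + 2 I)] and z^(1 + I) agree for some z but disagree for others.

```python
import cmath

def lhs(z):
    return cmath.sqrt(z ** (2 + 2j))  # principal square root of z^(2+2i)

def rhs(z):
    return z ** (1 + 1j)              # principal branch of z^(1+i)

# On the positive real axis the two sides agree...
print(abs(lhs(2 + 0j) - rhs(2 + 0j)))  # essentially zero

# ...but at z = -1 the principal branches disagree by a sign:
print(lhs(-1 + 0j), rhs(-1 + 0j))      # roughly +e^-pi versus -e^-pi
```

Over the reals the identity would be trivial; over the complexes, which z satisfy it is exactly what ComplexRegionPlot is mapping out.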
In a very different area of mathematics, another new function in Version 12.1 is CategoricalDistribution. We introduced the idea of symbolic representations of statistical distributions back in Version 6—with things like NormalDistribution and PoissonDistribution—and the idea has been extremely successful. But so far all our distributions have been distributions over numbers. In 12.1 we have our first distribution where the possible outcomes don’t need to be numbers.
Here’s a distribution where there are outcomes x, y, z with the specified probabilities:
dist = CategoricalDistribution[{x, y, z}, {.1, .2, .7}] 
Given this distribution, one can do things like generate random variates:
RandomVariate[dist, 10] 
Here’s a 3D categorical distribution:
dist = CategoricalDistribution[{{"A", "B", "C"}, {"D", "E"}, {"X", "Y"}}, {{{2, 4}, {2, 1}}, {{2, 2}, {3, 2}}, {{4, 3}, {1, 3}}}] 
Now we can work out the PDF of the distribution, asking in this case what the probability to get A, D, Y is:
PDF[dist, {"A", "D", "Y"}] 
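A quick Python analogue makes the bookkeeping explicit (this is an illustration of the idea, not the Wolfram implementation): assuming the weight table is normalized by its total, the probability of (A, D, Y) is its table entry, 4, divided by the sum of all entries, 29.

```python
import random

# The same 3 x 2 x 2 weight table, flattened: outer A/B/C, middle D/E, inner X/Y.
weights = {("A","D","X"): 2, ("A","D","Y"): 4, ("A","E","X"): 2, ("A","E","Y"): 1,
           ("B","D","X"): 2, ("B","D","Y"): 2, ("B","E","X"): 3, ("B","E","Y"): 2,
           ("C","D","X"): 4, ("C","D","Y"): 3, ("C","E","X"): 1, ("C","E","Y"): 3}

total = sum(weights.values())                # 29
pdf = {k: w / total for k, w in weights.items()}
print(pdf[("A", "D", "Y")])                  # 4/29

# Random variates, weighted by the table:
keys = list(weights)
sample = random.choices(keys, weights=[weights[k] for k in keys], k=10)
print(sample)
```

The non-numeric outcomes are just dictionary keys here, which is the essential point of a categorical distribution.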
By the way, if you want to “see the distribution” you can either click the + on the summary box, or explicitly use Information:
Information[dist, "ProbabilityTable"] 
There are lots of uses of CategoricalDistribution, for example in machine learning. Here we’re creating a classifier:
cf = Classify[{1, 2, 3, 4} -> {a, a, b, b}]
If we just give it input 2.3, the classifier will give its best guess for the corresponding output:
cf[2.3] 
But in 12.1 we can also ask for the distribution—and the result is a CategoricalDistribution:
cf[2.3, "Distribution"] 
Information[%, "ProbabilityTable"] 
Math has been a core use case for the Wolfram Language (and Mathematica) since the beginning. And it’s been very satisfying over the past third of a century to see how much math we’ve been able to make computational. But the more we do, the more we realize is possible, and the further we can go. It’s become in a sense routine for us. There’ll be some area of math that people have been doing by hand or piecemeal forever. And we’ll figure out: yes, we can make an algorithm for that! We can use the giant tower of capabilities we’ve built over all these years to systematize and automate yet more mathematics; to make yet more math computationally accessible to anyone. And so it has been with Version 12.2. A whole collection of pieces of “math progress”.
Let’s start with something rather cut and dried: special functions. In a sense, every special function is an encapsulation of a certain nugget of mathematics: a way of defining computations and properties for a particular type of mathematical problem or system. Starting from Mathematica 1.0 we’ve achieved excellent coverage of special functions, steadily expanding to more and more complicated functions. And in Version 12.2 we’ve got another class of functions: the Lamé functions.
Lamé functions are part of the complicated world of handling ellipsoidal coordinates; they appear as solutions to the Laplace equation in an ellipsoid. And now we can evaluate them, expand them, transform them, and do all the other kinds of things that are involved in integrating a function into our language:
Plot[Abs[LameS[3/2 + I, 3, z, 0.1 + 0.1 I]], {z, -8 EllipticK[1/3], 8 EllipticK[1/3]}]
Series[LameC[\[Nu], j, z, m], {z, 0, 3}] 
Also in Version 12.2 we’ve done a lot on elliptic functions—dramatically speeding up their numerical evaluation and inventing algorithms doing this efficiently at arbitrary precision. We’ve also introduced some new elliptic functions, like JacobiEpsilon—which provides a generalization of EllipticE that avoids branch cuts and maintains the analytic structure of elliptic integrals:
ComplexPlot3D[JacobiEpsilon[z, 1/2], {z, 6}] 
We’ve been able to do many symbolic Laplace and inverse Laplace transforms for a couple of decades. But in Version 12.2 we’ve solved the subtle problem of using contour integration to do inverse Laplace transforms. It’s a story of knowing enough about the structure of functions in the complex plane to avoid branch cuts and other nasty singularities. A typical result effectively sums over an infinite number of poles:
InverseLaplaceTransform[Coth[s \[Pi] /2 ]/(1 + s^2), s, t] 
And between contour integration and other methods we’ve also added numerical inverse Laplace transforms. It all looks easy in the end, but there’s a lot of complicated algorithmic work needed to achieve this:
InverseLaplaceTransform[1/(s + Sqrt[s] + 1), s, 1.5] 
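Numerical Laplace inversion can be sketched in a few lines with the classic Gaver–Stehfest algorithm—a different method from the contour-integration approach described above, and only suitable for smooth, real-valued originals, but a nice illustration of the idea:

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k (N must be even)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def inverse_laplace(F, t, N=12):
    """Approximate f(t) from the Laplace transform F(s) on the real axis."""
    ln2 = math.log(2)
    V = stehfest_coefficients(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# Check against a known transform pair: F(s) = 1/(s+1)  <=>  f(t) = exp(-t).
approx = inverse_laplace(lambda s: 1 / (s + 1), 1.0)
print(approx, math.exp(-1))
```

The method only ever evaluates F on the positive real axis, which is precisely why it cannot handle the oscillatory, branch-cut-laden cases that the contour-integration machinery is built for.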
Another new algorithm made possible by finer “function understanding” has to do with asymptotic expansion of integrals. Here’s a complex function that becomes increasingly wiggly as λ increases:
Table[ReImPlot[(t^10 + 3) Exp[I \[Lambda] (t^5 + t + 1)], {t, -2, 2}], {\[Lambda], 10, 30, 10}]
And here’s the asymptotic expansion for λ→∞:
AsymptoticIntegrate[(t^10 + 3) Exp[I \[Lambda] (t^5 + t + 1)], {t, -2, 2}, {\[Lambda], Infinity, 2}]
Version 1 of Mathematica was billed as “A System for Doing Mathematics by Computer”, and—for more than three decades—in every new version of Wolfram Language and Mathematica there’ve been innovations in “doing mathematics by computer”.
For Version 12.3 let’s talk first about symbolic equation solving. Back in Version 3 (1996) we introduced the idea of implicit “Root object” representations for roots of polynomials, allowing us to do exact, symbolic computations even without “explicit formulas” in terms of radicals. Version 7 (2008) then generalized Root to also work for transcendental equations.
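As a numeric (not symbolic) analogue of what a transcendental Root object represents, here is a plain Newton iteration in Python converging to the root of x = cos x, the Dottie number—the kind of root that has no closed form in radicals or elementary functions:

```python
import math

def newton(f, fprime, x0, tol=1e-14, max_iter=50):
    """Plain Newton's method; returns an approximate root of f."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# The root of x - cos(x) = 0 as a plain machine number:
root = newton(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), 0.7)
print(root)  # ~0.7390851332151607
```

A Root object carries this same root exactly and symbolically, so it can later be evaluated to any precision rather than just to machine accuracy.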
What about systems of equations? For polynomials, elimination theory means that systems really aren’t a different story from individual equations; the same Root objects can be used. But for transcendental equations, this isn’t true anymore. For Version 12.3, though, we’ve figured out how to generalize Root objects so they can work with multivariate transcendental roots:
Solve[Sin[x y] == x^2 + y && 3 x E^y == 2 y E^x + 1 && -3 < x < 3 && -3 < y < 3, {x, y}, Reals]
And because these Root objects are exact, they can for example be evaluated to any precision:
N[First[x /. %], 150] 
In Version 12.3 there are also some new equations, involving elliptic functions, where exact symbolic results can be given, even without Root objects:
Reduce[JacobiSN[x, 2 y] == 1, x] 
A major advance in Version 12.3 is being able to solve symbolically any linear system of ODEs (ordinary differential equations) with rational function coefficients.
Sometimes the result involves explicit mathematical functions:
DSolve[{Derivative[1][x][t] == (4 x[t])/t + (4 y[t])/t, Derivative[1][y][t] == (4/t - t/4) x[t] - (4 y[t])/t}, {x[t], y[t]}, t]
Sometimes there are integrals—or differential roots—in the results:
DSolveValue[{Derivative[1][y][x] + 2 Derivative[1][z][x] == z[x], (3 + x) x^2 (y[x] + z[x]) + Derivative[1][z][x] == (1 + 3 x^2) z[x]}, {y[x], z[x]}, x] // Simplify 
Another ODE advance in Version 12.3 is full coverage of linear ODEs with q-rational function coefficients, in which variables can appear explicitly or implicitly in exponents. The results are exact, though they typically involve differential roots:
DSolve[2^x y[x] + ((1 + 2^x) Derivative[4][y][x])/(1 + 2^x) == 0, y[x], x]
What about PDEs? For Version 12.2 we introduced a major new framework for modeling with numerical PDEs. And now in Version 12.3 we’ve produced a whole 105-page monograph about symbolic solutions to PDEs:
Here’s an equation that in Version 12.2 could be solved numerically:
eqns = {Laplacian[u[x, y],{x, y}] == 0, u[x, 0] == Sin[x] && u[0, y] == Sin[y] && u[2, y] == Sin[2 y]}; 
Now it can be solved exactly and symbolically as well:
DSolveValue[eqns, u[x, y], {x, y}] 
In addition to linear PDEs, Version 12.3 extends the coverage of special solutions to nonlinear PDEs. Here’s one (with 4 variables) that uses Jacobi’s method:
DSolveValue[D[u[x, y, z, t], x]^4 == D[u[x, y, z, t], y]^2 + D[u[x, y, z, t], z]^3 D[u[x, y, z, t], t], u[x, y, z, t], {x, y, z, t}]
Something added in 12.3 that both supports PDEs and provides new functionality for signal processing is bilateral Laplace transforms (i.e. integrating from –∞ to +∞, like a Fourier transform):
BilateralLaplaceTransform[Sin[t] Exp[-t^2], t, s]
Ever since Version 1, we’ve prided ourselves on our coverage of special functions. Over the years we’ve been able to progressively extend that coverage to more and more general special functions. Version 12.3 has several new long-sought classes of special functions. There are the Carlson elliptic integrals. And then there is the Fox H-function.
Back in Version 3 (1996) we introduced MeijerG, which dramatically expanded the range of definite integrals that we could do in symbolic form. MeijerG is defined in terms of a Mellin–Barnes integral in the complex plane. FoxH involves only a small change in that integrand, but it’s taken 25 years to unravel the necessary mathematics and algorithms to bring FoxH to Version 12.3.
FoxH is a very general function—that encompasses all hypergeometric pFq and Meijer G functions, and much beyond. And now that FoxH is in our language, we’re able to start the process of expanding our integration and other symbolic capabilities to make use of it.
Back in 1988 one of the features of Mathematica 1.0 that people really liked was the ability to do integrals symbolically. Over the years, we’ve gradually increased the range of integrals that can be done. And a third of a century later—in Version 13.0—we’re delivering another jump forward.
Here’s an integral that couldn’t be done “in closed form” before, but in Version 13.0 it can:
Any integral of an algebraic function can in principle be done in terms of our general DifferentialRoot objects. But the bigger algorithmic challenge is to get a “human-friendly answer” in terms of familiar functions. It’s a fragile business, where a small change in a coefficient can have a large effect on what reductions are possible. But in Version 13.0 there are now many integrals that could previously be done only in terms of special functions, but now give results in elementary functions. Here’s an example:
In Version 12.3 the same integral could be done, but only in terms of elliptic integrals.
As in every new version of the Wolfram Language, Version 13.0 has lots of specific mathematical enhancements. An example is a new, convenient way to get the poles of a function. Here’s a particular function plotted in the complex plane:
And here are the exact poles (and their multiplicities) for this function within the unit circle:
Now we can sum the residues at these poles and use Cauchy’s theorem to get a contour integral:
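The residue theorem being invoked here can be spot-checked numerically. Below is a Python sketch using a hypothetical rational function (not the one from this example): the integral around the unit circle equals 2πi times the sum of the residues of the enclosed poles.

```python
import cmath, math

def contour_integral(f, n=20000):
    """Trapezoidal approximation of the integral of f around the unit circle."""
    total = 0j
    for k in range(n):
        t = 2 * math.pi * k / n
        z = cmath.exp(1j * t)
        dz = 1j * z * (2 * math.pi / n)  # dz = i e^{it} dt
        total += f(z) * dz
    return total

# Hypothetical example: simple poles at 0.3 and 0.5 inside the circle,
# with residues 1/(0.3-0.5) = -5 and 1/(0.5-0.3) = +5, summing to 0...
f = lambda z: 1 / ((z - 0.3) * (z - 0.5))
print(contour_integral(f))               # ~0

# ...while a single pole at 0.3 with residue 1 gives 2*pi*i:
g = lambda z: 1 / (z - 0.3)
print(contour_integral(g), 2j * math.pi)
```

Because the integrand is smooth and periodic on the contour, the trapezoidal rule converges extremely fast, so even this naive sum matches the residue count essentially to machine precision.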
Also in the area of calculus we’ve added various conveniences to the handling of differential equations. For example, we now directly support vector variables in ODEs:
Using our graph theory capabilities we’ve also been able to greatly enhance our handling of systems of ODEs, finding ways to “untangle” them into block-diagonal forms that allow us to find symbolic solutions in much more complex cases than before.
For PDEs it’s typically not possible to get general “closedform” solutions for nonlinear PDEs. But sometimes one can get particular solutions known as complete integrals (in which there are just arbitrary constants, not “whole” arbitrary functions). And now we have an explicit function for finding these:
Turning from calculus to algebra, we’ve added the function PolynomialSumOfSquaresList that provides a kind of “certificate of positivity” for a multivariate polynomial. The idea is that if a polynomial can be decomposed into a sum of squares (and most, but not all, that are never negative can) then this proves that the polynomial is indeed always nonnegative:
And, yes, summing the squares gives the original polynomial again:
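The sum-of-squares certificate idea can be illustrated numerically with a hypothetical polynomial of our own (the original example isn’t reproduced here): 2x² − 2xy + y² decomposes as x² + (x − y)², which certifies that it can never be negative.

```python
import random

def p(x, y):
    # A hypothetical nonnegative polynomial for illustration.
    return 2 * x**2 - 2 * x * y + y**2

def sos(x, y):
    # Its sum-of-squares decomposition: p = x^2 + (x - y)^2.
    return x**2 + (x - y)**2

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(p(x, y) - sos(x, y)) < 1e-9  # the decomposition reproduces p
    assert p(x, y) >= 0                     # the certificate in action

print("decomposition verified at 1000 random points")
```

The point of PolynomialSumOfSquaresList is to find such a decomposition automatically; once you have one, nonnegativity follows immediately, since a sum of squares cannot be negative.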
In Version 13.0 we’ve also added a couple of new matrix functions. There’s Adjugate, which is essentially a matrix inverse, but without dividing by the determinant. And there’s DrazinInverse, which gives the inverse of the nonsingular part of a matrix—as used particularly in solving differential-algebraic equations.
You’ve got a symbolic math expression and you want to figure out its rough value. If it’s a number you just use N to get a numerical approximation. But how do you get a symbolic approximation?
Ever since Version 1.0—and, in the history of math, ever since the 1600s—there’s been the idea of power series: find an essentially polynomiallike approximation to a function, as Series does. But not every mathematical expression can be reasonably approximated that way. It’s difficult math, but it’s very useful if one can make it work. We started introducing “asymptotic approximation” functions for specific cases (like integrals) in Version 11.3, but now in 12.1 we’re introducing the asymptotic superfunction Asymptotic.
Consider this inverse Laplace transform:
InverseLaplaceTransform[1/(s Sqrt[s^3 + 1]), s, t] 
There’s no exact symbolic solution for it. But there is an asymptotic approximation when t is close to 0:
Asymptotic[InverseLaplaceTransform[1/(s Sqrt[s^3 + 1]), s, t], t -> 0]
Sometimes it’s convenient to not even try to evaluate something exactly—but just to leave it inactive until you give it to Asymptotic:
Asymptotic[ DSolveValue[Sin[x]^2 y''[x] + x y[x] == 0, y[x], x], {x, 0, 5}] 
Asymptotic deals with functions of continuous variables. In Version 12.1 there’s also DiscreteAsymptotic. Here we’re asking for the asymptotic behavior of the Prime function:
DiscreteAsymptotic[Prime[n], n -> Infinity]
Or the factorial:
DiscreteAsymptotic[n!, n -> Infinity]
We can ask for more terms if we want:
DiscreteAsymptotic[n!, n -> Infinity, SeriesTermGoal -> 5]
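The factorial asymptotics here are Stirling’s formula; a quick Python sanity check (separate from DiscreteAsymptotic itself) shows the ratio of n! to the leading term approaching 1, with the leading correction being the familiar 1/(12n):

```python
import math

def stirling(n):
    # Leading-order asymptotic: n! ~ sqrt(2 pi n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20, 50):
    ratio = math.factorial(n) / stirling(n)
    print(n, ratio)  # drifts toward 1, roughly like 1 + 1/(12 n)
```

This is exactly the kind of correction term that asking for more terms with SeriesTermGoal makes explicit.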
Sometimes even quite simple functions can lead to quite exotic asymptotic approximations:
DiscreteAsymptotic[BellB[n], n -> Infinity]
It’s a very common calculus exercise to determine, for example, whether a particular function is injective. And it’s pretty straightforward to do this in easy cases. But a big step forward in Version 12.2 is that we can now systematically figure out these kinds of global properties of functions—not just in easy cases, but also in very hard cases. Often there are whole networks of theorems that depend on functions having such-and-such a property. Well, now we can automatically determine whether a particular function has that property, and so whether the theorems hold for it. And that means that we can create systematic algorithms that automatically use the theorems when they apply.
Here’s an example. Is Tan[x] injective? Not globally:
FunctionInjective[Tan[x], x] 
But over an interval, yes:
FunctionInjective[{Tan[x], 0 < x < Pi/2}, x] 
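A crude numeric counterpart in Python gives a feel for the question (an illustration only, nothing like the symbolic decision procedures at work here): a continuous function on an interval is injective exactly when it is strictly monotonic, which sampling can refute but never fully prove.

```python
import math

def probably_injective(f, a, b, samples=1000):
    """Sampling heuristic: a continuous f on [a, b] is injective iff
    strictly monotonic there. Sampling can only refute, not prove."""
    xs = [a + (b - a) * i / (samples - 1) for i in range(samples)]
    ys = [f(x) for x in xs]
    increasing = all(u < v for u, v in zip(ys, ys[1:]))
    decreasing = all(u > v for u, v in zip(ys, ys[1:]))
    return increasing or decreasing

print(probably_injective(math.tan, 0.01, 1.55))  # True: tan rises on (0, pi/2)
print(probably_injective(math.sin, 0.1, 3.0))    # False: sin peaks at pi/2
```

The symbolic version answers the same question with a proof, valid for every point of the interval, which is what makes it usable inside theorem-driven algorithms.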
What about the singularities of Tan[x]? This gives a description of the set:
FunctionSingularities[Tan[x], x] 
You can get explicit values with Reduce:
Reduce[%, x] 
So far, fairly straightforward. But things quickly get more complicated:
FunctionSingularities[ArcTan[x^y], {x, y}, Complexes] 
And there are more sophisticated properties you can ask about as well:
FunctionMeromorphic[Log[z], z] 
FunctionMeromorphic[{Log[z], z > 0}, z] 
We’ve internally used various kinds of function-property testing for a long time. But with Version 12.2 function properties are much more complete and fully exposed for anyone to use. Want to know if you can interchange the order of two limits? Check FunctionSingularities. Want to know if you can do a multivariate change of variables in an integral? Check FunctionInjective.
And, yes, even in Plot3D we’re routinely using FunctionSingularities to figure out what’s going on:
Plot3D[Re[ArcTan[x^y]], {x, -5, 5}, {y, -5, 5}]
Back when one still had to do integrals and the like by hand, it was always a thrill when one discovered that one’s problem could be solved in terms of some exotic “special function” that one hadn’t even heard of before. Special functions are in a sense a way of packaging mathematical knowledge: once you know that the solution to your equation is a Lamé function, that immediately tells you lots of mathematical things about it.
In the Wolfram Language, we’ve always taken special functions very seriously, not only supporting a vast collection of them, but also making it possible to evaluate them to any numerical precision, and to have them participate in a full range of symbolic mathematical operations.
When I first started using special functions about 45 years ago, the book that was the standard reference was Abramowitz & Stegun’s 1964 Handbook of Mathematical Functions. It listed hundreds of functions, some widely used, others less so. And over the years in the development of Wolfram Language we’ve steadily been checking off more functions from Abramowitz & Stegun.
And in Version 13.0 we’re finally done! All the functions in Abramowitz & Stegun are now fully computable in the Wolfram Language. The last functions to be added were the Coulomb wavefunctions (relevant for studying quantum scattering processes). Here they are in Abramowitz & Stegun:
And here’s—as of Version 13—how to get that first picture in Wolfram Language:
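The original code block was not preserved here, but a plausible sketch using the Coulomb wavefunctions added in Version 13 would look something like the following (the particular angular momentum l = 0 and the choice of η values are illustrative assumptions, not the original parameters):

```wolfram
(* Regular Coulomb wavefunction CoulombF[l, \[Eta], \[Rho]],
   plotted for several values of \[Eta] (values chosen for
   illustration, not taken from the original figure) *)
Plot[Evaluate[Table[CoulombF[0, \[Eta], \[Rho]], {\[Eta], {0, 1, 5}}]],
 {\[Rho], 0, 25}, PlotLegends -> {0, 1, 5}]
```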
Of course there’s more to the story, as we can now see:
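Again the original code was lost; since the Wolfram Language versions of these functions work for complex arguments, one way to see "more to the story" is to look at a Coulomb wavefunction over the complex plane (a sketch; the parameter values are illustrative):

```wolfram
(* The functions evaluate for complex arguments, so one can
   explore behavior the real-axis plot doesn't show
   (parameter values are illustrative assumptions) *)
ComplexPlot3D[CoulombF[0, 1, z], {z, -10 - 10 I, 10 + 10 I}]
```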
In using the Wolfram Language the emphasis is usually on what the result of a computation is, not why it is what it is. But in Version 11.3 we introduced FindEquationalProof, which generates proofs of assertions given axioms.
AxiomaticTheory provides a collection of standard axiom systems. One of them is an axiom system for group theory:
axioms = AxiomaticTheory[{"GroupAxioms", "Multiplication" -> p, "Identity" -> e}] 
This axiom system is sufficient to allow proofs of general results about groups. For example, we can show that—even though the axioms only asserted that e is a right identity—it is possible to prove from the axioms that it is also a left identity:
FindEquationalProof[p[e, x] == x, axioms] 
This dataset shows the actual steps in our automatically generated proof:
Dataset[%["ProofDataset"], MaxItems -> {6, 1}] 
But if you want to prove a result not about groups in general, but about a specific finite group, then you need to add to the axioms the particular defining relations for your group. You can get these from FiniteGroupData—which has been much extended in 12.1. Here are the axioms for the quaternion group, given in a default notation:
FiniteGroupData["Quaternion", "DefiningRelations"] 
To use these axioms in FindEquationalProof, we need to merge their notation with the notation we use for the underlying group axioms. In Version 12.1, you can do this directly in AxiomaticTheory:
AxiomaticTheory[{"GroupAxioms", "Quaternion", "Multiplication" -> p, "Identity" -> e}] 
But to use the most common notation for quaternions, we have to specify a little more:
AxiomaticTheory[{"GroupAxioms", "Quaternion", <|"Multiplication" -> p, "Inverse" -> inv, "Identity" -> e, "Generators" -> {i, j}|>}] 
But now we can prove theorems about the quaternions. This generates a 54-step proof that the 4th power of the generator we have called i is the identity:
FindEquationalProof[p[i, p[i, p[i, i]]] == e, %] 
In addition to doing mathematical proofs, we can now use FindEquationalProof in Version 12.1 to do general proofs with arbitrary predicates (or, more specifically, general first-order logic). Here’s a famous example of a syllogism, based on the predicates mortal and man. FindEquationalProof gives a proof of the assertion that Socrates is mortal:
FindEquationalProof[ mortal[socrates], {ForAll[x, Implies[man[x], mortal[x]]], man[socrates]}] 
I think it’s pretty neat that this is possible, but it must be admitted that the actual proof generated (which is 53 steps long in this case) is a bit hard to read, not least because it involves conversion to equational logic.
Still, FindEquationalProof can successfully automate lots of proofs. Here it’s solving a logic puzzle given by Lewis Carroll, establishing (here with a 100-step proof) that babies cannot manage crocodiles:
FindEquationalProof[ Not[Exists[x, And[baby[x], manageCrocodile[x]]]], {ForAll[x, Implies[baby[x], Not[logical[x]]]], ForAll[x, Implies[manageCrocodile[x], Not[despised[x]]]], ForAll[x, Implies[Not[logical[x]], despised[x]]]}] 
One can say that the whole idea of symbolic expressions (and their transformations) on which we rely so much in the Wolfram Language originated with combinators—which just celebrated their centenary on December 7, 2020. The version of symbolic expressions that we have in Wolfram Language is in many ways vastly more advanced and usable than raw combinators. But in Version 12.2—partly by way of celebrating combinators—we wanted to add a framework for raw combinators.
So now for example we have CombinatorS, CombinatorK, etc., rendered appropriately:
CombinatorS[CombinatorK] 
But how should we represent the application of one combinator to another? Today we write something like:
f@g@h@x 
But in the early days of mathematical logic there was a different convention—one involving left-associative application, in which applying functions to things was expected, “combinator style”, to generate “functions” rather than “values”. So in Version 12.2 we’re introducing a new “application operator” Application, displayed with its own operator glyph (and entered as \[Application] or Esc ap Esc):
Application[f, Application[g, Application[h, x]]] 
Application[Application[Application[f, g], h], x] 
And, by the way, I fully expect Application—as a new, basic “constructor”—to have a variety of uses (not to mention “applications”) in setting up general structures in the Wolfram Language.
The rules for combinators are trivial to specify using pattern transformations in the Wolfram Language:
{CombinatorS\[Application]x_\[Application]y_\[Application]z_ :> x\[Application]z\[Application](y\[Application]z), CombinatorK\[Application]x_\[Application]y_ :> x} 
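Repeatedly applying these rules with //. then reduces combinator expressions. For example (a sketch, assuming the Version 12.2 left-associative parsing of \[Application]; the symbol a is an arbitrary placeholder), S K K applied to a reduces to a, since S K K acts as the identity combinator:

```wolfram
(* The standard S and K reduction rules from above *)
rules = {CombinatorS\[Application]x_\[Application]y_\[Application]z_ :>
    x\[Application]z\[Application](y\[Application]z),
   CombinatorK\[Application]x_\[Application]y_ :> x};

(* S K K a -> K a (K a) -> a *)
CombinatorS\[Application]CombinatorK\[Application]CombinatorK\[Application]a //. rules
```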
But one can also think about combinators more “algebraically” as defining relations between expressions—and there’s now a theory in AxiomaticTheory for that.
And in 12.2 a few other theories have been added to AxiomaticTheory as well, along with several new properties.