
New in 13: Core Language

Two years ago we released Version 12.0 of the Wolfram Language. Here are the updates in the core language since then, including the latest features in 13.0. The contents of this post are compiled from Stephen Wolfram’s Release Announcements for 12.1, 12.2, 12.3 and 13.0.

 

Functional Programming Adverbs & More (March 2020)

We’re always working to make the Wolfram Language easier and more elegant to use, and Version 12.1 contains the latest in an idea we’ve been developing for symbolic functional programming. If you think of a built-in function as a verb, what we’re adding are adverbs: constructs that modify the operation of the verb.

A first example is OperatorApplied. Here’s the basic version of what it does:


OperatorApplied[f][x][y]

Why is this useful? Many functions have “operator forms”. For example, instead of


Select[{1, 2, 3, 4}, PrimeQ]

you can say


Select[PrimeQ][{1, 2, 3, 4}]

and that means you can just “pick up” the modified function and do things with it:


Map[Select[PrimeQ], {{6, 7, 8}, {11, 12, 13, 14}}]

or (using the operator form of Map):


Map[Select[PrimeQ]][{{6, 7, 8}, {11, 12, 13, 14}}]

OK, so what does OperatorApplied do? Basically it lets you create an operator form of any function.

Let’s say you have a function f that—like Select—usually takes two arguments. Well, then


OperatorApplied[f][y]

is a function that takes a single argument, and forms f[x,y] from it:


OperatorApplied[f][y][x]

OperatorApplied allows for some elegant programming, and often lets one avoid having to insert pure functions with # and & etc.
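
For example (an illustrative use), instead of mapping the pure function Append[#, x] & over a list of lists, you can map an operator form directly:

Map[OperatorApplied[Append][x], {{1, 2}, {3, 4}}]

Since OperatorApplied[f][y][x] gives f[x, y], each sublist here gets x appended.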

At first, OperatorApplied may seem like a very abstract “higher-order” construct. But it quickly becomes natural, and is particularly convenient when, for example, one has to provide a function for something—as a setting for an option, as the first argument to Outer, and so on.

By default, OperatorApplied[f][y] creates an operator form to be applied to an expression which will become the first argument of f. There’s a generalized form in which one specifies exactly how arguments should be knitted together, as in:


OperatorApplied[f, 4 -> {3, 2, 1, 4}][x][y][z][u][v]

CurryApplied is in a sense a “purer” variant of OperatorApplied, in which one specifies up front the number of arguments to expect, and then (unless specified otherwise) these arguments are always used in the order they appear. So, for example, this makes a function that expects two arguments:


CurryApplied[f, 2][x][y]


CurryApplied[f, 2][x][y][z][u][v]

Needless to say—given that it’s a purer construct—CurryApplied is itself curryable: it has an operator form in which one just gives the number of arguments to expect:


CurryApplied[2][f][x][y][z][u][v]

In Version 12.1, there’s another convenient adverb that we’ve introduced: ReverseApplied. As its name suggests, it specifies that a function should be applied in a reverse way:


ReverseApplied[f][x, y, z]

This is particularly convenient when you’re doing things like specifying sorting functions:


Sort[{5, 6, 1, 7, 3, 7, 3}, ReverseApplied[NumericalOrder]]

All of this symbolic functional programming emphasizes the importance of thinking about symbolic expressions structurally. And one new function to help with this is ExpressionGraph, which turns the tree structure (think TreeForm) of an expression into an actual graph that can be manipulated:


ExpressionGraph[{{a, b}, {c, d, e}}]


ExpressionGraph[{{a, b}, {c, d, e}}, VertexLabels -> Automatic]

While we’re talking about the niceties of programming, one additional feature of Version 12.1 is TimeRemaining, which, as the name suggests, tells you how much time you have left in a computation before a time constraint hits you. So, for example, here TimeConstrained allocated 5 seconds to the computation; after Pause[1] used about 1 second, a little less than 4 seconds remained:


TimeConstrained[Pause[1]; TimeRemaining[], 5]

If you’re writing sophisticated code, it’s very useful to be able to find out how much “temporal headroom” you have, to see for example whether it’s worth trying a different strategy, etc.
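
Here’s a sketch of that pattern (expensiveSearch and quickEstimate are hypothetical placeholders for two strategies):

TimeConstrained[
 If[TimeRemaining[] > 2, expensiveSearch[], quickEstimate[]], 5]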

Language Innovations & Extensions (March 2020)

Nothing (first introduced in Version 10.1) has been a big success:


{a, b, Nothing, c, Nothing}

Before Nothing, you always had to poke at a list from the outside to get elements in it deleted. But Nothing is a symbolic way of specifying deletion that “works from the inside”.

Pretty much as soon as we’d invented Nothing, we realized we also wanted another piece of functionality: a symbolic object that would somehow automatically disgorge its contents into a list. People had been using idioms like Sequence@@… to do this. But Sequence is a slippery construct, and this idiom is fragile and ugly.

The functionality of our auto-inserter was easy to define. But what were we going to call it? For several years this very useful piece of functionality languished for want of a name. It came up several times in our livestreamed design reviews. Every time we would discuss it for a while—and often our viewers would offer good suggestions. But we were never happy with the name.

Finally, though, we decided we had to solve the problem. It was a painful naming process, culminating in a 90-minute livestream whose net effect was a change in one letter in the name. But in the end, we’re pretty happy with the name: Splice. Splice is a splice, like for film, or DNA—and it’s something that gets inserted. So now, as of Version 12.1 we have it:


{a, b, Splice[{x, y, z}], c, d}

Of course, the more common case is something like:


{a, b, x, c, d} /. x -> Splice[{p, q, r}]

There are a lot of strange (and potentially buggy) Flatten operations that can now be avoided thanks to Splice.

One of the things we’re always trying to do in developing Wolfram Language is to identify important “lumps of computation” that we can conveniently encapsulate into functions (and where we can give those functions good names!). In Version 12.1 there’s a family of new functions that handle computations around subsets of elements in lists:


SubsetCases[{a, b, a, b, a, c}, {x_, y_, x_}, Overlaps -> True]

I must have written special cases of these functions a zillion times. But now we’ve got general functions that anyone can just use. These functions come up in lots of places. And actually we first implemented general versions of them in connection with semantic-query-type computations.

But on the theory that any sufficiently well-designed function eventually gets a very wide range of uses, I can report that I’ve recently found a most unexpected but spectacular use for SubsetReplace in the context of fundamental physics. But much more on that in a little while…
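
As a small illustration of this family of functions, SubsetReplace applies a rule to (by default disjoint) subsets of elements matching a pattern:

SubsetReplace[{1, 2, 3, 4, 5}, {x_, y_} :> f[x, y]]

yielding {f[1, 2], f[3, 4], 5} here.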

Talking about physics brings me to something else in 12.1: new functions for handling time. DateInterval now provides a symbolic representation for an interval of time. And there’s an interesting algebra of ordering that needs to be defined for it. Which includes the need for the symbols InfinitePast and InfiniteFuture:


Today < InfiniteFuture
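
And a DateInterval itself can be constructed from a pair of dates (the particular dates here are arbitrary):

DateInterval[{DateObject[{2021, 1, 1}], DateObject[{2021, 3, 31}]}]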

More Array Gymnastics: Column Operations and Their Generalizations (December 2020)

Let’s say you’ve got an array, like:


{{a, b, c, d}, {x, y, z, w}} // MatrixForm

Map lets you map a function over the “rows” of this array:


Map[f, {{a, b, c, d}, {x, y, z, w}}]

But what if you want to operate on the “columns” of the array, effectively “reducing out” the first dimension of the array? In Version 12.2 the function ArrayReduce lets you do this:


ArrayReduce[f, {{a, b, c, d}, {x, y, z, w}}, 1]

Here’s what happens if instead we tell ArrayReduce to “reduce out” the second dimension of the array:


ArrayReduce[f, {{a, b, c, d}, {x, y, z, w}}, 2]

What’s really going on here? The array has dimensions 2×4:


Dimensions[{{a, b, c, d}, {x, y, z, w}}]

ArrayReduce[f, ..., 1] “reduces out” the first dimension, leaving an array with dimensions {4}. ArrayReduce[f, ..., 2] reduces out the second dimension, leaving an array with dimensions {2}.

Let’s look at a slightly bigger case—a 2×3×4 array:


array = ArrayReshape[Range[24], {2, 3, 4}]

This now eliminates the “first dimension”, leaving a 3×4 array:


ArrayReduce[f, array, 1]


Dimensions[%]

This, on the other hand, eliminates the “second dimension”, leaving a 2×4 array:


ArrayReduce[f, array, 2]


Dimensions[%]

Why is this useful? One example is when you have arrays of data where different dimensions correspond to different attributes, and then you want to “ignore” a particular attribute, and aggregate the data with respect to it. Let’s say that the attribute you want to ignore is at level n in your array. Then all you do to “ignore” it is to use ArrayReduce[f, ..., n], where f is the function that aggregates values (often something like Total or Mean).
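
As a concrete illustration, here’s Mean “aggregating out” the second dimension of the 2×3×4 array from above, leaving a 2×4 array of averages:

ArrayReduce[Mean, array, 2]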

You can achieve the same results as ArrayReduce by appropriate sequences of Transpose, Apply, etc. But it’s quite messy, and ArrayReduce provides an elegant “packaging” of these kinds of array operations.
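
For instance, for the 2×4 array above, the columnwise case ArrayReduce[f, ..., 1] corresponds to:

Map[f, Transpose[{{a, b, c, d}, {x, y, z, w}}]]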

ArrayReduce is quite general; it lets you not only “reduce out” single dimensions, but whole collections of dimensions:


ArrayReduce[f, array, {2, 3}]


ArrayReduce[f, array, {{2}, {3}}]

At the simplest level, ArrayReduce is a convenient way to apply functions “columnwise” on arrays. But in full generality it’s a way to apply functions to subarrays with arbitrary indices. And if you’re thinking in terms of tensors, ArrayReduce is a generalization of contraction, in which more than two indices can be involved, and elements can be “flattened” before the operation (which doesn’t have to be summation) is applied.

Cleaning Up After Your Code (December 2020)

You run a piece of code and it does what it does—and typically you don’t want it to leave anything behind. Often you can use scoping constructs like Module, Block, BlockRandom, etc. to achieve this. But sometimes there’ll be something you set up that needs to be explicitly “cleaned up” when your code finishes.

For example, you might create a file in your piece of code, and want the file removed when that particular piece of code finishes. In Version 12.2 there’s a convenient new function for managing things like this: WithCleanup.

WithCleanup[expr, cleanup] evaluates expr, then cleanup—but returns the result from expr. Here’s a trivial example (which could really be achieved better with Block). You’re assigning a value to x, getting its square—then clearing x before returning the square:


WithCleanup[x = 7; x^2, Clear[x]]

It’s already convenient just to have a construct that does cleanup while still returning the main expression you were evaluating. But an important detail of WithCleanup is that it also handles the situation where you abort the main evaluation you were doing. Normally, issuing an abort would cause everything to stop. But WithCleanup is set up to make sure that the cleanup happens even if there’s an abort. So if the cleanup involves, for example, deleting a file, the file gets deleted, even if the main operation is aborted.
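
Here’s a sketch of the file case (the particular file operations are just for illustration):

WithCleanup[
 file = CreateFile[];
 Put[Range[10], file]; (* do some work involving the file *)
 Get[file],
 DeleteFile[file] (* runs even if the work above is aborted *)]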

WithCleanup also allows an initialization to be given. So here the initialization is done, as is the cleanup, but the main evaluation is aborted:


WithCleanup[Echo[1], Abort[]; Echo[2], Echo[3]]

By the way, WithCleanup can also be used with Confirm/Enclose to ensure that even if a confirmation fails, certain cleanup will be done.
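
Here’s a sketch of that combination, in which the cleanup runs even though the confirmation fails (the file handling is again just illustrative):

Enclose[WithCleanup[file = CreateFile[];
  Confirm[$Failed], DeleteFile[file]]]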

Function Robustification (December 2020)

Confirm/Enclose provide a good high-level way to handle the “flow” of things going wrong inside a program or a function. But what if there’s something wrong right at the get-go? In our built-in Wolfram Language functions, there’s a standard set of checks we apply. Is the number of arguments correct? If there are options, are they allowed options, and are they in the correct place? In Version 12.2 we’ve added two functions that can perform these standard checks for functions you write.

This says that f should have two arguments, which here it doesn’t:


CheckArguments[f[x, y, z], 2]

Here’s a way to make CheckArguments part of the basic definition of a function:


f[args___] := Null /; CheckArguments[f[args], 2] 

Give it the wrong number of arguments, and it’ll generate a message, and then return unevaluated, just like lots of built-in Wolfram Language functions do:


f[7]

ArgumentsOptions is another new function in Version 12.2; it separates “positional arguments” from options in a function. Set up options for a function:


Options[f] = {opt -> Automatic};

This expects one positional argument, which it finds:


ArgumentsOptions[f[x, opt -> 7], 1]

If it doesn’t find exactly one positional argument, it generates a message:


ArgumentsOptions[f[x, y], 1]

Confirm/Enclose: Symbolic Exception Handling (December 2020)

Did something go wrong inside your program? And if so, what should the program do? It can be possible to write very elegant code if one ignores such things. But as soon as one starts to put in checks, and has logic for unwinding things if something goes wrong, it’s common for the code to get vastly more complicated, and vastly less readable.

What can one do about this? Well, in Version 12.2 we’ve developed a high-level symbolic mechanism for handling things going wrong in code. Basically the idea is that you insert Confirm (or related functions)—a bit like you might insert Echo—to “confirm” that something in your program is doing what it should. If the confirmation works, then your program just keeps going. But if it fails, then the program stops—and exits to the nearest enclosing Enclose. In a sense, Enclose “encloses” regions of your program, not letting anything that goes wrong inside immediately propagate out.

Let’s see how this works in a simple case. Here the Confirm successfully “confirms” y, just returning it, and the Enclose doesn’t really do anything:


Enclose[f[x, Confirm[y], z]]

But now let’s put $Failed in place of y. $Failed is something that Confirm by default considers to be a problem. So when it sees $Failed, it stops, exiting to the Enclose—which in turn yields a Failure object:


Enclose[f[x, Confirm[$Failed], z]]

If we put in some echoes, we’ll see that x is successfully reached, but z is not; as soon as the Confirm fails, it stops everything:


Enclose[f[Echo[x], Confirm[$Failed], Echo[z]]]

A very common thing is to want to use Confirm/Enclose when you define a function:


addtwo[x_] := Enclose[Confirm[x] + 2]

Use argument 5 and everything just works:


addtwo[5]

But if we instead use Missing[]—which Confirm by default considers to be a problem—we get back a Failure object:


addtwo[Missing[]]

We could achieve the same thing with If, Return, etc. But even in this very simple case, it wouldn’t look as nice.

Confirm has a certain default set of things that it considers “wrong” ($Failed, Failure[...], Missing[...] are examples). But there are related functions that allow you to specify particular tests. For example, ConfirmBy applies a function to test if an expression should be confirmed.

Here, ConfirmBy confirms that 2 is a number:


Enclose[f[1, ConfirmBy[2, NumberQ], 3]]

But x is not considered so by NumberQ:


Enclose[f[1, ConfirmBy[x, NumberQ], 3]]

OK, so let’s put these pieces together. Let’s define a function that’s supposed to operate on strings:


world[x_] := Enclose[ConfirmBy[x, StringQ] <> " world!"]

If we give it a string, all is well:


world["hello"]

But if we give it a number instead, the ConfirmBy fails:


world[4]

But here’s where really nice things start to happen. Let’s say we want to map world over a list, always confirming that it gets a good result. Here everything is OK:


Enclose[Confirm[world[#]] & /@ {"a", "b", "c"}]

But now something has gone wrong:


Enclose[Confirm[world[#]] & /@ {"a", "b", 3}]

The ConfirmBy inside the definition of world failed, causing its enclosing Enclose to produce a Failure object. Then this Failure object caused the Confirm inside the Map to fail, and the enclosing Enclose gave a Failure object for the whole thing. Once again, we could have achieved the same thing with If, Throw, Catch, etc. But Confirm/Enclose do it more robustly, and more elegantly.

These are all very small examples. But where Confirm/Enclose really show their value is in large programs, and in providing a clear, high-level framework for handling errors and exceptions, and defining their scope.

In addition to Confirm and ConfirmBy, there’s also ConfirmMatch, which confirms that an expression matches a specified pattern. Then there’s ConfirmQuiet, which confirms that the evaluation of an expression doesn’t generate any messages (or, at least, none that you told it to test for). There’s also ConfirmAssert, which simply takes an “assertion” (like p>0) and confirms that it’s true.
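
Here are minimal illustrations of each (the particular expressions are just for demonstration); the first confirmation succeeds, while the other two fail and yield Failure objects:

Enclose[ConfirmMatch[{1, 2, 3}, {__Integer}]]

Enclose[ConfirmQuiet[1/0]]

Enclose[ConfirmAssert[2 > 3]]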

When a confirmation fails, the program always exits to the nearest enclosing Enclose, delivering to the Enclose a Failure object with information about the failure that occurred. When you set up the Enclose, you can tell it how to handle failure objects it receives—either just returning them (perhaps to enclosing Confirm’s and Enclose’s), or applying functions to their contents.
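
For example, with the two-argument form Enclose[expr, f], the function f gets applied to the Failure object that arrives:

Enclose[Confirm[$Failed], f]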

Confirm and Enclose provide an elegant mechanism for handling errors, that are easy and clean to insert into programs. But—needless to say—there are definitely some tricky issues around them. Let me mention just one. The question is: which Confirm’s does a given Enclose really enclose? If you’ve written a piece of code that explicitly contains Enclose and Confirm, it’s pretty obvious. But what if there’s a Confirm that’s somehow generated—perhaps dynamically—deep inside some stack of functions? It’s similar to the situation with named variables. Module just looks for the variables directly (“lexically”) inside its body. Block looks for variables (“dynamically”) wherever they may occur. Well, Enclose by default works like Module, “lexically” looking for Confirm’s to enclose. But if you include tags in Confirm and Enclose, you can set them up to “find each other” even if they’re not explicitly “visible” in the same piece of code.

Watch Your Code Run: More in the Echo Family (December 2020)

It’s an old adage in debugging code: “put in a print statement”. But it’s more elegant in the Wolfram Language, thanks particularly to Echo. It’s a simple idea: Echo[expr] “echoes” (i.e. prints) the value of expr, but then returns that value. So the result is that you can put Echo anywhere into your code (often as Echo@…) without affecting what your code does.
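
A minimal illustration: the intermediate list gets printed, and the computation proceeds exactly as if Echo weren’t there:

Total[Echo[Range[5]]]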

In Version 12.2 there are some new functions that follow the “Echo” pattern. A first example is EchoLabel, which just adds a label to what’s echoed:


EchoLabel["a"]@5! + EchoLabel["b"]@10!

Aficionados might wonder why EchoLabel is needed. After all, Echo itself allows a second argument that can specify a label. The answer—and yes, it’s a mildly subtle piece of language design—is that if one’s going to just insert Echo as a function to apply (say with @), then it can only have one argument, so no label. EchoLabel is set up to have the operator form EchoLabel[label] so that EchoLabel[label][expr] is equivalent to Echo[expr,label].

Another new “echo function” in 12.2 is EchoTiming, which displays the timing (in seconds) of whatever it evaluates:


Table[Length[EchoTiming[Permutations[Range[n]]]], {n, 8, 10}]

It’s often helpful to use both Echo and EchoTiming:


Length[EchoTiming[Permutations[Range[Echo@10]]]]

And, by the way, if you always want to print evaluation time (just like Mathematica 1.0 did by default 32 years ago), you can globally set $Pre = EchoTiming.

Another new “echo function” in 12.2 is EchoEvaluation, which echoes the “before” and “after” of an evaluation:


EchoEvaluation[2 + 2]

You might wonder what happens with nested EchoEvaluation’s. Here’s an example:


EchoEvaluation[
 Accumulate[EchoEvaluation[Reverse[EchoEvaluation[Range[10]]]]]]

By the way, it’s quite common to want to use both EchoTiming and EchoEvaluation:


Table[EchoTiming@EchoEvaluation@FactorInteger[2^(50 n) - 1], {n, 2}]

Finally, if you want to leave echo functions in your code, but want your code to “run quiet”, you can use the new QuietEcho to “quiet” all the echoes (like Quiet “quiets” messages):


QuietEcho@
 Table[EchoTiming@EchoEvaluation@FactorInteger[2^(50 n) - 1], {n, 2}]

Lots of Little New Conveniences (May 2021)

What should “just work”? What should be made easier? Ever since Version 1.0 we’ve been working hard to figure out what little conveniences we can add to make the Wolfram Language ever more streamlined.

Version 12.3 has our latest batch of conveniences, scattered across many parts of the language. A new dynamic that’s emerged in this version is functions that have essentially been “prototyped” in the Wolfram Function Repository, and then “upgraded” to be built into the system.

Here’s a first example of a new convenience function: SolveValues. The function Solve—originally introduced in Version 1.0—has a very flexible way of representing its results that allows for different numbers of variables, different numbers of solutions, etc.


Solve[x^2 + 3 x + 1 == 0, x]

But often you’re happy to assume a fixed structure for the solution, and you just want to know the values of variables. And that’s what SolveValues gives:


SolveValues[x^2 + 3 x + 1 == 0, x]

By the way, there’s also an NSolveValues that gives approximate numerical values:


NSolveValues[x^2 + 3 x + 1 == 0, x]

Another example of a new convenience function is NumberDigit. Let’s say you want the 10th digit of π. You can always use RealDigits and then pick out the digit you want:


RealDigits[Pi, 10, 20]

But now you can also just use NumberDigit (where now by “10th digit” we’re assuming you mean the coefficient of 10^-10):


NumberDigit[Pi, -10]

Back in Version 1.0, we just had Sort. In Version 10.3 (2015) we added AlphabeticSort, and then in Version 11.1 (2017) we added NumericalSort. Now in Version 12.3—to round out this family of default types of sorting—we’re adding LexicographicSort. The default sort order (as produced by Sort) is:


Subsets[{a, b, c, d}]

But here’s true lexicographic order, like you would find in a dictionary:


LexicographicSort[Subsets[{a, b, c, d}]]

Another small new function in Version 12.3 is StringTakeDrop:


StringTakeDrop["abcdefghijklmn", {2, 5}]

Having this as a single function makes it easier to use in functional programming constructs like this:


FoldPairList[StringTakeDrop, "abcdefghijklmn", {2, 3, 4, 5}]

It’s always an important goal to make “standard workflows” as straightforward as possible. For example, in handling graphs we’ve had VertexOutComponent since Version 8.0 (2010). It gives a list of the vertices that can be reached from a given vertex. And for some things that’s exactly what one wants. But sometimes it’s much more convenient to get the subgraph (and in fact in the formalism of our Physics Project that subgraph—that we view as a “geodesic ball”—is a rather central construct). So in Version 12.3 we’ve added VertexOutComponentGraph:


VertexOutComponentGraph[CloudGet["http://wolfr.am/VAs5QDwv"], 10, 4]

Another example of a small “workflow improvement” is in HighlightImage. HighlightImage typically takes a list of regions of interest to highlight in the image. But functions like MorphologicalComponents don’t produce lists of regions; instead they produce a “label matrix” in which numbers label the different regions of an image. So to make the HighlightImage workflow smoother, in Version 12.3 we let it directly use a label matrix, assigning different colors to the differently labeled regions:


HighlightImage[
 CloudGet["http://wolfr.am/VAs6lxUj"], MorphologicalComponents]

One thing we work hard to ensure in the Wolfram Language is coherence and interoperability. (And in fact, we have a whole initiative around this that we call “Language Completeness & Consistency”, whose weekly meetings we regularly livestream.) One of the various facets of interoperability is that we want functions to be able to “eat” any reasonable input and turn it into something they can “naturally” handle.

And as a small example of this, something we added in Version 12.3 is automatic conversion between color spaces. Red by default means the RGB color red (RGBColor[1,0,0]). But now


Hue[Red]

turns that red into the corresponding color in hue space:


Hue[Red] // InputForm

Let’s say you’re running a long computation. You often want to get some indication of the progress that’s being made. In Version 6.0 (2007) we added Monitor, and in subsequent versions we’ve added automatic built-in progress reporting for some functions, for example NetTrain. But now we have an initiative underway to systematically add progress reporting for all sorts of functions that can end up doing long computations. ($ProgressReporting = False globally switches it off.)


VideoMap[ColorConvert[#Image, "Grayscale"] &, 
 Video["ExampleData/bullfinch.mkv"]]

We work hard in Wolfram Language to make sure that we pick good defaults, for example for how to display things. But sometimes you have to tell the system what kind of “look” you want. And in Version 12.3 we’ve added the option DatasetTheme to specify “themes” for how Dataset objects should be displayed.

Underneath, each theme is just setting specific options, which you could set yourself. But the theme “bank switches” these options in a convenient way. Here’s a basic dataset with default formatting:


Dataset[IdentityMatrix[6]]

Here it is looking more “lively” for the web:


Dataset[IdentityMatrix[6], DatasetTheme -> "Web"]

You can give various “theme directives” too:


Dataset[IdentityMatrix[6], 
 DatasetTheme -> "AlternatingColumnBackgrounds"]

As well as additional hints:


Dataset[IdentityMatrix[6], 
 DatasetTheme -> {"AlternatingColumnBackgrounds", LightGreen}]

I’m not sure why we didn’t think of it before, but in Version 11.3 (2018) we introduced a very nice “user interface innovation”: Iconize. And in Version 12.3 we’ve added another piece of polish to iconization. If you select a piece of an expression, then use Iconize in the context (“right-click”) menu, an appropriate subpart of the expression will get iconized, even if the selection you made might have included an extra comma, or been something that can’t be a strict subpart of the expression:


Range[20]

Let’s say you generate an object that takes a lot of memory to store:


SparseArray[Range[10^7]]

By default, the object is kept in your kernel session, but it’s not stored directly in your notebook—so it won’t persist after you end your current kernel session. In Version 12.3 we’ve added some options for where you can store the data:


SparseArray[Range[10^7]]

One important area where we put lots of effort into making things “just work” is in importing and exporting of data. The Wolfram Language now supports about 250 external data formats, with for example new statistical data formats like SAS7BDAT, DTA, POR and SAV being added in Version 12.3.

Lots of Things Got Faster (May 2021)

In addition to all the effort we put into creating new functionality for Wolfram Language, we’re also always trying to make existing functionality better, and faster. And in Version 12.3 there are lots of things that are now faster. One particularly large group of things got faster because of advances in our compiler technology that allow a broader range of Wolfram Language functionality to be compiled directly into optimized machine code. An example of a beneficiary of this is Around.

Here’s a computation with Around:


Sin[Around[RandomReal[], 0.001]]

In Version 12.2 doing this 10,000 times takes about 1.3 seconds on my computer:


Do[Sin[Around[RandomReal[], 0.001]], 10^4] // Timing

In Version 12.3, it’s roughly 100 times faster:


Do[Sin[Around[RandomReal[], 0.001]], 10^4] // Timing

There are lots of different reasons that things got faster in Version 12.3. In the case of Permanent, for example, we were able to use a new and much better algorithm. Here it is in 12.2:


Permanent[Table[2.3 i/j, {i, 15}, {j, 15}]] // Timing

And now in 12.3:


Permanent[Table[2.3 i/j, {i, 15}, {j, 15}]] // Timing

Another example is date parsing: converting dates from textual to internal form. The main advance here came from realizing that date parsing is often done in bulk, so it makes sense to adaptively cache parts of the operation. And the result, for example in parsing a million dates, is that what used to take many minutes now takes just a few seconds.

One more example is Rasterize, which in Version 12.3 is typically 2 to 4 times faster than in Version 12.2. The reason for this speedup is somewhat subtle. Back when Rasterize was first introduced in Version 6.0 (2007) data transfer speeds between processes were an issue, and so it was a good optimization to compress any data being transferred. But today transfer speeds are much higher, and we have better optimized array data structures—and so compression no longer makes sense, and removing it (together with other codepath optimization) allows Rasterize to be significantly faster.

An important advance in Version 12.1 was the introduction of DataStructure, allowing direct use of optimized data structures (implemented through our new compiler technology). Version 12.3 introduces several new data structures. There’s "ByteTrie" for fast prefix-based lookups (think Autocomplete and GenomeLookup), and there’s "KDTree" for fast geometric lookups (think Nearest). There’s also now "ImmutableVector", which is basically like an ordinary Wolfram Language list, except that it’s optimized for fast appending.
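
All of these follow the general DataStructure pattern from 12.1: create a structure by name, then operate on it with method-like calls ("Stack" here is one of the original 12.1 structures, used just to show the pattern):

ds = CreateDataStructure["Stack"];
ds["Push", 42];
ds["Pop"]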

In addition to speed improvements in the computational kernel, Version 12.3 has user interface speed improvements too. Particularly notable is significantly faster rendering on Windows platforms, achieved by using DirectWrite and making use of GPU capabilities.

Another Kind of Number (December 2021)

One might think that a number is just a number. And that’s basically true for integers. But when a number is a real number the story is more complicated. Sometimes you can “name” a real number symbolically, say π. But most real numbers don’t have “symbolic names”. And to specify them exactly you’d have to give an infinite number of digits, or the equivalent. And the result is that one ends up wanting to have approximate real numbers that one can think of as representing certain whole collections of actual real numbers.

A straightforward way of doing this is to use finite-precision numbers, as in:

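N[Sqrt[2], 30] (* an illustrative choice: 30 digits of precision *)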


Another approach—introduced in Version 12.0—is Around, which in effect represents a distribution of numbers “randomly distributed” around a given number:

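Around[7.52, 0.43] (* illustrative values: 7.52 with uncertainty 0.43 *)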


When you do operations on Around numbers the “errors” are combined using a certain calculus of errors that’s effectively based on Gaussian distributions—and the results you get are always in some sense statistical.

But what if you want to use approximate numbers, but still get provable results? One approach is to use Interval. But a more streamlined approach now available in Version 13.0 is to use CenteredInterval. Here’s a CenteredInterval used as input to a Bessel function:

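BesselJ[2, CenteredInterval[5.0, 0.001]] (* an illustrative input; CenteredInterval[center, radius] *)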


You can prove things in the Wolfram Language in many ways. You can use Reduce. You can use FindEquationalProof. And you can use CenteredInterval—which in effect leverages numerical evaluation. Here’s a function that has complicated transcendental roots:

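f[x_] := Sin[x] + x/10 + 1 (* an illustrative function; its real roots are transcendental *)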


Can we prove that the function is above 0 between 3 and 4? Let’s evaluate the function over a centered interval there:

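f[CenteredInterval[3.5, 0.5]] (* a centered interval covering all of [3, 4] *)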


Now we can check that indeed “all of this interval” is greater than 0:

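Min[%] > 0 (* assuming Min gives the lower end of the enclosure, as with Interval *)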


And from the “worst-case” way the interval was computed this now provides a definite theorem.