Congratulations! Now what? Revisions, of course! And we, the kindly Wolfram Blog Team, are here to get you through your revisions with a little help from the Wolfram Language.

By combining the Wolfram Language’s text analysis tools with the Wolfram Knowledgebase’s collection of public-domain novels by authors like Jane Austen and James Joyce, we’ve come up with a few things to help you reflect on your work and see how you measure up to some of the greats.

Literary scholars have been using computational thinking to explore things like genre and emotion for years. By working with large amounts of data, this research illuminates patterns across the mass of published novels, giving us a bigger picture than we could ever assemble by reading individual works.

“That’s all well and good,” you might say, “but what about my great (and scandalously unread) dragon bildungsroman?” Well, you’re in luck! You can apply the principles of computational thinking to your writing as well by using the Wolfram Language to help you revise.

Many writers have things about their writing that they would like to improve. It might be a tendency to overuse adjectives or a penchant for bird metaphors. If you already know your writing tics, it’s easy to find them using your word processor. But what if you don’t know what you’re looking for?

A great way to find unknown writing tics is to use `WordCloud`, which can help you visualize words’ frequencies and relative importance. We can test this method on Herman Melville’s classic *Moby-Dick*.

We start by pulling up a list of Melville’s notable books.

Then we use `WordCloud` to create a visualization of word frequency in one of his novels—say, *Moby-Dick*.
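Here's a minimal sketch of the word cloud step, using the copy of *Moby-Dick* that ships with `ExampleData` (the original approach pulled the text from the Wolfram Knowledgebase instead); `DeleteStopwords` keeps words like "the" and "of" from dominating the cloud:

```mathematica
(* word cloud of Moby-Dick, with common stopwords removed first *)
WordCloud[DeleteStopwords[ExampleData[{"Text", "MobyDick"}]]]
```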

Many of the most frequent words are no surprise: the titular Moby Dick is a whale, and the narrator reflects frequently on the obsessed ship captain Ahab. But notice something interesting: the word “like” shows up disproportionately—even more than key words such as “ship,” “sea” and “man.” And by inspecting the places where Melville uses the word “like,” we can discover that he loves similes:

*“like silent sentinels all around the town”
“like leaves upon this shepherd’s head”
“like a grasshopper in a May meadow”
“like a snow hill in the air”
“like a candle moving about in a tomb”
“like a Czar in an ice palace made of frozen sighs”*
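One rough way to pull out spans like these is `StringCases` with a `Shortest` pattern, so each match stops at the first comma rather than running on; `text` here is the full novel via `ExampleData`:

```mathematica
text = ExampleData[{"Text", "MobyDick"}];
(* take the first few "like ...," spans *)
StringCases[text, Shortest["like " ~~ __ ~~ ","]][[;; 5]]
```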

The similes help bring cosmic grandeur to his epic about a whaling expedition, but they also show that even the greats aren’t immune to over-reliance on a literary device. As the classic book on writing, *The Elements of Style* by Strunk and White, explains, similes are handy but should be used in moderation: “Readers need time to catch their breath; they can’t be expected to compare everything with something else, and no relief in sight.”

Our coworker Kathryn Cramer, a science fiction editor and author with some serious chops, often uses word clouds as an editing tool. She looks at her most frequently used words and asks whether there are any double meanings (what she calls “sinister puns”) that she can develop. She notes that you can also use them to clean up sloppy writing; if she sees too many instances of “then,” she knows that there are too many sentences that use “and then.”

An easy way to find out whether a word has double meanings, synonyms, antonyms or other interesting properties is to use the `WordData` function and see how many different ways you can play with a word like “hand,” for instance.
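Each of these is a single `WordData` call:

```mathematica
WordData["hand", "Definitions"]  (* meanings, grouped by part of speech *)
WordData["hand", "Synonyms"]     (* near-equivalents for each sense *)
WordData["hand", "Antonyms"]
```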

You can try these techniques out on your own writing using `Import`. Maybe you could even identify some more writing tics in some of the famous authors whose works are built into the Wolfram Language!
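A sketch of what that might look like, where `"manuscript.txt"` is a placeholder for your own file:

```mathematica
myText = Import["manuscript.txt"];  (* plain text, Word documents, etc. *)
WordCloud[DeleteStopwords[myText]]
```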

When polishing our prose, many of us often think, “I wish I could write like [insert famous author here].”

For some added fun, in just a few lines of code, we can take a selection of authors—Virginia Woolf, Herman Melville, Frederick Douglass, etc.—and build a `ClassifierFunction` from their notable books.
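As a rough sketch of the idea, here is a two-author version built entirely from texts that ship with `ExampleData` (the real classifier drew on many more authors and books from the Knowledgebase):

```mathematica
texts = <|
   "Melville" -> ExampleData[{"Text", "MobyDick"}],
   "Carroll" -> ExampleData[{"Text", "AliceInWonderland"}]|>;

(* chop each book into 2000-character samples labeled with its author *)
samples = Flatten[Thread[StringPartition[#2, 2000] -> #1] & @@@ Normal[texts]];

authorClassify = Classify[samples]
```

Applying `authorClassify` to a new string then returns its best guess at the author.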

Then, with a simple `FormPage`, we built a fun toy web app in about half an hour: AuthorIdentify, which tries to figure out which classic author would most likely have written your text sample.
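A sketch of the deployment, assuming `authorClassify` is a placeholder name for a `ClassifierFunction` trained on author-labeled text samples:

```mathematica
page = FormPage[{"sample" -> "String"}, authorClassify[#sample] &];
CloudDeploy[page, "AuthorIdentify"]
```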

To test it out, we gave AuthorIdentify the first paragraph of James Joyce’s *Finnegans Wake*, which was not already in the system. To our delight, it correctly identified the author as Joyce.

But it’s more fun to let it take a stab at your own work. Our coworker Jeremy Sykes, a publishing assistant here at Wolfram, shared with us a paragraph of his novel, an intergalactic space thriller that combines sci-fi, economics and comedy: *Norman Aidlebee, Galactic CPA*.

It’s fun trying different samples and playing with them to see if you can get a different author. While far from perfect, our AuthorIdentify is still amusing, even when it’s incredibly wrong.

Feel free to try it out. With some more work—more text samples, a wider range of authors and some experimentation with the options in `Classify`—you could build a much more robust author identification app with the Wolfram Language.

We hope some of these tips and tools help you aspiring writers out there as you sit down to edit your manuscript. We’ve found the Wolfram Language to be an excellent revision buddy, and we’re constantly on the lookout for new ways to enhance the editing process. So if you have any ideas of your own on how to use code on your novel, please feel free to share them in the comments!

Computational thinking can be integrated across the curriculum. It is not just the purview of the math teacher or the computer club, but a key instructional tool for educators from all disciplines. For example, using the Wolfram Language to teach computational thinking, English teachers can explore palindromes, history students can explore main concepts from the Gettysburg Address and science teachers can examine dinosaurs’ weights.

How does a busy teacher apply computational thinking in the classroom? Easily: computational thinking provides a framework for learning, which makes concepts easier for students to understand. It incorporates real-world math into students’ everyday lives.

For instance, using the Wolfram Programming Lab during Computer Science Education Week, you can teach students to think computationally about geography. The “Miles around You” starter Exploration will allow your students to see what exists in their vicinity. Students can make a map of the location, then draw a disk of any size around it and zoom in and out to gain perspective. How many sites show on the map at a radius of 100 miles? What about 150 miles?
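In the Wolfram Language itself, the core of that Exploration is little more than a pair of functions; `Here` can be replaced by any location:

```mathematica
(* a map with a 100-mile disk around your current location *)
GeoGraphics[GeoDisk[Here, Quantity[100, "Miles"]]]
```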

This exercise requires no knowledge of the Wolfram Language. The activity can last as long or as short as the students and teacher desire. Yet it introduces in a relevant way how computational thinking can answer questions. As students advance with their Wolfram Language engagement, they can complete Wolfram challenges on a variety of subjects, from basketball scores to Pig Latin.

Students are often bored with math in school because they do not see the real-world applications of their lessons. Computer-Based Math education lets students use computers at school the same way they would in their everyday lives: with the computers, not the humans, performing rote calculations. Computational thinking helps students discern which calculations the computer needs to solve so they can explore higher concepts. For instance, if your students are basketball fans, they can take a Wolfram challenge to discover how a basketball team can reach a certain score. The many applications of computational thinking make it easy to incorporate math throughout the curriculum.

Computational thinking in the classroom encourages student engagement when students see the results of their efforts. Maybe your students are excited about the upcoming holidays. Why not let them create a unique decoration in the Wolfram Demonstrations Project?

This example and other Wolfram Demonstrations are accessible ways to explore computational thinking in classrooms at any level. Once you’ve played with a few interactive examples, contributing your own Demonstration might be a fun and informative way for you and your students to spend an Hour of Code.

The Wolfram Summer Programs are one example of a place where students learn computational thinking through achieving their personal goals. This year at the Wolfram Summer School in Armenia, students developed their original ideas into working prototypes. Prior to the Armenian and other Wolfram camps, students prepare by completing homework assignments. Students can do the same in a flipped classroom, where they experience material before coming to class and arrive at the in-person lesson ready to engage with an activity.

If you’ve flipped your classroom, then computational thinking can be easily integrated into this environment by introducing your students to the Wolfram Language and using it to work with real-world data. An Elementary Introduction to the Wolfram Language Training Series will provide the pre-class materials for your students. Using this video series, you and your students can learn the basics of the Wolfram Language. Maybe you’re teaching an astronomy lesson this week. The Real-World Data video can introduce your students to computational thinking by using the Wolfram Language to explore planets, stars, galaxies and more—perhaps during the Hour of Code.
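A first astronomy exercise might be nothing more than a few real-world-data lookups like the following (the exact property names are worth checking against the documentation):

```mathematica
PlanetData["Mars", "Radius"]
StarData["Sirius", "DistanceFromEarth"]
GalaxyData["AndromedaGalaxy", "Diameter"]
```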

With computational thinking, students will learn by doing. Allow students to follow their own interests. Let them choose projects that intrigue them or relate to something they are already undertaking in class. Work computational thinking into the syllabus. Computational thinking is part of the learning process, not a single lesson.

Computational thinking can lead students to answer big questions. Are your students interested in public health? Teaming up with each other—and perhaps members of Wolfram Community—they can use the Wolfram Language to model the spread of a global disease outbreak.

Professors can teach computational thinking too. Perhaps you’re a humanities faculty member. Why not flex your own computational thinking by learning to analyze your data with the Wolfram Language? You and your students may be surprised by what you discover.

Here at Wolfram, there are more plans to help educators teach a generation of students computational thinking. For Computer Science Education Week, we will be hosting another Hour of Code event: middle- and high-school students will go on a computation adventure. If you’re unable to join us in person, why not host your own event?

If you’ve decided to have an Hour of Code, perhaps spend your time having students create tweetable programs—code that fits into 140 characters. Or analyze sea level rise, like Anush Mehrabyan did during the 2015 Wolfram High School Summer Camp. Or create a camera-controlled musical instrument. The examples are inspiring; the possibilities are exciting.

Whatever you decide to do with your students, don’t confine computational thinking and Computer-Based Math to Computer Science Education Week or an Hour of Code. Have fun exploring—and please let us know what you and your students create and learn.


This operation returns a `FeatureExtractorFunction`, which can be applied to the original data:

As you can see, the examples are transformed into vectors of numeric values. This operation can also be done in one step using `FeatureExtraction`’s sister function `FeatureExtract`:

But a `FeatureExtractorFunction` allows you to process new examples as well:
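On toy data, the three operations just described look like this:

```mathematica
fe = FeatureExtraction[{{1.2, "A"}, {3.4, "B"}, {5.6, "A"}, {7.1, "B"}}]

fe[{{1.2, "A"}, {3.4, "B"}}]  (* numeric vectors for the original examples *)

FeatureExtract[{{1.2, "A"}, {3.4, "B"}, {5.6, "A"}}]  (* extraction in one step *)

fe[{2.0, "B"}]  (* ...and for a brand-new example *)
```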

In the example above, the transformation is very simple: the nominal values are converted using a “one-hot” encoding, but sometimes the transformation can be more complex:

In that case, a vector based on word counts is extracted for the text, another vector is extracted from the color using its RGB values and another vector is constructed using features contained in the `DateObject` (such as the absolute time, the year, the month, etc.). Finally, these vectors are joined and a dimensionality reduction step is performed (see `DimensionReduction`).
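The dimensionality reduction step can also be used on its own through `DimensionReduction`; for example, on random 10-dimensional vectors:

```mathematica
dr = DimensionReduction[RandomReal[1, {200, 10}], 3];
dr[RandomReal[1, 10]]  (* a reduced vector of length 3 *)
```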

OK, so what is the purpose of all this? Part of the answer is that numerical spaces are very handy to deal with: one can easily define a distance (e.g. `EuclideanDistance`) or use classic transformations (`Standardize`, `AffineTransform`, etc.), and many machine learning algorithms (such as linear regression or *k*-means clustering) require numerical vectors as input. In this respect, feature extraction is often a necessary preprocess for classification, clustering, etc. But as you can guess from the example above, `FeatureExtraction` is more than a mere data format converter: its real goal is to find a meaningful and useful representation of the data, a representation that will be helpful for downstream tasks. This is quite clear when dealing with images; for example, let’s use `FeatureExtraction` on the following set:

We can then extract features from the first image:

In this case, a vector of length 31 is extracted (a huge data reduction from the 255,600 pixel values in the original image). This extraction is again done in two steps: first, a built-in extractor, specializing in images, is used to extract about 1,000 features from each image. Then a dimensionality reducer is trained from the resulting data to reduce the number of features down to 31. The resulting vectors are much more useful than raw pixel values. For example, let’s say that one wants to find images in the original dataset that are similar to the following query images:

We can try to solve this task using `Nearest` directly on images:

Some search results make sense, but many seem odd. This is because, by default, `Nearest` uses a simple distance function based on pixel values, and this is probably why the white unicorn is matched with a white dragon. Now let’s use `Nearest` again, but in the space of features defined by the extractor function:
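In code, the difference between the two searches is just where `Nearest` operates; here `imgs` and `query` are placeholders for your own image list and query image:

```mathematica
fe = FeatureExtraction[imgs];

(* index the images by their feature vectors, then query in feature space *)
nf = Nearest[fe /@ imgs -> imgs];
nf[fe[query], 4]  (* the 4 semantically closest images *)
```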

This time, the retrieved images seem semantically close to the queries, while their colors can differ a lot. This is a sign that the extractor captures semantic features, and an example of how we can use `FeatureExtraction` to create useful distances. Another experiment we can do is to further reduce the dimension of the vectors in order to visualize the dataset on a plot:

As you can see, the examples are somewhat semantically grouped (most dragons in the lower right corner, most griffins in the upper right, etc.), which is another sign that semantic features are encoded in these vectors. In a sense, the extractor “understands” the data, and in a sense this is what `FeatureExtraction` is trying to do.

In the preceding, the “understanding” is mostly due to the first step of the feature extraction process—that is, the use of a built-in feature extractor. This extractor is a byproduct of our effort to develop `ImageIdentify`. In a nutshell, we took the network trained for `ImageIdentify` and removed its last layers. The resulting network transforms images into feature vectors encoding high-level concepts. Thanks to the large and diverse dataset (about 10 million images and 10,000 classes) used to train the network, this simple strategy gives a pretty good extractor even for objects that were not in the dataset (such as griffins, centaurs and unicorns).

Having such a feature extractor for images is a game-changer in computer vision. For example, if one were to label the above dataset with the classes “unicorn,” “griffin,” etc. and use `Classify` on the resulting data (as shown here), one would obtain a classifier that correctly classifies about 90% of new images! This is pretty high considering that only eight images per class were seen during training. This is not yet the “one-shot learning” that humans can perform on such tasks, but we are getting there… This result would have been unthinkable in the first versions of `Classify`, which did not use such an extractor. In a way, this extractor is the visual system of the Wolfram Language.

There is still progress to be made, though; this extractor can be greatly enhanced, for example. One of our jobs now is to train other feature extractors in order to boost machine learning performance for all classic data types, such as image, text and sound. I often think of these extractors, and trained models in general, as a new form of built-in knowledge added to the Wolfram Language (along with algorithms and data).

The second step of the reduction, called dimensionality reduction (also sometimes “embedding learning” or “manifold learning”), is the “learned” part of the feature extraction. In the example above, it is probably not the most important step in obtaining a useful representation, but it can play a key role for other data types, or when the number of examples is higher (since there is more to learn from). Dimensionality reduction stems from the fact that, in a typical dataset, examples are not uniformly distributed in their original space. Instead, most examples lie near a lower-dimensional structure (think of it as a manifold). The data examples can in principle be projected on this structure and thus represented with fewer variables than in their original space. Here is an illustration of a two-dimensional dataset reduced to a one-dimensional dataset:

The original data (blue points) is projected onto a uni-dimensional manifold (multi-color curve) that is learned using an autoencoder (see here for more details). The colors indicate the value of the (unique) variable in the reduced space. This procedure can also be applied to more complex datasets, and given enough data and a powerful-enough model, much of the structure of the data can be learned. The representation obtained can then be very useful for downstream tasks, because the data has been “disentangled” (or more loosely again, “understood”). For example, you could train a feature extractor for images that is just as good as our built-in extractor using only dimensionality reduction (this would require a lot of data and computational power, though). Also, reducing the dimension in such a way has other advantages: the resulting dataset is smaller in memory, and the computation time needed to run a downstream application is reduced. This is why we apply this procedure to extract features even in the image case.

We talked about extracting numeric vectors from data in an automatic way, which is the main application of `FeatureExtraction`, but there is another application: the possibility of creating customized data processing pipelines. Indeed, the second argument can be used to specify named extraction methods, and more generally, named processing methods. For example, let’s train a simple pipeline that imputes missing data and then standardizes it:

We can now use it on new data:

Another classic pipeline, often used in text search systems, consists of segmenting text documents into their words, constructing tf–idf (term frequency–inverse document frequency) vectors and then reducing the dimension of the vectors. Let’s train this pipeline using the sentences of *Alice in Wonderland* as documents:
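Sketched in code, with the method names as documented for `FeatureExtraction` (worth double-checking against the reference pages):

```mathematica
sentences = TextSentences[ExampleData[{"Text", "AliceInWonderland"}]];
fe = FeatureExtraction[sentences,
   {"SegmentedWords", "TFIDF", "DimensionReducedVector"}];
fe[First[sentences]] // Length  (* the size of the reduced vector *)
```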

The resulting extractor converts each sentence into a numerical vector of size 144 (and a simple distance function in that space could be used to create a search system).

One important thing to mention is that this pipeline creation functionality is not as low-level as one might think; it is partly automated. For example, methods such as tf–idf can be applied to more than one data type (in this case, it will work on nominal sequences, but also directly on text). More importantly, methods are only applied to data types they can deal with. For example, in this case the standardization is only performed on the numerical variable (and not on the nominal one):
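For instance, with a mixed numeric/nominal dataset (assuming the documented "StandardizedVector" method name):

```mathematica
fe = FeatureExtraction[{{1.2, "A"}, {3.4, "B"}, {5.6, "A"}}, "StandardizedVector"];
fe[{3.4, "B"}]  (* the numeric column is standardized; the nominal one is only encoded *)
```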

These properties make it quite handy to define processing pipelines when many data types are present (which is why `Classify`, `Predict`, etc. use a similar functionality to perform their automatic processing), and we hope that this will allow users to create custom pipelines in a simple and natural way.

`FeatureExtraction` is a versatile function. It offers control over processing pipelines for various machine learning tasks, and it unlocks two new applications: dataset visualization and metric learning for search systems. `FeatureExtraction` will certainly become a central function in our machine learning ecosystem, but there is still much to do. For example, we are now thinking of generalizing its concept to supervised settings, and there are many interesting cases here: data examples could be labeled by classes or numeric values, or maybe ranking relations between examples (such as “A is closer to B than C”) could be provided instead. Also, `FeatureExtraction` is another important step in the domain of unsupervised learning, and like `ClusterClassify`, it enables us to learn something useful about the data—but sometimes we need more than just clusters or an embedding. For example, in order to randomly generate new data examples, predict any variable in the dataset or detect outliers, we need something better: the full probability distribution of the data. It is still a long shot, but we are working to achieve this milestone, probably through a function called `LearnDistribution`.


Over the past few months, Wolfram Community members have been exploring ways of visualizing the known universe of Wikipedia knowledge. From Bob Dylan’s networks to the persistence of “philosophy” as a category, Wolfram Community has been asking: “What does knowledge actually *look like* in the digital age?”

Mathematician Marco Thiel explored this question by modeling the “Getting to Philosophy” phenomenon on Wikipedia. “If you start at a random Wikipedia page, click on the first link in the main body of the article and then iterate, you will (with a probability of over 95%) end up at the Wikipedia article on philosophy,” Thiel explains. Using `WikipediaData`, he demonstrates how you can generate networks that describe this phenomenon.

He documents that about 94% of all Wikipedia articles lead to the “Philosophy” page if one follows the links as instructed, generating in the process some mesmerizing and elegant visualizations of the way that we categorize information.

University student Andres Aramburo also touched on the theme of Wikipedia categories by developing a method for clustering Wikipedia articles by topic. He began by taking a random sample of Wikipedia articles using a Wolfram Language function that he created for this specific task. He then used the links in and out of these articles to generate a graph of the relationships between them. “It’s not a trivial task” to determine if two articles are related to one another, he notes, since “there are several things that can affect the meaning of a sentence, semantics, synonyms, etc.” His visualizations include radial plots of the relationships between articles and word clouds listing shared words for related articles.

One final thread worth highlighting is Community’s celebration of the decision to award Bob Dylan the Nobel Prize in Literature. Wolfram’s own Vitaliy Kaurov created the visualization of the “Universe of Bob Dylan” featured at the top of this post. Alan Joyce (Wolfram|Alpha) generated a graph that compares the lengths of Dylan’s songs (in seconds) to the years in which they were recorded.

And first-time Wolfram Community participant Amy Friedman uploaded her submission from the 2016 Wolfram One-Liner Competition, an amusing word cloud of the poet’s songs in the shape of a guitar.

What new ways of visualizing Wikipedia knowledge can you dream up? With built-in functions like `WikipediaData` and `WikipediaSearch`, the Wolfram Language is the perfect tool for exploring Wikipedia data. Show us what you can do with those functions and more on Wolfram Community. We can’t wait to see what you create!

To some degree, we’ve been working on a Wolfram notebook front end for iOS for about six years now. And in the process, we’ve learned a lot about notebook front ends, a thing we already knew a lot about. Let’s rewind the tape a bit and review.

In 1988, Theo Gray and Stephen Wolfram conceived the idea of the notebook front end for use with the Mathematica system. My first exposure to it was in 1989, when I first saw Mathematica running on a NeXT machine at university. That was 27 years ago. Little did I suspect that I would someday be spending 20 years of my life (and counting) working on it.

It’s interesting to see how relevant the basic concepts we started with remain today. We have a document we refer to as a notebook. The notebook is structured into cells. Cells might be designated for headings, narrative, code or results. Cells with code are considered input, which generate outputs inline in the document. While the word “notebook” evokes the idea of a laboratory notebook, it easily encompasses educational documents, literate programs, academic papers, generated reports and experimental scratch pads.

One might have thought that the web would make notebooks obsolete. HTML exposes many of the same concepts as notebooks, at a lower level. Editing environments such as the various Markdown editors or WordPress expose many of these concepts at a higher level. But those environments don’t accomplish inline computation, and the world is increasingly recognizing how much inline computation with immediate feedback really matters.

And even the notion of what “computation” is has evolved over time. It seems that in the 1990s and 2000s, we were in a cycle where many in the software field thought that inline computation was merely for a few math tricks of the sort that could be done by Mathematica or Excel, while hardcore computation required some sort of compile or deployment step. I remember the immediate feedback of line-by-line programming from my youth in the 1980s, although it had actually begun much earlier. But by the time I graduated university, this wasn’t considered “serious programming” anymore. In computer science, there’s a fancy name for this: a read–evaluate–print loop, or REPL. And while the REPL fell out of fashion, the humble notebook continued to present its REPL-plus-narrative structured content.

In 2010, as iPhones and iPads evolved into general computing platforms, they became an obvious platform for notebooks. But iOS came with some very different constraints from the desktop system—so much so that it seemed an impossible task to try to adapt our existing notebook technology to the platform. So, we decided to try to recreate the notebook experience from scratch in a way that both fit within the constraints of the platform and played to its strengths. Seemed straightforward. Cells. Evaluation. Maybe some basic `Manipulate` support. Surely it wouldn’t take long to get that up and running, we thought.

That was the second notebook front end. Since then, there have been others. A short while later, another Wolfram development team started contacting me, asking about notebook front ends. It turned out they were working on this web thing and wanted a bit of advice. Surely it couldn’t be that hard, they figured. A small skunk works project.

Even outside Wolfram’s doors, people were adapting to notebook-oriented computing. REPLs started becoming fashionable in software development circles again. Variants of Markdown started to become the language of document creation on the web, and many of those documents looked a lot like notebooks. There was even a significant open-source project that recreated some of our major concepts, down to the use of cells and the Shift + Enter evaluations.

All of these projects ran into some trouble, though. It turns out that the “cells and notebooks” concept wasn’t as easy to recreate as we all thought it would be. Above and beyond the basic technology hurdles, it turns out that we had evolved notebooks to do things we’re no longer willing to sacrifice.

Notebooks today support typesetting. Mathematical typesetting. Typesetting of code. Typesetting that you can properly interact with. Some of this is about the math. Typeset math has had broad appeal to our users, well beyond a core math education market. But making the math and the code coexist while remaining fully interactive and easy to read is a challenge requiring extreme attention to a large number of details.

Notebooks today are dynamic documents that can be generated, transformed and manipulated at the language level. Many core features rely on being able to read and write raw notebook content from the language, and we quickly discovered just how many notebooks in the wild would stop working without this functionality. We use this functionality prolifically to enable our user interfaces. In retrospect, this shouldn’t be too surprising, since we’ve been actively exploiting this capability of our product since 1996.

Notebooks today offer a truly interactive experience with computation. We’ve been doing this since 2007 with `Manipulate` and `Dynamic` functionality, but it’s easy to undersell the achievement. When I slide a slider, I learn so much more about what I’m doing from instant feedback than I do from waiting for a web server to respond. Our devices are so powerful today that there is simply no reason for me to wait for computation when it can and often should be done on my local device. And we can create incredibly sophisticated and general interactive interfaces with just a single line of code. The ability to do this allows for novel applications. For example, I find myself using `Manipulate` to understand bugs and test features while doing notebook front end development.
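That single line of code can be as simple as:

```mathematica
(* instant feedback: drag the slider and the plot updates immediately *)
Manipulate[Plot[Sin[a x], {x, 0, 2 Pi}], {a, 1, 5}]
```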

And that brings us to today. iOS has been a challenging platform to bring a proper notebook experience to, and in order to do so, we’ve been creating a brand-new front end from scratch. In terms of CPU power and memory, this new front end is running on the most diminished platform we support today, but we’ve worked hard not to sacrifice the notebook experience. More than any previous front end, this new notebook front end uses its environment incredibly efficiently. It uses more CPU cores, less energy and less memory to get its job done.

And so what we have today is a product that displays and plays notebook content that we’re extremely proud of. It’s just entered into beta for the iPad, and we’re hoping to have a version that works well on the iPhone coming out soon. It’s been a long time coming, but I think the technology we’ve developed is worth the wait.

You can sign up for the beta version at wolfram.com/iosbeta.


In his book *Idea Makers*, Stephen Wolfram devotes a chapter to Leibniz. Wolfram visited the Leibniz archive in Hanover and wrote about it:

Leafing through his yellowed (but still robust enough for me to touch) pages of notes, I felt a certain connection—as I tried to imagine what he was thinking when he wrote them, and tried to relate what I saw in them to what we now know after three more centuries…. [A]s I’ve learned more, and gotten a better feeling for Leibniz as a person, I’ve realized that underneath much of what he did was a core intellectual direction that is curiously close to the modern computational one that I, for example, have followed.

Leibniz was an early visionary of computing, and built his own calculator, which Wolfram photographed when he visited the archive.

In a recent talk about AI ethics, Wolfram talked more about how Leibniz’s visions of the future are embodied in current Wolfram technologies:

Leibniz—who died 300 years ago next month—was always talking about making a universal language to, as we would say now, express [mathematics] in a computable way. He was a few centuries too early, but I think now we’re finally in a position to do this…. With the Wolfram Language we’ve managed to express a lot of kinds of things in the world—like the ones people ask Siri about. And I think we’re now within sight of what Leibniz wanted: to have a general symbolic discourse language that represents everything involved in human affairs….

If we look back even to Leibniz’s time, we can see all sorts of modern concepts that hadn’t formed yet. And when we look inside a modern machine learning or theorem proving system, it’s humbling to see how many concepts it effectively forms—that we haven’t yet absorbed in our culture.

The Wolfram Language is a form of philosophical language, what Leibniz called a *lingua generalis*, a universal language to be used for calculation. What would Leibniz have made of the tools we have today? How will these tools transform our world? In his essay on Leibniz, Wolfram mulls this over:

In Leibniz’s whole life, he basically saw less than a handful of computers, and all they did was basic arithmetic. Today there are billions of computers in the world, and they do all sorts of things. But in the future there will surely be far far more computers (made easier to create by the Principle of Computational Equivalence). And no doubt we’ll get to the point where basically everything we make will explicitly be made of computers at every level. And the result is that absolutely everything will be programmable, down to atoms. Of course, biology has in a sense already achieved a restricted version of this. But we will be able to do it completely and everywhere.

Leibniz was also a major figure in philosophy, best known for his contention that we live in the “best of all possible worlds,” and his development in his book *Monadology* of the concept of the *monad*: an elementary particle of metaphysics that has properties resulting in what we observe in the physical world.

Wolfram speculates that the concept of the monad may have motivated Leibniz’s invention of binary:

With binary, Leibniz was in a sense seeking the simplest possible underlying structure. And no doubt he was doing something similar when he talked about what he called “monads”. I have to say that I’ve never really understood monads. And usually when I think I almost have, there’s some mention of souls that just throws me completely off.

Still, I’ve always found it tantalizing that Leibniz seemed to conclude that the “best of all possible worlds” is the one “having the greatest variety of phenomena from the smallest number of principles”. And indeed, in the prehistory of my work on *A New Kind of Science*, when I first started formulating and studying one-dimensional cellular automata in 1981, I considered naming them “polymones”—but at the last minute got cold feet when I got confused again about monads.
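The one-dimensional cellular automata Wolfram mentions are easy to explore in the Wolfram Language itself. A minimal sketch, using rule 30 (a favorite example from *A New Kind of Science*):

```wolfram
(* evolve elementary cellular automaton rule 30 from a single black cell for 50 steps *)
ArrayPlot[CellularAutomaton[30, {{1}, 0}, 50]]
```

Despite the simplicity of the rule, the resulting pattern is famously complex, which is exactly the “greatest variety of phenomena from the smallest number of principles” that Leibniz described.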

Despite being the daughter of a physicist and having heard about elementary particles since infancy, I am a bit boggled by the concept of the monad. As I contemplate Leibniz’s strange bridge between metaphysics and such things as electrons or the mathematical definition of a point, I am reminded of lines from *Candide*, a book Voltaire wrote satirizing the notion that we live in the best of all possible worlds:

“But for what purpose was the earth formed?” asked Candide.

“To drive us mad,” replied Martin.

Yet knowledge is increasingly digitized in the twenty-first century, a process that relies on the binary language Leibniz invented. Perhaps, had monads as such not existed in Leibniz’s time, it would have become necessary to invent them.

Participants in the competition submit 128 or fewer tweetable characters of Wolfram Language code to perform the most impressive computation they can dream up. We had a bumper crop of entries this year that showed the surprising power of the Wolfram Language. You might think that after decades of experience creating and developing with the Wolfram Language, we at Wolfram Research would have seen and thought of it all. But every year our conference attendees surprise us. Read on to see the amazing effects you can achieve with a tweet of Wolfram Language code.

Amy Friedman: “The Song Titles” (110 characters)

Amy calls this homage to the 2016 Nobel Laureate in Literature her contribution to “the nascent field of Bob Dylan analytics.” She writes further, “I started teaching myself how to code in the Wolfram Language yesterday after breakfast, with the full encouragement of my son and aided solely by Stephen Wolfram’s *Elementary Introduction to the Wolfram Language*.”

Amy’s helpful son, Jesse, is the youngest-ever prize winner in our One-Liner Competition. In 2014, at the age of 13, he took second place.

George Varnavides: “An SE Legacy Re-imagined as a Self-Triggering Dynamic” (128 characters)

*(faster than actual speed)*

Order proceeds from chaos in this hypnotic simulation that appealed to the judges’ inner physicists. Points evenly distributed in a spherical volume slowly evolve thread-like structures as they migrate toward target points.

Stephan Leibbrandt: “Projections” (128 characters)

This impressively compact implementation of a smooth transition between map projections gave the judges an “Aha!” moment as they perceived the relationship between orthographic and Mercator projections. Stephan’s key insight in producing a submission that is graphically engaging as well as instructive is that the structure of a map’s geometric data is the same, regardless of the projection.

Manuel Odendahl: “Quilt Pattern” (128 characters)

Manuel writes that he generated this graphic as a quilt pattern for his girlfriend. The judges were impressed by its combination of repetition and variety. No word yet on whether Manuel’s girlfriend has succeeded in assembling the 6,000 quilt squares cut from 645 different colors of fabric.

George Varnavides: “Symmetry in Chaos” (128 characters)

Achieving this graphically appealing image required some clever coding tricks from George, including factoring out the function slot `c` and naming the range `a` so that it could be compactly reused. Binning the points generated by an iterated function and plotting the log of the bin counts yields the refined graphical treatment in the result.
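The general technique George used (iterating a chaotic map, binning the visited points and plotting the log of the bin counts) can be sketched in a few lines. This is not George’s 128-character entry, just an illustration of the binning idea using the Hénon map:

```wolfram
(* iterate the Hénon map, bin the orbit in 2D, then plot the log of the bin counts *)
pts = NestList[{1 - 1.4 #[[1]]^2 + #[[2]], 0.3 #[[1]]} &, {0., 0.}, 200000];
counts = BinCounts[pts, {-1.5, 1.5, 0.005}, {-0.45, 0.45, 0.002}];
ArrayPlot[Log[counts + 1], ColorFunction -> "SunsetColors"]
```

Taking the log of the counts is what produces the refined, smoothly shaded look: without it, a few heavily visited bins would dominate the image.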

David Debrota: “Transcendental Pattern” (123 characters)

Starting with three million digits of the transcendental number `E`, David’s deft application of a series of image processing functions yields this visual representation of the randomness of the digits.
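While David’s 123 characters remain his own, the basic idea of rendering the digits of E as an image can be sketched like this; the digit count is reduced here for speed, and this is an illustration rather than David’s actual code:

```wolfram
(* lay out the first 250,000 decimal digits of E as a 500x500 grayscale image *)
digits = First[RealDigits[N[E, 250000]]];
Image[Partition[digits/9., 500]]
```

Because the digits behave statistically like random numbers, the resulting image resembles uniform noise.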

Abby Brown: “Happy Halloween!” (127 characters)

A timely entry, given that Halloween—celebrated in the United States with pumpkins—was little more than a week after the conference. Abby takes skillful advantage of the Wolfram Language’s default plotting colors. In a plot of multiple functions, pumpkin orange is the first default color. The second is blue, which isn’t appropriate for a pumpkin’s stem. But by bumping the stem function to third place with `Nothing`, Abby achieved a green stem and squeaked in just under the 128-character limit.

Philip Maymin: “Mickey Mousical” (125 characters)

Thanks to Philip, you no longer have to travel to Disneyland to get your mouse ears. All you need is the Wolfram Language!

Richard Scott: “How to Count to One Million in Two Minutes in 100 Easy Steps” (128 characters)


It was slightly embarrassing to have to award a (Dis)Honorable Mention to one of our distinguished Innovation Award winners. But Richard’s helpful two-minute timer drove the judges nuts with its incessant counting and prompted them to warn each other not to evaluate that one.

I must point out its ingenious construction, though, which Richard helpfully illustrated with this image:

Shishir Reddy, Alex Krotz: “Face Projection” (104 characters)

The third-place prize went to two of Abby Brown’s high-school students, whom she brought to the conference to present work they had done in her Advanced Topics in Mathematics class (taught with the Wolfram Language). Shishir and Alex made an amusing video transformation that, in real time, pastes the face of the person on the left onto the person on the right, making them virtual twins.

Snapchat, watch out. Here comes Mathematica!

Michael Sollami: “Neural Hypnosis” (128 characters)

Michael Sollami took second place with this unusual and visually stunning application of the neural net functionality that debuted in Version 11.

After viewing the animation for a short time, the judges were glassy-eyed and chanting in unison, “Second place! Second place! …”. Dunno, Michael. Bug in your code somewhere?

Philip Maymin: “Circle Pong” (128 characters)

*(5x actual speed)*

Philip Maymin’s winning entry packs an impressive load of functionality into 128 characters of code. Not only does it implement a complete and thoroughly playable game of solitaire Pong (“Shorter, rounder and more fun than the original.”), it encourages you to play dangerously by rewarding you with bonus points if you almost let the “ball” escape before swooping in to deflect it.

A brilliant and creative combination of features implemented concisely with complex arithmetic, the game nearly derailed the One-Liner judges, who had to be reminded to stop playing Pong and get back to work.

There were many more impressive contributions than we had time to recognize in the awards ceremony. You can see all of the submissions in this signed CDF. (New to CDF? Get your copy for free with this one-time download.) There’s a wealth of good ideas to take away for anyone willing to invest a little time understanding the code.

Thanks to all who participated and impressed us with their coding chops and creativity. Come again next year!

- “We had a nearly 4-billion-time speedup on this code example.”
- “We’ve worked together for over 9 years, and now we’re finally meeting!”
- “Coding in the Wolfram Language is like collaborating with 200 or 300 experts.”
- “You can turn financial data into rap music. Instead, how about we turn rap music into financial data?”

As a first-timer from the Wolfram Blog Team attending the Technology Conference, I wanted to share with you some of the highlights for me—making new friends, watching Wolfram Language experts code and seeing what the Wolfram family has been up to around the world this past year.

I was only able to attend one talk at a time, and with more than a hundred talks spread across three days, there was no way I could see everything—but what I saw, I loved. Tuesday evening, Stephen Wolfram kicked off the event with his fantastic keynote presentation, giving an overview of the present and future of Wolfram Research, demoing new features of the Wolfram Language live and setting the stage for the rest of the conference.

The nice thing about the Technology Conference is that if you’ve had a burning question about how something in the Wolfram Language works, you won’t get a better opportunity to ask the developers face to face. When someone in the audience asked about storing chemical data, the panel asked, “Is Michael Trott in the room?” And sure enough, Michael Trott was sitting a few seats down from me, and he stood up and addressed the question. Now that’s convenient.

Probably my favorite speaker was Igor Bakshee, a senior research associate here at Wolfram. He described our new publish-subscribe service, the Channel Framework, which allows asynchronous communication between Wolfram systems without dealing with the details of specific senders and receivers. I especially appreciated Igor’s humor and patience as messages came in from someone in the audience: he raised his hands and insisted it was indeed someone else sending them.
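The publish-subscribe model Igor described is exposed in the Wolfram Language through functions such as `ChannelObject`, `ChannelListen` and `ChannelSend`; a minimal sketch (the channel name here is hypothetical):

```wolfram
(* create a channel, start listening on it, then publish a message to all listeners *)
channel = ChannelObject["myuser:demo"];
listener = ChannelListen[channel];
ChannelSend[channel, <|"greeting" -> "Hello from the conference!"|>]
```

The sender needs no knowledge of who is listening, which is precisely the decoupling the Channel Framework provides.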

This talk was the one I was most looking forward to, and it was exactly what I wanted. Jakub Kabala talked about how he used Mathematica to compare 12th-century Latin texts in his search to determine if the monk of Lido and Gallus Anonymus were actually the same author. Jakub’s talk will also be in our upcoming virtual conference, so be sure to check that out!

It would be downright silly of me not to mention the extremely memorable duo Thomas Carpenter and Daniel “Scantron” Reynolds. The team used Wolfram Language code and J/Link to infuse traditional disc jockey and video jockey art with abstract mathematics and visualizations. The experience was made complete when Daniel passed special glasses throughout the audience.

We had the best Wolfram Language programmers all in one place, so of course there had to be competitions! This included both our annual One-Liner Competition and our first after-hours live coding competition on Wednesday night. Phil Maymin won both competitions. Incidentally, in between winning competitions, Phil also gave an energetic presentation, “Sports and eSports Analytics with the Wolfram Language.” Thanks to everyone who participated. Be sure to check out our upcoming blog post on the One-Liner Competition.

Thursday night at Stephen’s Keynote Dinner, six Wolfram Innovator Awards were given out. The Wolfram Innovator Award is our opportunity to recognize people and organizations that have helped bring Wolfram technologies into use around the world. Congratulations again to this year’s recipients, Bryan Minor, Richard Scott, Brian Kanze, Samer Adeeb, Maik Meusel and Ruth Dover!

Like many Wolfram employees around the world, I usually work remotely, so a big reason I was eager to go to the Wolfram Technology Conference was to meet people! I got to meet coworkers I normally only email or talk with on the phone, and I got to speak with people who actually use our technologies and hear what they’ve been up to. After almost every talk, I’d see people shaking hands, trading business cards and exchanging ideas. It was easy to be social at the Technology Conference—everyone there shared an interest in and passion for Wolfram technologies, and the fun was figuring out what each person’s passion was. And Wolfram gave everyone plenty of opportunities for networking and socializing, with lunches, dinners and meet-ups throughout the conference.

Attending the Wolfram Technology Conference has been the highlight of my year. The speakers were great across the board, and a special thanks goes to the technical support team that dealt with network and display issues in stride. I strongly encourage everyone interested in Wolfram technologies to register for next year’s conference, and if you bump into me, please feel free to say hi!


When he was working at Enova, Slaughter used the Wolfram Language to build Colossus, an analytics engine that provides Enova’s clients in the financial services industry with instantaneous risk and credit analysis. Slaughter’s team was looking for a programming language that would let them deploy software changes without involving the entire engineering team in each change. The Wolfram Language streamlines the process and saves countless hours of development work: teams communicate more effectively across the development process, ideas are prototyped and deployed quickly, and there is no need for multiple systems to process internal and external data.

In a talk at the 2015 Wolfram Technology Conference, Slaughter’s colleague Vinod Cheriyan explained that streamlining the production process enables Colossus to significantly outperform its predecessor. Colossus can deploy a model to production in just one and a half to two weeks, whereas its predecessor typically took one to one and a half months.

Slaughter’s team also used Mathematica to efficiently manage Enova’s large database of XML credit reports. Credit agencies deliver reports to Enova as XML, with metadata packaged as a PDF or Word document. Slaughter’s team replaced a slower procedural approach for merging data with the Wolfram Language’s functional approach, where pattern matching and replacement rules allowed them to achieve the same result two orders of magnitude faster.
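The functional style of mining XML reports looks roughly like this; the file name and element names below are hypothetical, not Enova’s actual schema:

```wolfram
(* import an XML credit report and extract every <score> value by pattern matching *)
xml = Import["report.xml", "XML"];
scores = Cases[xml, XMLElement["score", _, {value_String}] :> value, Infinity]
```

Because `Import` returns the document as symbolic `XMLElement` expressions, a single `Cases` pattern can sweep the entire tree, with no explicit traversal loop required.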

When we talked with Slaughter about why he prefers the Wolfram Language, he mentioned its power both as a programming language and as a computation engine. By using the Wolfram Language, he is able to dramatically streamline his team’s workflow, bringing testing and production into one efficient system.

Be sure to check out other Wolfram Language stories like Chad Slaughter’s on our Customer Stories pages.

I’ll start by talking about our improvements in collaboration. I develop lots of models in SystemModeler, and when I do, I seldom develop them in a vacuum. Either I send a model to my colleagues for them to use, I receive one from them or models get sent back and forth while we work on them together. This is, of course, also true for novice users. A great way to learn how to use SystemModeler—or any product, for that matter—is to look at things other people have done, whether it be a coworker or other users online, and build upon that.

Whether you send your models to other people, receive models or send models between your own platforms, we want to make sure that you have everything you need to start using the model, straight out of the box.

As an example, I have built a model of an inverted pendulum using the PlanarMechanics library. It has a linear-quadratic regulator built using the Modelica Standard Library, and it also includes components from the ModelPlug library that connect to real-life hardware, such as actuators and sensors on an Arduino board (or any other board following the Firmata protocol).

In the model, you can apply a force to different parts of the pendulum using input from an Arduino board. When simulated, the model produces an automatically generated animation.

As a developer of this model, I usually know of quite a few things that will be interesting to plot. In this particular case, for example, you can create interesting results by studying the different forces acting on the pendulum and the different states of the controller. In SystemModeler 4.3, you can predefine plots in a model: after choosing a set of variables to plot, simply right-click, select “Add Plot to Model” and give the plot a name, e.g. ControllerInputs.

Now the stored plot can easily be accessed each time the model is simulated.

Even if model parameters or the model structure are changed, the plots will remain and be available next time you need to use the model. Storing plots is not only a useful feature when you revisit models that you yourself have built, but it is also useful when you share or receive models from others.

Now, let me save this model and send it to a colleague. Previously I would have needed to make sure that they had all the resources to run the model, including all the libraries I have used. In SystemModeler 4.3, I can now easily save all this in one convenient file with the improved Save Total Model feature. Everything needed, including libraries, stored plots and animation objects, will be available for the person who receives the file.

So a coworker receives my model—how would he or she begin analyzing it? In SystemModeler 4.3, we have introduced new model analytics features that help answer that question. Starting out, we can get a quick look at the model using the new summary property for `WSMModelData`.

The pie chart shows what percentage of the components comes from each domain. The majority of the components come from the dark blue slice, the PlanarMechanics library. In Mathematica, you can mouse over the slices to see the domain names.

Another good place for my coworker to start would be to look at the plots I defined in the model before sending it. Support for stored plots has, of course, also been included in the Wolfram Language. If a plot has been chosen as the preferred plot, a very neat one-liner in the Wolfram Language makes it easy to start exploring the model.

In Simulation Center, you will find a list of all stored plots in the experiment browser. You can list all the available plots with the Wolfram Language via the `"PlotNames"` property.
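From the Wolfram Language side, this looks something like the following sketch; I’m assuming the inverted pendulum model from above has been saved under the hypothetical name "InvertedPendulum" and that Wolfram SystemModeler Link is loaded:

```wolfram
(* load SystemModeler Link, list the plots stored in the model, then simulate it *)
Needs["WSMLink`"]
WSMModelData["InvertedPendulum", "PlotNames"]
sim = WSMSimulate["InvertedPendulum", {0, 10}]
```

With the simulation result in hand, any of the stored plots can then be reproduced directly in a notebook.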

Parametric plots can be stored and plotted.

Use the stored plot functionality to easily measure the response to changes in parameters.

A stored plot can consist of multiple plots.

One area where we have made heavy use of this new functionality is with our SystemModeler examples. On our webpage, we have long provided a large selection of SystemModeler models collected from different industries and educational areas. Whether it be the internal states of a digital adder or the heat flows in a freezer, these examples usually contain a lot of different things to study. Using stored plots, we have now added the most important plots for analyzing and understanding each example model.

Furthermore, the models that we have created over the years have now also been included directly in the product. Whether you want to get started using SystemModeler using models from your domain or study new concepts, the new included curated models will be useful.

Now let’s return to the model my colleague just received. Suppose that he or she would like to perform some further analysis on it. A new set of templates has been included in order to facilitate this. The following command, for example, creates a template in Mathematica that allows you to change an input in real time and plot the response.

Just fill in the blanks, and the simulation models will come alive with real-time interaction in Mathematica.

Templates for many other tasks are available, such as FFT analyses, model calibration, parameter and initial value sweeps, and much more.

These are just some of the new, exciting model analytics and collaboration features in SystemModeler 4.3. For a more complete view, check out our What’s New page. If you try out the new SystemModeler, you will experience one of the things that I haven’t mentioned, namely that it is snappier and faster than before. Actually, performance has been improved across the board, including faster model compilation times and faster simulations from the Wolfram Language.
