I’ll start by talking about our improvements in collaboration. I develop lots of models in SystemModeler, and when I do, I seldom develop them in a vacuum. Either I send a model to my colleagues for them to use, I receive one from them or models get sent back and forth while we work on them together. This is, of course, also true for novice users. A great way to learn how to use SystemModeler—or any product, for that matter—is to look at things other people have done, whether it be a coworker or other users online, and build upon that.

Whether you send your models to other people, receive models or send models between your own platforms, we want to make sure that you have everything you need to start using the model, straight out of the box.

As an example, I have built a model of an inverted pendulum using the PlanarMechanics library. It has a linear-quadratic regulator built using the Modelica Standard Library, and it also includes components from the ModelPlug library that connect to real-life hardware, such as actuators and sensors on an Arduino board (or any other board following the Firmata protocol).

In the model, you can apply a force to different parts of the pendulum using input from an Arduino board. When simulated, the model produces an automatically generated animation.

As the developer of this model, I usually know of quite a few things that will be interesting to plot. In this particular case, for example, you can create interesting results by studying the different forces acting on the pendulum and the different states of the controller. In SystemModeler 4.3, you can predefine plots in a model. After choosing a set of variables to plot, simply right-click, choose “Add Plot to Model” and give the plot a name, e.g. ControllerInputs.

Now the stored plot can easily be accessed each time the model is simulated.

Even if model parameters or the model structure are changed, the plots will remain and be available next time you need to use the model. Storing plots is not only a useful feature when you revisit models that you yourself have built, but it is also useful when you share or receive models from others.

Now, let me save this model and send it to a colleague. Previously I would have needed to make sure that they had all the resources to run the model, including all the libraries I have used. In SystemModeler 4.3, I can now easily save all this in one convenient file with the improved Save Total Model feature. Everything needed, including libraries, stored plots and animation objects, will be available for the person who receives the file.

So a coworker receives my model—how would he or she begin analyzing it? In SystemModeler 4.3, we have introduced new model analytics features that help answer that question. Starting out, we can get a quick look at the model using the new summary property for `WSMModelData`.
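A first look might be a sketch like the following, after loading the SystemModeler link in Mathematica. The model name `"InvertedPendulum"` is illustrative, and the exact property string is an assumption based on the description above.

```wolfram
Needs["WSMLink`"]

(* "Summary" is the new summary property described above;
   the model name is illustrative *)
WSMModelData["InvertedPendulum", "Summary"]
```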

The pie chart shows what percentage of the components come from each domain. The majority of the components, the dark blue slice, come from the PlanarMechanics library. In Mathematica, you can mouse over the slices to see the domain names.

Another good place for my coworker to start would be by looking at the plots I defined in the model before sending it. Support for stored plots has, of course, also been included in the Wolfram Language. If a plot has been chosen as the preferred plot, a very neat one-liner in the Wolfram Language makes it easy to start exploring the model.

In Simulation Center, you will find a list of all stored plots in the experiment browser. You can list all the available plots with the Wolfram Language via the `"PlotNames"` property.
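In the Wolfram Language, listing the stored plots might look like the sketch below. The model and plot names are illustrative, and whether `WSMPlot` accepts a stored plot name directly is an assumption on my part.

```wolfram
Needs["WSMLink`"]

(* List the plots stored in the model; the model name is illustrative *)
WSMModelData["InvertedPendulum", "PlotNames"]

(* Simulate and show one of the stored plots; the plot name is illustrative *)
WSMPlot[WSMSimulate["InvertedPendulum"], "ControllerInputs"]
```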

Parametric plots can be stored and plotted.

Use the stored plot functionality to easily measure the response to changes in parameters.

A stored plot can consist of multiple plots.

One area where we have made heavy use of this new functionality is with our SystemModeler examples. On our webpage, we have for a long time provided a large selection of SystemModeler models collected from different industries and education areas. Whether it be the internal states of a digital adder or the heat flows in a freezer, these examples usually contain a lot of different things that you can study. Using stored plots, we have now added the most important plots for analyzing and understanding each example model.

Furthermore, the models that we have created over the years have now also been included directly in the product. Whether you want to get started with SystemModeler using models from your domain or study new concepts, the new curated models will be useful.

Now let’s return to the model my colleague just received. Suppose that he or she would like to perform some further analysis on it. A new set of templates has been included in order to facilitate this. The following command, for example, creates a template in Mathematica that allows you to change an input in real time and plot the response.

Just fill in the blanks, and the simulation models will come alive with real-time interaction in Mathematica.

Templates for many other tasks are available, such as FFT analysis, model calibration, parameter sweeps, initial-value sweeps and much more.

These are just some of the new, exciting model analytics and collaboration features in SystemModeler 4.3. For a more complete view, check out our What’s New page. If you try out the new SystemModeler, you will experience one of the things that I haven’t mentioned, namely that it is snappier and faster than before. Actually, performance has been improved across the board, including faster model compilation times and faster simulations from the Wolfram Language.

With its emphasis on cross-pollination, the Wolfram Data Summit has emerged as an exciting place to share insight into the subtle differences and unique challenges presented by data in different domains. New and unexpected points of commonality emerge from these conversations, allowing participants to trade solutions to emergent data problems.

This question of what to do with data was taken up by Lei Wu (Ancestry.com) and Cinzia Perlingieri (Center for Digital Archaeology), both of whom curate large data collections that preserve cultural memory. At Ancestry.com, Wu and his team of data scientists are reimagining digital genealogy as social networking. We use Facebook for friends and LinkedIn for professional relationships. What if we had a network—what he calls a “big tree”—for our shared genealogical history as well? This big tree engages users in the process of turning data into knowledge.

Like Wu, Perlingieri emphasized the importance of combining human insight with machine learning when it comes to doing things with data. Perlingieri and her team are searching for scalable methodologies that preserve cultural heritage. “We need a redefinition of data as a concept inclusive of alternative narratives and community-driven contributions to heritage,” she said.

Though Ancestry.com and the Center for Digital Archaeology serve very different clienteles, they share a user-focused approach to data curation and interpretation. Users create knowledge by interacting with archives, generating new connections between data elements in the process. These types of connections were explored by each of the Summit’s presenters, including Anthony Scriffignano, chief data scientist at Data Summit co-sponsor Dun & Bradstreet, whose talk delved into some of the “Things We Forget to Think About.”

Because we recognize the promise of computational knowledge for every industry, we will continue to expand the Wolfram Data Summit. The Wolfram Data Summit develops understanding of data at the level of software, hardware and technical processes. But at its core, the Wolfram Data Summit is about how we create and structure data, what kinds of insights can be derived from it and how we apply computational thinking. What kind of computational thinking do professionals use in different domains? How might computational thinking be applied across those industry boundaries? As the Wolfram Data Summit turns eight, we continue to search for answers to these questions and more.

Watch more videos from the 2016 Wolfram Data Summit, including Stephen Wolfram’s keynote address, here.

*This post has been updated to include video of Anthony Scriffignano’s talk.*

In a nutshell, `AskFunction` allows you to ask the user a series of questions—as in surveys, web orders, tax forms, quizzes, etc. To demonstrate, I have built a simple web form for my friends and family to RSVP to my upcoming party.

The first time I call `Ask["name"]`, the web form prompts the user for input. When I call `Ask["name"]` the second time, the `AskFunction` remembers the user’s previous answer and uses that instead of asking again. See the example for yourself here.
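A minimal sketch of such a form is below. The question tag `"name"` and the deployment path `"rsvp"` are illustrative, and the exact form specification is an assumption based on the description above.

```wolfram
(* A minimal RSVP form deployed to the cloud *)
CloudDeploy[
 AskFunction[
  Ask["name" -> "String"];          (* first call: prompts the user *)
  AskDisplay["Thanks for your RSVP, " <> Ask["name"] <> "!"]
                                    (* second call: reuses the stored answer *)
 ], "rsvp"]
```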

This example isn’t really that different from what you can already do with `FormFunction`; `Ask` is not intended to replace `FormFunction`. Where `Ask` begins to really shine over its older brother is when you want to skip or tailor questions based on previous responses.

In the event a friend or family member says they are not attending my party, I would like to give them the opportunity to reconsider their tragic mistake. This is where `AskConfirm` comes in: `AskConfirm` asks users if they are OK with their answer; if they say no, `AskConfirm` effectively rewinds the computation and gives them a second chance to answer differently. And we can make it so only those who said they aren’t attending are prompted for confirmation. Try it out.
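A sketch of that branching confirmation is below; the question tag and deployment path are illustrative, and the exact argument form of `AskConfirm` here is an assumption.

```wolfram
(* Only guests who answer "No" are asked to confirm *)
CloudDeploy[
 AskFunction[
  If[Ask["attending" -> {"Yes", "No"}] === "No",
   AskConfirm[]];                   (* rewinds if the user changes their mind *)
  AskDisplay["Attending: " <> Ask["attending"]]
 ], "rsvp-confirm"]
```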

And we can then ask more questions to those who are attending, such as whether they’ll be bringing a guest. There’s no need to ask those who aren’t attending if they are going to bring a guest—so we won’t! Try it out.

Notice that when I ask for the user’s name and the name of the guest, I now use `AskAppend` instead of `Ask`. `AskAppend["name"]` does exactly what it sounds like it does—it appends the new answer for `"name"` to the previous answer. Unlike `Ask`, `AskAppend` can ask the same question multiple times and build up a list of each answer. So if I first answer `AskAppend["name"]` with “Carlo” and then later answer with “Ada,” when I call `Ask["name"]` again, the answer will be: **{“Carlo”, “Ada”}**.
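The accumulating behavior can be sketched as follows; the question tags are illustrative.

```wolfram
(* AskAppend accumulates answers under the same key *)
AskFunction[
 AskAppend["name" -> "String"];              (* e.g. "Carlo" *)
 If[Ask["guest" -> {"Yes", "No"}] === "Yes",
  AskAppend["name" -> "String"]];            (* e.g. "Ada" *)
 AskDisplay[Ask["name"]]                     (* -> {"Carlo", "Ada"} *)
]
```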

I will also ask the user what sort of food they want—and once again, those who aren’t attending get to skip this question since it doesn’t apply. As an added bonus, I can utilize the built-in Wolfram Cloud storage and save the survey results to a `Databin` or `CloudExpression`. In less than ten minutes, I have built a functional web form with branching, using straightforward and declarative code. Here is the complete code, which you can try out in the cloud right now:

A few other new functions I didn’t mention include `AskedQ` and `AskedValue`. `AskedQ` lets you check if a question has been asked already. `AskedValue` works like `Ask`, except it will *not* prompt the user if they have not been asked yet; the function will instead return `Missing` for unanswered questions.

RSVP forms are far from the only thing you can build with `Ask`. We’ve been asked by many different users for this sort of control flow to be added to web forms. And we’re proud of how flexible this generic framework is. What I really love about the Wolfram Language is that behind a design that makes it simple to build 90% of applications, there is also deep customizability. With the basic tools of `Ask`, `AskConfirm`, `AskAppend` and `AskDisplay`, any problem with an “If this, do that” structure, or any flow chart, can be written as an `AskFunction` form. Here are just a few of the many applications you can now build.

*Troubleshooting Guides*

Have you ever needed to help friends or family members troubleshoot their computers? Build a troubleshooting guide web app and give your friends, family or clients an effective guide to fix their own problems.

*Tax/Legal Forms*

A lot of factors go into finances—are you married, do you have children, how much do you make or spend? If you don’t have dependents, you should be able to skip over any question about dependents. Try out our example web form for computing your marginal tax rate.

*Quizzes*

`FormFunction` works fine for quizzes, but maybe you want to display advanced questions only once a student has demonstrated mastery of the beginner questions. Or use `AskConfirm` and `AskAppend` to loop on a particular question until the student gives the correct answer.

`Ask` is still new, and the potential for what it can do is rapidly growing. Even now, we’re continuing to build on the `Ask` functionality and expand the functions in new directions. Coming down the pipeline are new projects that will bring the `Ask` syntax to different applications like chatbots and more. I’m genuinely proud of what we’ve accomplished so far and where we’re headed in the future. So if you’re interested in learning more about how to use these functions, please feel free to leave a comment!

Join us on October 27 for a series of talks, case studies and workshops that will equip you with the knowledge to make the most of Wolfram technologies.

Our experts will introduce our latest release, Wolfram Enterprise Private Cloud, which aims to shape the way organizations see and deliver computation, enabling users to derive vastly increased value from their big data for analytics, business intelligence and smart application development.

Special guest Jan Brugard of Wolfram MathCore will demonstrate how Mathematica and SystemModeler can be used together to model and analyze a wide range of systems, and will give industry examples such as:

- Optimizing bio-ethanol production
- Landing drones autonomously on ships
- Eliminating the need for biopsy when diagnosing problems with the liver

The conference will be a great opportunity to check out our latest technology, talk to our developers and get the chance to meet fellow technical specialists.

For more information and to reserve your space, click here.

See you there!

Matt shared how he was pretty new to coding before MHacks V. That was before Olivia, a web cartoonist who also studies mathematics, introduced him to hackathons and the Wolfram Language. They read our blog post about creating popular curves with Fourier series, and realized they could use the same idea to create drawing guides on the fly. The Wolfram Language, with built-in cloud technology and over 5,000 functions, proved perfect not only for bringing their hackathon idea to life but also, as Matt says, “for making it so easy to get in there and not be scared of programming.” Watch the video below as Olivia and Matt describe their journey to victory at MHacks V.

Since DrawAnything and MHacks V, Matt has continued to expand his programming abilities, while both Olivia and Matt have grabbed more hackathon prizes, including a trip to compete in Taiwan for HackNTU. And they have no doubts about keeping Wolfram technology at the top of their coding toolbox for future competitions.

Are you interested in doing a hackathon with Wolfram technology, need an idea or want to see some of the other winning hacks? Then visit this page to learn more about using Wolfram tech at hackathons.

Be sure to check out other Wolfram Language stories like Olivia and Matt’s on our Customer Stories pages.

A Mersenne prime is a prime number of the form *M*_{p} = 2^{p} – 1, where the exponent *p* is itself a prime number.

Mersenne claimed that 2^{p} – 1 was prime for primes *p* ≤ 257 only for *p* ∈ {2, 3, 5, 7, 13, 17, 19, 31, 67, 127, 257}. It is easy to verify where he was correct and where he was not, using the Wolfram Language function `PrimeQ`. `PrimeQ` uses modern prime testing methods that do not require finding a factor to prove a number to be composite.
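One way to run that check is to test each claimed exponent directly:

```wolfram
(* Check Mersenne's list: which claimed exponents actually give primes? *)
claimed = {2, 3, 5, 7, 13, 17, 19, 31, 67, 127, 257};
Select[claimed, PrimeQ[2^# - 1] &]
(* -> {2, 3, 5, 7, 13, 17, 19, 31, 127}; 67 and 257 were mistakes,
   and he also missed the exponents 61, 89 and 107 *)
```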

It is possible that his claim that *M*_{67} is prime was a typographical error for *M*_{61}. However, it is not hard to understand why primality testing was difficult in Mersenne’s time, since trial division was one of the few tools available. For example, for *M*_{257}, the smallest factor is a 15-digit number and, even with modern factoring methods, it is not easy to find. The Wolfram Language function `FactorInteger` uses advanced methods that enable it to factor large integers.

Some of the first advances in primality testing were accomplished by the great mathematician Leonhard Euler, who verified that *M*_{31} is prime sometime before 1772. He did this by showing that any prime divisor of *M*_{31} must be congruent to 1 or 63 (mod 248).

Such a relatively short list could be checked by trial division (by hand) in a reasonable amount of time in Euler’s day. His was an application of the Mersenne factor theorem, which states that if *q* is a divisor of *M*_{p}, then *q* ≡ 1 (mod 2*p*) and *q* ≡ ±1 (mod 8).

We can use this theorem to quickly find a factor of 2^{41} – 1. Note that *q* is a factor of 2^{p} – 1 if and only if 2^{p} ≡ 1 (mod *q*). This enables the use of `PowerMod`, which provides very efficient modular exponentiation.
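By the factor theorem, any divisor of 2^{41} – 1 has the form 82*k* + 1, so we can scan those candidates with `PowerMod`; the search limit of 200 is illustrative.

```wolfram
(* Candidate divisors of 2^41 - 1 must be of the form 82 k + 1 *)
Select[Table[82 k + 1, {k, 1, 200}], PowerMod[2, 41, #] == 1 &]
(* -> {13367}; indeed 13367 divides 2^41 - 1 = 13367 * 164511353 *)
```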

The following is a Mersenne number with 161,649 digits.

The next major advance was the discovery by Édouard Lucas of a clever method to test the primality of numbers of this form. He used his method in 1876 to verify that *M*_{127}, the largest Mersenne prime discovered before the age of computers, is prime. In the early twentieth century, after the understanding of binary arithmetic and algebra became widely known, Derrick Henry Lehmer refined Lucas’ method. The resulting Lucas–Lehmer primality test provides an efficient method of testing if a number of this form is prime. It does this by using the modular equivalence *k* ≡ (*k* mod 2^{p}) + ⌊*k*/2^{p}⌋ (mod 2^{p} – 1).

This means that *k* is congruent to the number represented by its lowest-order *p* bits plus the number represented by the remaining bits. This relation can be applied recursively until *k* < 2^{p} – 1.

Consider the following example with *p* = 23 and *k* = 1234567891. The lowest-order 23 bits give `BitAnd[k, 2^23 - 1]` = 1442515, and the remaining bits shifted to the lowest position give `BitShiftRight[k, 23]` = 147. Their sum, 1442662, is exactly `Mod[k, 2^23 - 1]`.

The function below encodes this method to compute *k* (mod 2^{p} – 1) using bit operations only (no division). Notice that 2^{p} – 1 has the binary form 111 … 111_{2}, all 1s and no 0s, so it also serves as a mask for the lower-order *p* bits of *k*.
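A sketch of such a function (the name `modMersenne` is my own) follows:

```wolfram
(* k mod (2^p - 1) using only bit operations: BitAnd[k, 2^p - 1] keeps the
   low p bits, BitShiftRight[k, p] gives the remaining high bits *)
modMersenne[k_Integer, p_Integer] := Module[{m = 2^p - 1, r = k},
  While[r > m, r = BitAnd[r, m] + BitShiftRight[r, p]];
  If[r == m, 0, r]]

modMersenne[1234567891, 23]
(* -> 1442662, the same as Mod[1234567891, 2^23 - 1] *)
```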

The following function encodes the Lucas–Lehmer primality test (LLT). We define the sequence *s*_{0} = 4, *s*_{i} = *s*_{i–1}^{2} – 2 (mod 2^{p} – 1); for an odd prime *p*, *M*_{p} is prime if and only if *s*_{p–2} ≡ 0 (mod *M*_{p}).
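A compact sketch of the test (the name `lucasLehmer` is my own):

```wolfram
(* Lucas-Lehmer test: M_p (p an odd prime) is prime iff s[p-2] == 0,
   where s[0] = 4 and s[i] = s[i-1]^2 - 2 mod (2^p - 1) *)
lucasLehmer[p_Integer] := Module[{m = 2^p - 1, s = 4},
  Do[s = Mod[s^2 - 2, m], {p - 2}];
  s == 0]

lucasLehmer /@ {7, 11, 13}  (* -> {True, False, True} *)
```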

Note: Experiments have shown that the runtime of these functions is dominated by the large integer arithmetic.

To efficiently test if 2^{p} – 1 is prime, it is better to first check for small prime divisors and to perform other basic primality testing. We first use the Mersenne factor theorem to screen out candidates with small divisors.

Here we present an extended version of `PrimeQ` that applies the Lucas–Lehmer test for large integers of the form 2* ^{p}* – 1.
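A self-contained sketch of that two-stage approach follows; the function name `mersennePrimeQ` and the trial-division search limit are my own choices, not the article’s.

```wolfram
(* Stage 1: trial division over factor-theorem candidates q = 2 k p + 1
   with q == ±1 (mod 8); stage 2: the Lucas-Lehmer test *)
mersennePrimeQ[p_ /; PrimeQ[p] && p > 2] :=
 Catch@Module[{m = 2^p - 1, q, s = 4},
   Do[q = 2 k p + 1;
    If[q q > m, Break[]];
    If[MemberQ[{1, 7}, Mod[q, 8]] && Mod[m, q] == 0, Throw[False]],
    {k, 1, 1000}];
   Do[s = Mod[s^2 - 2, m], {p - 2}];
   s == 0]

mersennePrimeQ /@ {31, 67, 127}  (* -> {True, False, True} *)
```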

The first Mersenne prime discovered by a computer running the Lucas–Lehmer test was *M _{521}*, found by Raphael M. Robinson on January 30, 1952, using the early vacuum tube-based computer SWAC (Standards Western Automatic Computer). The Williams tube memory unit of this computer, holding 256 words of 37 bits each, is shown below.

The 20^{th} Mersenne prime was discovered by Alexander Hurwitz in November of 1961, after running the Lucas–Lehmer test for about 50 minutes on an IBM 7090. We reproduce these early results below, using about 151 seconds of single-core computing time on a modern laptop.

One feature of the Wolfram Language that makes it suitable for this kind of work is its fast, large-integer arithmetic. This was a real challenge in the early days of computerized Mersenne prime searching. Researchers quickly adopted fast Fourier transform methods to convert the problem of multiplying two large integers, essentially a convolution of two lists of digits, into a simple element-by-element product of transformed digits. Fast integer multiplication is needed for the squaring step in the Lucas–Lehmer test. The Wolfram Language uses the latest platform-optimized algorithms to work with exact integer numbers with up to billions of digits. By way of example, we verify that the last of these, *M _{4423}*, is indeed a Mersenne prime and show all of its digits.

There is an interesting connection between Mersenne primes and perfect numbers. A perfect number is a number that is equal to the sum of all of its divisors (other than the number itself). Euclid showed that 2^{p–1}(2^{p} – 1) is perfect whenever 2^{p} – 1 is prime, and Euler finally proved that all even perfect numbers have the form *P* = 2^{p–1}(2^{p} – 1), where 2^{p} – 1 is a Mersenne prime.

We proceed to rediscover #21 = *M*_{9689}, #22 = *M*_{9941} and #23 = *M*_{11213}. These were all discovered by Donald B. Gillies running the LLT on an ILLIAC II during the spring of 1963 (the article can be found here). We use nearly 6 minutes of elapsed time to test all of the numbers of the form 2^{p} – 1 for primes 7,927 ≤ *p* ≤ 17,389.

We next extend the search to find #24 = *M*_{19937}, #25 = *M*_{21701} and #26 = *M*_{23209}. The last of these was discovered in February of 1979 by Landon Curt Noll and Laura Nickel. They searched the range *M*_{21001} to *M*_{24499} using 6,000 CPU hours on a CDC Cyber 174 (that article can be found here). Our computations are becoming sufficiently intense to warrant the use of parallel processing. Since the tests of the candidate exponents are independent, we can use `ParallelMap` to speed up the work. We check the range 17,393 ≤ *p* ≤ 27,449 in about three and a half minutes using 4 cores.

Notice how the specialized Lucas–Lehmer test is significantly faster than the more general function `PrimeQ` for these Mersenne primes.

We next test the range 27,457 ≤ *p* ≤ 48,611 to locate #27 = *M*_{44497}. This was discovered in April 1979 on a Cray-1 by Harry Nelson and his team. Our search of this range runs in about 15 minutes.

The next Mersenne prime is #28 = *M*_{86243}. It was discovered in September of 1982 by David Slowinski, also on a Cray-1. The Cray-1 supercomputer weighed about 5 tons, consumed about 115 kilowatts of power and delivered 160 MFLOPS of computing performance. It was supplied with 1 million 64-bit words of memory (8 megabytes), and cost about $16 million in today’s dollars. A detail of its significant cooling system is shown below. By comparison, a Raspberry Pi weighs a few ounces, runs on 4 watts, delivers about 410 MFLOPS and is provided with 1 gigabyte of RAM, all for about $40, and it comes with Mathematica.

The number *M*_{86243} has 25,962 digits. In 1 hour and 14 minutes we were able to find this value (on my laptop, not on a Raspberry Pi) by testing over the range 48,619 ≤ *p* ≤ 87,533.

Since we are now using serious computer time, we also produce a timestamp for each run. We now check the range 87,557 ≤ *p* ≤ 110,597. In 1 hour and 44 minutes, this reveals #29 = *M*_{110503}, first discovered on January 29, 1988, by Walter Colquitt and Luke Welsh running the LLT on an NEC SX-2 supercomputer (the article can be found here).

The next two Mersenne primes, *M*_{132049}, the 30^{th}, and *M*_{216091}, the 31^{st}, were actually discovered before #29, by the same team that discovered #28. They used a Cray X-MP to find #30 in September of 1983 and #31 in September of 1985. We verify #30 by searching the range 110,603 ≤ *p* ≤ 139,901. It took about 4 hours and 8 minutes to check every *M*_{p} in this range.

The discovery of the 34^{th} Mersenne prime, *M*_{1257787}, in September 1996 ended the reign of the supercomputer in the search for Mersenne primes. The next 15 were found by volunteers of the Great Internet Mersenne Prime Search (GIMPS), which runs a variant of the Lucas–Lehmer test as a background process on personal computers. This large-scale distributed computing project currently achieves a performance equivalent to approximately 300 teraflops, harnessing the otherwise idle time of more than 1.3 million computers.

We verify the 34^{th} Mersenne prime by directly using the Lucas–Lehmer test. We are reaching the limits of personal computer capability. Testing thousands of Mersenne numbers in this range would take many days. It is interesting to note that the Lucas–Lehmer test is often used as a stress test for the reliability of computer hardware and software, as even one arithmetic error among the billions of computations needed for testing one large prime will produce an incorrect conclusion, miss a true Mersenne prime or falsely report that a composite is prime. The fact that we have tested every *M _{p}* for primes between 2 and 139,901 is strong evidence for the reliability of large integer arithmetic and binary operations in Mathematica.

As we have seen, the possible factors of numbers of the form 2^{p} – 1 are limited by the Mersenne factor theorem. This has enabled an efficient computerized search for the factors of large integers of this form. Consider, for example, the 362-digit integer 2^{1201} – 1.

We can quickly find the first few factors of 2^{1201} – 1 using the Wolfram Language function `FactorInteger`.
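A sketch of that partial factorization, using `FactorInteger` with a limit on the number of factors to pull out:

```wolfram
(* Partial factorization: extract at most two factors of 2^1201 - 1
   without attempting the full (very hard) factorization *)
FactorInteger[2^1201 - 1, 2]
```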

The Wolfram Language has cataloged all of the Mersenne primes discovered to date, with the ordering verified up to #44. Access to this information is provided by the functions `MersennePrimeExponent` and `MersennePrimeExponentQ`.
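For example:

```wolfram
(* The exponents of the first ten Mersenne primes *)
MersennePrimeExponent /@ Range[10]
(* -> {2, 3, 5, 7, 13, 17, 19, 31, 61, 89} *)

MersennePrimeExponentQ[44497]  (* -> True: the exponent of #27 *)
```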

If you find this subject interesting, you can find more details at the following websites.

Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.


The conferences will have seminars on topics such as these:

- Machine learning and neural networks
- Data science
- Predictive analytics
- Cloud development
- Image processing
- Graph theory
- Applied mathematics

Each seminar will introduce Mathematica 11 and its new features, alongside a more in-depth talk with one of our specialists.

The tour will be a great opportunity to check out our latest technology, talk to our developers and get the chance to meet fellow technical specialists and Mathematica experts.

Each date will include several talks. For more information and to reserve your space, please visit the following webpages:

October 4, 2–5pm: Lyon

October 5, 9am–12pm and 2–4:30pm: Grenoble

October 6, 1:30–5:30pm: Paris

Guest speakers on the tour include Sander Huisman, an active contributor on Wolfram Community, who will discuss Mathematica by examples; Bruno Autin, who will share insights on *Geometrica*; and Alain Carmasol from Université de Lorraine, who will give a talk on Mathematica for engineers. Wolfram Research’s technical consultant Robert Cook will be available all three days, giving an overview on what’s new in Mathematica 11 and a talk on insight and prediction.

See you there!

First, we needed a brand-new atomic object in the language: the `Audio` object.


The `Audio` object is represented by a playable user interface and stores the signal as a collection of sample values, along with some properties such as sample rate.

In addition to importing and storing every sample value in memory, an `Audio` object can instead reference an external file, in which case all the processing is done by streaming the samples from the local or remote source. This allows us to deal with big recordings or large collections of audio files without the need for any special attention.

The file size of the two-minute Bach piece above is almost 50MB, uncompressed.

The out-of-core representation of the same file is only a few hundred bytes.


`Audio` objects can be created using an explicit list of values.
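For instance, a tone built from an explicit sample list (the frequency and sample rate here are illustrative):

```wolfram
(* One second of a 440 Hz sine tone from an explicit list of samples *)
Audio[Table[0.5 Sin[2 Pi 440 t], {t, 0, 1, 1/8000}], SampleRate -> 8000]
```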


Various commonly generated audio signals can be easily and efficiently created using the new `AudioGenerator` function, ranging from basic waveform and noise models to more complex signals.
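A couple of sketches of such generated signals; the model specifications follow the `AudioGenerator` documentation as I understand it.

```wolfram
(* Two seconds of a 440 Hz sine wave *)
AudioGenerator[{"Sin", 440}, 2]

(* Two seconds of pink noise *)
AudioGenerator["Pink", 2]
```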


The `AudioGenerator` function also supports pure functions, random processes and `TimeSeries` as input.


Now that we know what `Audio` objects are and how to create them, what can we do with them?

The Wolfram Language has a lot of native features for audio processing. As an example, we have complex filters at our disposal with very little effort.

Use `LowpassFilter` to make a recording less harsh.
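A sketch of that step, assuming `recording` is an `Audio` object; the cutoff frequency is illustrative.

```wolfram
(* Attenuate frequency content above roughly 2000 Hz *)
LowpassFilter[recording, Quantity[2000, "Hertz"]]
```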


`WienerFilter` can be useful in removing background noise.


A lot of audio-specific functionality has been developed for editing and processing `Audio` objects—for example, editing (`AudioTrim`, `AudioPad`, `AudioNormalize`, `AudioResample`), visualization (`AudioPlot`, `Spectrogram`, `Periodogram`), special effects (`AudioPitchShift`, `AudioTimeStretch`, `AudioReverb`) and analysis (`AudioLocalMeasurements`, `AudioMeasurements`, `AudioIntervals`).

It is easy to manipulate sample values or perform basic edits, such as trimming.

A fun special effect consists of increasing the pitch of a recording without changing the speed.
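A sketch of that effect, assuming `recording` is an `Audio` object:

```wolfram
(* Shift the pitch up by a factor of 2 (one octave), keeping the duration *)
AudioPitchShift[recording, 2]
```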


And maybe adding an echo to the result.


With a little effort, it is also possible to apply more refined processing. Let’s try to replicate what often happens at the end of commercials: speed up a normal recording without losing words.

We can start by deleting silent intervals.

Delete the silences from the recording.


Finally, speed up the result using `AudioTimeStretch`.


To make the result sound less dry, we can apply some reverberation using `AudioReverb`.


Much of the processing can be done by using the Wolfram Language’s arithmetic functions; all of them work seamlessly on `Audio` objects. This is all the code we need for amplitude modulation.
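A sketch of amplitude modulation via plain multiplication, assuming `recording` is an `Audio` object and that the generated modulator matches its sample rate:

```wolfram
(* Amplitude modulation: multiply the recording by a low-frequency sine *)
recording * AudioGenerator[{"Sin", 8}, Duration[recording]]
```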


Or you can do a weighted average of a list of recordings.


A lot of the analysis tasks can be made easier by `AudioLocalMeasurements`. This function can automatically compute a collection of features from a recording. Say you want to synthesize a sound with the same pitch and amplitude as a recording.


`AudioLocalMeasurements` makes the extraction of the fundamental frequency and the amplitude profile a one-liner.

Using these two measurements, one can reconstruct pitch and amplitude of the original signal using `AudioGenerator`.
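A rough sketch of that reconstruction, assuming `recording` is an `Audio` object; note that a faithful reconstruction would integrate the instantaneous frequency rather than use it directly in the sine’s phase, and the `"FundamentalFrequency"` measurement can return missing values for unvoiced frames.

```wolfram
(* freq and amp are TimeSeries returned by AudioLocalMeasurements *)
freq = AudioLocalMeasurements[recording, "FundamentalFrequency"];
amp = AudioLocalMeasurements[recording, "RMSAmplitude"];
AudioGenerator[amp[#] Sin[2 Pi freq[#] #] &, Duration[recording]]
```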


We get a huge bonus by feeding the results of `AudioLocalMeasurements` into the Wolfram Language’s advanced capabilities in many other fields.

Potential applications include machine learning tasks like classifying a collection of recordings.

And then there’s 3D printing! Produce a 3D-printed version of the waveform of a recording.

You can get an idea of the variety of applications at Wolfram’s Computational Audio page, or by looking at the audio documentation pages and tutorials.

Sounds are a big part of everyone’s life, and the `Audio` framework in the Wolfram Language can be a powerful tool to create and understand them.

In essence, EPC enables you to put computation at the heart of your infrastructure and in turn deliver a complete enterprise computation solution for your organization.

There are two strands to this blog post: what we’re delivering and why you’d want an enterprise computation solution and strategy.

Here’s how we got to today. For a few years, one of our key directions at Wolfram has been to build computation as a cloud service, so high-level computation (and computation-rich development) can be deliverable to everyone with the convenience of a cloud deployment. A couple of years ago we delivered our public cloud, a manifestation of core Wolfram technology delivered as a consumer service for professionals and individuals, but hosted by us.

EPC is the enterprise “privatization” with enhanced capabilities—taking this public Wolfram Cloud and packaging it up for hosting on any organization’s infrastructure or a designated host such as Amazon’s EC2. Instead of us offering the computation cloud service, you can, all within your enterprise. That means all the computation of Mathematica 11 and rapid application development of the Wolfram Language can now be server-side and cloud-based in your organization. High-level computation (for example, applied to your private data) can be an instant, ready-to-go, secure internal service for anyone you choose, with a wide range of interface modalities you can use directly for deploying from CEOs to developers, and instant APIs to go through other applications too.

Let me also point out the key principle that I believe marks out our technology as uniquely suitable for this centralized computation service model: we’re a unified, all-in-one system, not a collection of different systems for different tasks. We’ve put together all computational fields and functionality into one high-level, coherent Wolfram Language. We’re enabling complete interconnectedness. In a cloud-based service, lots of different systems means lots of separate “computational servers” to do different things—stats, reporting, modeling—causing huge switching losses, and that’s assuming you can get them installed and playing together at all for a given task or workflow. Disparate systems are a real killer for broad, computation-based productivity.

That’s why our technology is in general so suitable for a private cloud manifestation.

We’re also adding many technologies we’ve implemented specially for EPC—from pre-warmed APIs to intra-node parallelization.

In the end, EPC has one objective: to enable computation everywhere in your organization, whether through ease of advanced access by traditionally computational groups or newfound access outside those groups.

So at one level, Wolfram Enterprise Private Cloud is an (exciting) new product. But at another, I believe it’s something much more significant: the start of a fundamental shift in how organizations see and deliver computation for the enterprise.

What do I mean by this? Until recently, the use of high-level computation has been accessible only to a small number of specialists in most organizations. If you weren’t one of them, you really had three options: use basic computation (like Excel) yourself; rely on preordained, heavily constrained uses of computation; or seek out a specialist to build something custom or give you a one-off answer.

But computation is now very central to a huge number of organizational functions and the organization itself. It isn’t just for the specialist, many layers removed from the CEO; it’s too core for that. So likewise, it’s important to have an architecture for computation that matches this new reality. That means quality, security, command-and-control, coherence and consistent technology ability for computation need to be enterprise functions, not each decided ad hoc for each use or by each user. I’m describing the need for every organization to look at their enterprise computation strategy (which you can see explained more in our short video piece).

Here’s a typical example I come across. I’m visiting a bank and they ask, “Can the Wolfram Language make DLLs [dynamic link libraries] for Excel?” Digging into this request further, I find out that R&D is using the Wolfram Language for building prototypes that traders want to use through their familiar Excel interface. They’d like to package the DLLs up to hand to each trader instead of recoding. I ask, “But what happens if that R&D code has got a bug and the trader goes on running it? Or leaves and has taken a copy with them? How quickly can you even deploy this in practice? How is this wired directly to management reporting?”

The bank’s question betrays an “individual computation” way of thinking about the problem. The “enterprise computation” way would instead be for R&D to host an API on the private cloud and connect it to an Excel interface. R&D can update the cloud deployment anytime—there’s no DLL to be updated and redistributed, there’s no choice needed between computational ability and interface, there’s no translation or installation; the quality, tracking, auditing and security models are much easier to govern.
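To make the pattern concrete, here is a hedged sketch of what such a deployment could look like in the Wolfram Language. `FinancialDerivative` stands in for whatever pricing model R&D has actually built; the endpoint path, the fixed interest rate and the parameter names are all illustrative assumptions:

```wolfram
(* Sketch of the "enterprise computation" pattern: R&D deploys a pricing API
   to the private cloud; Excel (or any other client) calls it over HTTPS. *)
api = APIFunction[
   {"spot" -> "Number", "strike" -> "Number", "vol" -> "Number", "years" -> "Number"},
   FinancialDerivative[{"European", "Call"},
     {"StrikePrice" -> #strike, "Expiration" -> #years},
     {"InterestRate" -> 0.02, "Volatility" -> #vol, "CurrentPrice" -> #spot}] &,
   "JSON"];
CloudDeploy[api, "api/optionPrice", Permissions -> "Private"]
```

Updating the model is then a single redeployment to the same URL; the traders’ spreadsheets keep calling the same endpoint with no DLL to rebuild or redistribute.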

One key driver for enterprise computation is big data—you could even say big data is a killer first reason for enterprise computation. So many organizations now state that failing to get the best answers from their data is a core business-strategic issue. They have amassed huge amounts of data but not effective, imaginative and broad-based analytics and visualization. Data and analytics mustn’t be siloed but need to be shared between groups; a data analytics hub is needed. (Watch my live and interactive 2015 Thinking Digital talk about decisions and data and computation.)

When data analytics was a specialist function in organizations, using desktop software—ours particularly!—matched up fine. But now data analytics is a shared enterprise problem; you need to match it with a shared enterprise computation solution—starting with EPC. Only an enterprise model, not an individual desktop one, can sort out data analytics failings.

This change to an enterprise model is new for general computation, but the pattern is familiar from previous technological progress. Often there’s a question of whether the powerhouse should be distributed or centralized.

Think electrical power. In the very early days (mid-19th century), each user pretty much generated their own. Then centralized power stations were found more effective and efficient at delivering the widely varying requirements of each user. But to reach most people, they depended on a network, technological and engineering progress (e.g. transformers) and standardization so everything interoperated (e.g. the power grid). It may be that now, with photovoltaic cells and other small-scale power generation opportunities, we’re entering a hybridized power generation future with an optimized mixture of local and centralized generation.

With computing, we’ve flip-flopped from mainframe to PC and now to the hybridization of local and cloud computing—the web providing necessary networking standardization to make this a practical reality.

Yet since the mainframe, the high-end computation part of computing hasn’t adopted an enterprise or hybridized architecture. That’s the change we’re starting today with EPC: enterprise as well as local computation—elevating computation to a core service. Much more will follow from us, including a complete hybridized enterprise computation ecosystem.

One consequence I’m very happy about: how EPC empowers our many Wolfram technology enthusiasts to get colleagues’ and management’s attention for their great, innovative work. Almost any Wolfram Language results that have stayed local can now be deployed (all within organizational security policies) as ready-to-use computational power and knowledge-based programs to anyone with a web browser. EPC gives our existing users (and me!) a terrific answer to questions like “How do I use my Wolfram Language code in a production environment?”

EPC can deliver many things we’ve been asked for, but it can go further by resetting thinking about computation.

In particular, I’m finding real excitement in early briefings to CTOs, CIOs and others concerned with technology strategy about this architectural shift and EPC. Not everyone couches their current infrastructural challenges in these terms, but most agree they need a much more coherent enterprise computation strategy going forward.

That’s what Wolfram Enterprise Private Cloud and Wolfram can get you started with today.

Throughout the two weeks, students learned the Wolfram Language from faculty and a variety of guest speakers. They had the opportunity to see Stephen Wolfram give a “live experiment” and speak about the company, entrepreneurialism and the future of technology. Students also heard from guest speakers such as Etienne Bernard and Christopher Wolfram, who showed off other aspects of the Wolfram Language.

Although students spent a vast amount of time hard at work on their projects, they also had many laughs throughout the program. They participated in group activities such as the human knot, the Spikey photo scavenger hunt and the toothpick-and-gumball building contest, as well as weekend field trips to the Boston Museum of Science and the New England Aquarium.

The students completed phenomenal projects spanning a wide range of topics: geospatial analysis, textual analysis, machine learning and neural nets, physical simulations, pure math and much more. Here are just a few projects:

“Where Is Downtown?” by Kaitlyn Wang. This project uses cluster analysis and data from Yelp and Wikipedia obtained with ServiceConnect to estimate the polygon of a city’s downtown.

“Where Will Your Balloon Go?” by Cyril Gliner. This project uses WindVectorData to simulate where a balloon would travel when let go at a given time and location on Earth.

“Tiling Polyominoes Game,” by Jared Wasserman. This drag-and-drop game asks the user to place the polyominoes on the right to cover all the gray areas on the left without overlapping the tiles.

“Automatic Emoji Annotator!” by Marco Franco. This project imported over 50,000 tweets to create a neural network that gives the emojis that best represent a sentence.

“Automated Face Anonymizer,” by Max Lee. This is perhaps the project I found to be the most fun, only because it involved me. It anonymizes an image by replacing faces with my head.

This word cloud represents the most common Wolfram Language symbols the students collectively used in their projects:

Here are the frequencies of the 30 symbols most commonly used by the students. The first few symbols were used so frequently that a log scale is needed:

How do these frequencies compare with normal, everyday usage of the Wolfram Language? We can answer this with the `WolframLanguageData` property `"Frequencies"`. It turns out the usage frequencies from camp versus normal usage have a correlation coefficient of about 0.8. Here’s how the first few symbols compare:
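A comparison along these lines could be sketched as follows. Here `campCounts` is a hypothetical association of symbol name to use count, tallied from the students’ notebooks; only the `WolframLanguageData` property is taken from the text above:

```wolfram
(* Hedged sketch: correlate camp symbol usage with global usage frequencies.
   campCounts is assumed, e.g. <|"Map" -> 120, "Table" -> 95, ...|>. *)
symbols = Keys[campCounts];
campFreqs = N[Values[campCounts]/Total[campCounts]];      (* normalize to frequencies *)
globalFreqs = WolframLanguageData[symbols, "Frequencies"]; (* everyday usage *)
Correlation[campFreqs, globalFreqs]
```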

Lastly, we can use the `WolframLanguageData` property `"RelatedSymbols"` and `CommunityGraphPlot` to group the symbols used by the students into clusters based on topic. The result shows how eclectic the projects of this group of 39 students were:
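The clustering step could be sketched like this; the short `symbols` list is a placeholder for the full list of symbols actually used in the projects:

```wolfram
(* Hedged sketch: build a relatedness graph among the used symbols and
   plot its communities. Only edges between used symbols are kept. *)
symbols = {"Import", "Map", "Table", "ListPlot", "Classify", "ImageResize"};
related[s_] :=
  Intersection[CanonicalName /@ WolframLanguageData[s, "RelatedSymbols"], symbols];
edges = DeleteDuplicates[
   Flatten[Function[s, UndirectedEdge[s, #] & /@ related[s]] /@ symbols]];
CommunityGraphPlot[Graph[symbols, edges]]
```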