Over the past few months, Wolfram Community members have been exploring ways of visualizing the known universe of Wikipedia knowledge. From Bob Dylan’s networks to the persistence of “philosophy” as a category, Wolfram Community has been asking: “What does knowledge actually *look like* in the digital age?”

Mathematician Marco Thiel explored this question by modeling the “Getting to Philosophy” phenomenon on Wikipedia. “If you start at a random Wikipedia page, click on the first link in the main body of the article and then iterate, you will (with a probability of over 95%) end up at the Wikipedia article on philosophy,” Thiel explains. Using `WikipediaData`, he demonstrates how you can generate networks that describe this phenomenon.

He documents that about 94% of all Wikipedia articles lead to the “Philosophy” page if one follows the links as instructed, generating in the process some mesmerizing and elegant visualizations of the way that we categorize information.
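A minimal sketch of the iteration in the Wolfram Language (with the simplifying assumption that the first entry of `WikipediaData[article, "LinksList"]` stands in for the first link in the article body; Thiel's actual code parses the article text to find that link):

```wl
(* Follow a chain of "first" links until we reach "Philosophy" or give up. *)
(* Simplification: the first element of "LinksList" is used as a stand-in  *)
(* for the first link in the article body.                                 *)
firstLinkChain[start_String, maxSteps_Integer] :=
 NestWhileList[
  First[WikipediaData[#, "LinksList"], #] &,
  start,
  # =!= "Philosophy" &,
  1,
  maxSteps]
```

Mapping `firstLinkChain` over a random sample of titles and building a `Graph` from the successive pairs reproduces the kind of network Thiel visualizes.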

University student Andres Aramburo also touched on the theme of Wikipedia categories by developing a method for clustering Wikipedia articles by topic. He began by taking a random sample of Wikipedia articles using a Wolfram Language function that he created for this specific task. He then used the links in and out of these articles to generate a graph of the relationships between them. “It’s not a trivial task” to determine if two articles are related to one another, he notes, since “there are several things that can affect the meaning of a sentence, semantics, synonyms, etc.” His visualizations include radial plots of the relationships between articles and word clouds listing shared words for related articles.

One final thread worth highlighting is Community’s celebration of the decision to award Bob Dylan the Nobel Prize in Literature. Wolfram’s own Vitaliy Kaurov created the visualization of the “Universe of Bob Dylan” featured at the top of this post. Alan Joyce (Wolfram|Alpha) generated a graph that compares the lengths of Dylan’s songs (in seconds) to the years in which they were recorded.

And first-time Wolfram Community participant Amy Friedman uploaded her submission from the 2016 Wolfram One-Liner Competition, an amusing word cloud of the poet’s songs in the shape of a guitar.

What new ways of visualizing Wikipedia knowledge can you dream up? With built-in functions like `WikipediaData` and `WikipediaSearch`, the Wolfram Language is the perfect tool for exploring Wikipedia data. Show us what you can do with those functions and more on Wolfram Community. We can’t wait to see what you create!

To some degree, we’ve been working on a Wolfram notebook front end for iOS for about six years now. And in the process, we’ve learned a lot about notebook front ends, a thing we already knew a lot about. Let’s rewind the tape a bit and review.

In 1988, Theo Gray and Stephen Wolfram conceived the idea of the notebook front end for use with the Mathematica system. My first exposure to it was in 1989, when I first saw Mathematica running on a NeXT machine at university. That was 27 years ago. Little did I suspect that I would someday be spending 20 years of my life (and counting) working on it.

It’s interesting to see how relevant today the basic concepts we started with are. We have a document we refer to as a notebook. The notebook is structured into cells. Cells might be designated for headings, narrative, code or results. Cells with code are considered input, which generate outputs inline in the document. While the word “notebook” evokes the idea of a laboratory notebook, it easily encompasses educational documents, literate programs, academic papers, generated reports and experimental scratch pads.

One might have thought that the web would make notebooks obsolete. HTML exposes many concepts similar to notebooks’, at a lower level. Editing environments such as various Markdown editors or the WordPress environment expose many of these concepts at a higher level. But those environments don’t accomplish inline computation, and the world is increasingly recognizing how much inline computation with immediate feedback really matters.

And even the notion of what “computation” is has evolved over time. It seems that in the 1990s and 2000s, we were in a cycle where many in the software field thought that inline computation was merely for a few math tricks of the sort that could be done by Mathematica or Excel, while hardcore computation required some sort of compile or deployment step. I remember the immediate feedback of line-by-line programming from my youth in the 1980s, although it had actually begun much earlier. But by the time I graduated university, this wasn’t considered “serious programming” anymore. In computer science, there’s a fancy name for this: a read–evaluate–print loop, or REPL. And while the REPL fell out of fashion, the humble notebook continued to present its REPL-plus-narrative structured content.

In 2010, as iPhones and iPads evolved into general computing platforms, they became an obvious platform for notebooks. But iOS came with some very different constraints from the desktop system—so much so that it seemed an impossible task to try to adapt our existing notebook technology to the platform. So, we decided to try to recreate the notebook experience from scratch in a way that both fit within the constraints of the platform and played to its strengths. Seemed straightforward. Cells. Evaluation. Maybe some basic `Manipulate` support. Surely it wouldn’t take long to get that up and running, we thought.

That was the second notebook front end. Since then, there have been others. A short while later, another Wolfram development team started contacting me, asking about notebook front ends. Turned out they were working on this web thing, and wanted a bit of advice. But it couldn’t be that hard, we figured. A small skunkworks project.

Even outside Wolfram’s doors, people were adapting to notebook-oriented computing. REPLs started becoming fashionable in software development circles again. Variants of Markdown started to become the language of document creation on the web, and many of those documents looked a lot like notebooks. There was even a significant open-source project that recreated some of our major concepts, down to the use of cells and the Shift + Enter evaluations.

All of these projects ran into some trouble, though. It turns out that the “cells and notebooks” concept wasn’t as easy to recreate as we all thought it would be. Above and beyond the basic technology hurdles, it turns out that we had evolved notebooks to do things we’re no longer willing to sacrifice.

Notebooks today support typesetting. Mathematical typesetting. Typesetting of code. Typesetting that you can properly interact with. Some of this is about the math. Typeset math has had broad appeal to our users, well beyond a core math education market. But making the math and the code coexist while remaining fully interactive and easy to read is a challenge requiring extreme attention to a large number of details.

Notebooks today are dynamic documents that can be generated, transformed and manipulated at the language level. Many core features rely on being able to read and write raw notebook content from the language, and we quickly discovered just how many notebooks in the wild would stop working without this functionality. We use this functionality prolifically to enable our user interfaces. In retrospect, this shouldn’t be too surprising, since we’ve been actively exploiting this capability of our product since 1996.

Notebooks today offer a truly interactive experience with computation. We’ve been doing this since 2007 with `Manipulate` and `Dynamic` functionality, but it’s easy to undersell the achievement. When I slide a slider, I learn so much more about what I’m doing from instant feedback than I do from waiting for a web server to respond. Our devices are so powerful today that there is simply no reason for me to wait for computation when it can and often should be done on my local device. And we can create incredibly sophisticated and general interactive interfaces with just a single line of code. The ability to do this allows for novel applications. For example, I find myself using `Manipulate` to understand bugs and test features while doing notebook front end development.
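For instance, a single line of code yields a complete interactive interface:

```wl
(* One line: a plot of Sin[n x] with an instantly responsive slider for n *)
Manipulate[Plot[Sin[n x], {x, 0, 2 Pi}], {n, 1, 10}]
```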

And that brings us to today. iOS has been a challenging platform to bring a proper notebook experience to, and in order to do so, we’ve been creating a brand-new front end from scratch. In terms of CPU power and memory, this new front end is running on the most diminished platform we support today, but we’ve worked hard not to sacrifice the notebook experience. More than any previous front end, this new notebook front end uses its environment incredibly efficiently. It uses more CPU cores, less energy and less memory to get its job done.

And so what we have today is a product that displays and plays notebook content that we’re extremely proud of. It’s just entered into beta for the iPad, and we’re hoping to have a version that works well on the iPhone coming out soon. It’s been a long time coming, but I think the technology we’ve developed is worth the wait.

You can sign up for the beta version at wolfram.com/iosbeta.


In his book *Idea Makers*, Stephen Wolfram devotes a chapter to Leibniz. Wolfram visited the Leibniz archive in Hanover and wrote about it:

Leafing through his yellowed (but still robust enough for me to touch) pages of notes, I felt a certain connection—as I tried to imagine what he was thinking when he wrote them, and tried to relate what I saw in them to what we now know after three more centuries…. [A]s I’ve learned more, and gotten a better feeling for Leibniz as a person, I’ve realized that underneath much of what he did was a core intellectual direction that is curiously close to the modern computational one that I, for example, have followed.

Leibniz was an early visionary of computing, and built his own calculator, which Wolfram photographed when he visited the archive.

In a recent talk about AI ethics, Wolfram talked more about how Leibniz’s visions of the future are embodied in current Wolfram technologies:

Leibniz—who died 300 years ago next month—was always talking about making a universal language to, as we would say now, express [mathematics] in a computable way. He was a few centuries too early, but I think now we’re finally in a position to do this…. With the Wolfram Language we’ve managed to express a lot of kinds of things in the world—like the ones people ask Siri about. And I think we’re now within sight of what Leibniz wanted: to have a general symbolic discourse language that represents everything involved in human affairs….

If we look back even to Leibniz’s time, we can see all sorts of modern concepts that hadn’t formed yet. And when we look inside a modern machine learning or theorem proving system, it’s humbling to see how many concepts it effectively forms—that we haven’t yet absorbed in our culture.

The Wolfram Language is a form of philosophical language, what Leibniz called a *lingua generalis*, a universal language to be used for calculation. What would Leibniz have made of the tools we have today? How will these tools transform our world? In his essay on Leibniz, Wolfram mulls this over:

In Leibniz’s whole life, he basically saw less than a handful of computers, and all they did was basic arithmetic. Today there are billions of computers in the world, and they do all sorts of things. But in the future there will surely be far far more computers (made easier to create by the Principle of Computational Equivalence). And no doubt we’ll get to the point where basically everything we make will explicitly be made of computers at every level. And the result is that absolutely everything will be programmable, down to atoms. Of course, biology has in a sense already achieved a restricted version of this. But we will be able to do it completely and everywhere.

Leibniz was also a major figure in philosophy, best known for his contention that we live in the “best of all possible worlds,” and his development in his book *Monadology* of the concept of the *monad*: an elementary particle of metaphysics that has properties resulting in what we observe in the physical world.

Wolfram speculates that the concept of the monad may have motivated Leibniz’s invention of binary:

With binary, Leibniz was in a sense seeking the simplest possible underlying structure. And no doubt he was doing something similar when he talked about what he called “monads”. I have to say that I’ve never really understood monads. And usually when I think I almost have, there’s some mention of souls that just throws me completely off.

Still, I’ve always found it tantalizing that Leibniz seemed to conclude that the “best of all possible worlds” is the one “having the greatest variety of phenomena from the smallest number of principles”. And indeed, in the prehistory of my work on *A New Kind of Science*, when I first started formulating and studying one-dimensional cellular automata in 1981, I considered naming them “polymones”—but at the last minute got cold feet when I got confused again about monads.

Despite being the daughter of a physicist and having heard about elementary particles since infancy, I am a bit boggled by the concept of the monad. As I contemplate Leibniz’s strange bridge between metaphysics and such things as electrons or the mathematical definition of a point, I am reminded of lines from *Candide*, a book Voltaire wrote satirizing the notion that we live in the best of all possible worlds:

“But for what purpose was the earth formed?” asked Candide.

“To drive us mad,” replied Martin.

Yet knowledge is increasingly digitized in the twenty-first century, a process that relies on that binary language Leibniz invented. I think perhaps that if monads as such did not exist in Leibniz’s time, it may have become necessary to invent them.

Participants in the competition submit 128 or fewer tweetable characters of Wolfram Language code to perform the most impressive computation they can dream up. We had a bumper crop of entries this year that showed the surprising power of the Wolfram Language. You might think that after decades of experience creating and developing with the Wolfram Language, we at Wolfram Research would have seen and thought of it all. But every year our conference attendees surprise us. Read on to see the amazing effects you can achieve with a tweet of Wolfram Language code.

Amy Friedman: “The Song Titles” (110 characters)

Amy calls this homage to the 2016 Nobel Laureate in Literature her contribution to “the nascent field of Bob Dylan analytics.” She writes further, “I started teaching myself how to code in the Wolfram Language yesterday after breakfast, with the full encouragement of my son and aided solely by Stephen Wolfram’s *Elementary Introduction to the Wolfram Language*.”

Amy’s helpful son, Jesse, is the youngest-ever prize winner in our One-Liner Competition. In 2014, at the age of 13, he took second place.

George Varnavides: “An SE Legacy Re-imagined as a Self-Triggering Dynamic” (128 characters)

*(faster than actual speed)*

Order proceeds from chaos in this hypnotic simulation that appealed to the judges’ inner physicists. Points evenly distributed in a spherical volume slowly evolve thread-like structures as they migrate toward target points.

Stephan Leibbrandt: “Projections” (128 characters)

This impressively compact implementation of a smooth transition between map projections gave the judges an “Aha!” moment as they perceived the relationship between orthographic and Mercator projections. Stephan’s key insight in producing a submission that is graphically engaging as well as instructive is that the structure of a map’s geometric data is the same, regardless of the projection.

Manuel Odendahl: “Quilt Pattern” (128 characters)

Manuel writes that he generated this graphic as a quilt pattern for his girlfriend. The judges were impressed by its combination of repetition and variety. No word yet on whether Manuel’s girlfriend has succeeded in assembling the 6,000 quilt squares cut from 645 different colors of fabric.

George Varnavides: “Symmetry in Chaos” (128 characters)

Achieving this graphically appealing image required some clever coding tricks from George, including factoring out the function slot `c` and naming the range `a` so that it could be compactly reused. Binning the points generated by an iterated function and plotting the log of the bin counts yields the refined graphical treatment in the result.

David Debrota: “Transcendental Pattern” (123 characters)

Starting with three million digits of the transcendental number `E`, David’s deft application of a series of image processing functions yields this visual representation of the randomness of the digits.

Abby Brown: “Happy Halloween!” (127 characters)

A timely entry, given that Halloween—celebrated in the United States with pumpkins—was little more than a week after the conference. Abby takes skillful advantage of the Wolfram Language’s default plotting colors. In a plot of multiple functions, pumpkin orange is the first default color. The second is blue, which isn’t appropriate for a pumpkin’s stem. But by bumping the stem function to third place with `Nothing`, Abby achieved a green stem and squeaked in just under the 128-character limit.

Philip Maymin: “Mickey Mousical” (125 characters)

Thanks to Philip, you no longer have to travel to Disneyland to get your mouse ears. All you need is the Wolfram Language!

Richard Scott: “How to Count to One Million in Two Minutes in 100 Easy Steps” (128 characters)


It was slightly embarrassing to have to award a (Dis)Honorable Mention to one of our distinguished Innovation Award winners. But Richard’s helpful two-minute timer drove the judges nuts with its incessant counting and prompted them to warn each other not to evaluate that one.

I must point out its ingenious construction, though, which Richard helpfully illustrated with this image:

Shishir Reddy, Alex Krotz: “Face Projection” (104 characters)

The third-place prize went to two of Abby Brown’s high-school students, whom she brought to the conference to present work they had done in her Advanced Topics in Mathematics class (taught with the Wolfram Language). Shishir and Alex made an amusing video transformation that, in real time, pastes the face of the person on the left onto the person on the right, making them virtual twins.

Snapchat, watch out. Here comes Mathematica!

Michael Sollami: “Neural Hypnosis” (128 characters)

Michael Sollami took second place with this unusual and visually stunning application of the neural net functionality that debuted in Version 11.

After viewing the animation for a short time, the judges were glassy-eyed and chanting in unison, “Second place! Second place! …”. Dunno, Michael. Bug in your code somewhere?

Philip Maymin: “Circle Pong” (128 characters)

*(5x actual speed)*

Philip Maymin’s winning entry packs an impressive load of functionality into 128 characters of code. Not only does it implement a complete and thoroughly playable game of solitaire Pong (“Shorter, rounder and more fun than the original.”), it encourages you to play dangerously by rewarding you with bonus points if you almost let the “ball” escape before swooping in to deflect it.

A brilliant and creative combination of features implemented concisely with complex arithmetic, the game nearly derailed the One-Liner judges, who had to be reminded to stop playing Pong and get back to work.

There were many more impressive contributions than we had time to recognize in the awards ceremony. You can see all of the submissions in this signed CDF. (New to CDF? Get your copy for free with this one-time download.) There’s a wealth of good ideas to take away for anyone willing to invest a little time understanding the code.

Thanks to all who participated and impressed us with their coding chops and creativity. Come again next year!

- “We had a nearly 4-billion-time speedup on this code example.”
- “We’ve worked together for over 9 years, and now we’re finally meeting!”
- “Coding in the Wolfram Language is like collaborating with 200 or 300 experts.”
- “You can turn financial data into rap music. Instead, how about we turn rap music into financial data?”

As a first-timer from the Wolfram Blog Team attending the Technology Conference, I wanted to share with you some of the highlights for me—making new friends, watching Wolfram Language experts code and seeing what the Wolfram family has been up to around the world this past year.

I was only able to attend one talk at a time, and with over a hundred talks going on over three days, there was no way I could see everything—but what I saw, I loved. Tuesday evening, Stephen Wolfram kicked off the event with his fantastic keynote presentation, giving an overview of the present and future of Wolfram Research, demoing live the new features of the Wolfram Language and setting the stage for the rest of the conference.

The nice thing about the Technology Conference is that if you’ve had a burning question about how something in the Wolfram Language works, you won’t get a better opportunity to ask the developers face to face. When someone in the audience asked about storing chemical data, the panel asked, “Is Michael Trott in the room?” And sure enough, Michael Trott was sitting a few seats down from me, and he stood up and addressed the question. Now that’s convenient.

Probably my favorite speaker was Igor Bakshee, a senior research associate here at Wolfram. He described our new publish-subscribe service, the Channel Framework, which allows asynchronous communication between Wolfram systems without dealing with the details of specific senders and receivers. I especially appreciated Igor’s humor and patience as messages came in from someone in the audience: he raised his hands and insisted it was indeed someone else sending them.

This talk was the one I was most looking forward to, and it was exactly what I wanted. Jakub Kabala talked about how he used Mathematica to compare 12th-century Latin texts in his search to determine if the monk of Lido and Gallus Anonymus were actually the same author. Jakub’s talk will also be in our upcoming virtual conference, so be sure to check that out!

It would be downright silly of me to not mention the extremely memorable duo Thomas Carpenter and Daniel “Scantron” Reynolds. The team used Wolfram Language code and J/Link to infuse traditional disc jockey and video jockey art with abstract mathematics and visualizations. The experience was made complete when Daniel passed special glasses throughout the audience.

We had the best Wolfram Language programmers all in one place, so of course there had to be competitions! This included both our annual One-Liner Competition and our first after-hours live coding competition on Wednesday night. Phil Maymin won both competitions. Incidentally, in between winning competitions, Phil also gave an energetic presentation, “Sports and eSports Analytics with the Wolfram Language.” Thanks to everyone who participated. Be sure to check out our upcoming blog post on the One-Liner Competition.

Thursday night at Stephen’s Keynote Dinner, six Wolfram Innovator Awards were given out. The Wolfram Innovator Award is our opportunity to recognize people and organizations that have helped bring Wolfram technologies into use around the world. Congratulations again to this year’s recipients, Bryan Minor, Richard Scott, Brian Kanze, Samer Adeeb, Maik Meusel and Ruth Dover!

Like many Wolfram employees around the world, I usually work remotely, so a big reason I was eager to go to the Wolfram Technology Conference was to meet people! I got to meet coworkers that I normally only email or talk on the phone with, and I got to speak with people who actually use our technologies and hear what they’ve been up to. After almost every talk, I’d see people shaking hands, trading business cards and exchanging ideas. It was easy to be social at the Technology Conference—everyone there shared an interest in and passion for Wolfram technologies, and the fun was figuring out what that passion was. And Wolfram gave everyone plenty of opportunities for networking and socializing, with lunches, dinners and meet-ups throughout the conference.

Attending the Wolfram Technology Conference has been the highlight of my year. The speakers were great across the board, and a special thanks goes to the technical support team that dealt with network and display issues in stride. I strongly encourage everyone interested in Wolfram technologies to register for next year’s conference, and if you bump into me, please feel free to say hi!


When he was working at Enova, Slaughter used the Wolfram Language to build Colossus, an analytics engine that provides Enova’s clients in the financial services industry with instantaneous risk and credit analysis. Slaughter’s team was looking for a programming language that would allow them to deploy software changes without involving the entire engineering team in each new change. The Wolfram Language streamlines the process and saves countless hours of development work by letting teams communicate more effectively across the development process, prototype and deploy ideas quickly, and avoid using multiple systems to process internal and external data.

In a talk at the 2015 Wolfram Technology Conference, Slaughter’s colleague Vinod Cheriyan explained that streamlining the production process enables Colossus to significantly outperform its predecessor. Colossus can deploy a model to production in just one and a half to two weeks, where its predecessor would typically take one to one and a half months.

Slaughter’s team also used Mathematica to efficiently manage Enova’s large database of XML credit reports. Credit agencies give Enova reports as XML with metadata packaged as a PDF or Word document. Slaughter’s team replaced a slower procedural approach for merging data with the Wolfram Language’s functional approach, where pattern matching and rule application allowed them to achieve the same result two orders of magnitude faster.
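As an illustration of the idiom (the element and attribute names here are hypothetical, not Enova’s actual schema), pattern matching over symbolic XML replaces an explicit traversal loop:

```wl
(* Hypothetical mini-report; real credit reports are far larger *)
report = ImportString[
   "<report><score type=\"fico\">720</score><score type=\"vantage\">698</score></report>",
   "XML"];

(* Extract every score element with a single rule, no explicit loop *)
Cases[report,
 XMLElement["score", {___, "type" -> t_, ___}, {v_}] :> (t -> ToExpression[v]),
 Infinity]
(* {"fico" -> 720, "vantage" -> 698} *)
```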

When we talked with Slaughter about why he prefers the Wolfram Language, he mentioned its power both as a programming language and as a computation engine. By using the Wolfram Language, he is able to dramatically streamline his team’s workflow, bringing testing and production into one efficient system.

Be sure to check out other Wolfram Language stories like Chad Slaughter’s on our Customer Stories pages.

I’ll start by talking about our improvements in collaboration. I develop lots of models in SystemModeler, and when I do, I seldom develop them in a vacuum. Either I send a model to my colleagues for them to use, I receive one from them or models get sent back and forth while we work on them together. This is, of course, also true for novice users. A great way to learn how to use SystemModeler—or any product, for that matter—is to look at things other people have done, whether it be a coworker or other users online, and build upon that.

Whether you send your models to other people, receive models or send models between your own platforms, we want to make sure that you have everything you need to start using the model, straight out of the box.

As an example, I have built a model of an inverted pendulum using the PlanarMechanics library. It has a linear-quadratic regulator built using the Modelica Standard Library, and it also includes components from the ModelPlug library that connect to real-life hardware, such as actuators and sensors on an Arduino board (or any other board following the Firmata protocol).

In the model, you can apply a force to different parts of the pendulum using input from an Arduino board. When simulated, the model produces an automatically generated animation.

As a developer of this model, I usually know of quite a few things that will be interesting to plot. In this particular case, for example, you can create interesting results by studying the different forces acting on the pendulum and the different states of the controller. In SystemModeler 4.3, you can predefine plots in a model. After choosing a set of variables to plot, simply right-click, choose “Add Plot to Model” and give the plot a name, e.g. ControllerInputs.

Now the stored plot can easily be accessed each time the model is simulated.

Even if model parameters or the model structure are changed, the plots will remain and be available the next time you need to use the model. Storing plots is useful not only when you revisit models you have built yourself, but also when you share models with others or receive them.

Now, let me save this model and send it to a colleague. Previously I would have needed to make sure that they had all the resources to run the model, including all the libraries I have used. In SystemModeler 4.3, I can now easily save all this in one convenient file with the improved Save Total Model feature. Everything needed, including libraries, stored plots and animation objects, will be available for the person who receives the file.

So a coworker receives my model—how would he or she begin analyzing it? In SystemModeler 4.3, we have introduced new model analytics features that help answer that question. Starting out, we can get a quick look at the model using the new summary property for `WSMModelData`.

The pie chart shows what percentage of the components come from each domain. The majority of the components come from the dark blue slice, the PlanarMechanics library. In Mathematica, you can mouse over the slices to see the domain name.

Another good place for my coworker to start would be the plots I defined in the model before sending it. Support for stored plots has, of course, also been included in the Wolfram Language. If a plot has been chosen as the preferred plot, a very neat one-liner in the Wolfram Language makes it easy to start exploring the model.

In Simulation Center, you will find a list of all stored plots in the experiment browser. You can list all the available plots with the Wolfram Language via the `"PlotNames"` property.
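In the Wolfram Language, that might look like the following sketch (the model name is illustrative, and I am assuming the WSMLink functions of this release accept a stored plot name in `WSMPlot`):

```wl
Needs["WSMLink`"]

(* List the plots stored in the model (model name is illustrative) *)
WSMModelData["InvertedPendulum", "PlotNames"]

(* Simulate, then show a stored plot by name *)
sim = WSMSimulate["InvertedPendulum", {0, 10}];
WSMPlot[sim, "ControllerInputs"]
```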

Parametric plots can be stored and plotted.

Use the stored plot functionality to easily measure the response to changes in parameters.

A stored plot can consist of multiple plots.

One area where we have made heavy use of this new functionality is with our SystemModeler examples. On our webpage, we have for a long time provided a large selection of SystemModeler models collected from different industries and education areas. Whether it be the internal states of a digital adder or the heat flows in a freezer, these examples usually contain a lot of different things that you can study. Using stored plots, we have now added the most important plots for analyzing and understanding each example model.

Furthermore, the models that we have created over the years have now also been included directly in the product. Whether you want to get started with SystemModeler using models from your own domain or study new concepts, the new curated models will be useful.

Now let’s return to the model my colleague just received. Suppose that he or she would like to perform some further analysis on it. A new set of templates has been included in order to facilitate this. The following command, for example, creates a template in Mathematica that allows you to change an input in real time and plot the response.

Just fill in the blanks, and the simulation models will come alive with real-time interaction in Mathematica.

Templates for many other tasks are available, such as FFT analyses, model calibrations, parameters, initial value sweeps and much more.

These are just some of the new, exciting model analytics and collaboration features in SystemModeler 4.3. For a more complete view, check out our What’s New page. If you try out the new SystemModeler, you will experience one of the things that I haven’t mentioned, namely that it is snappier and faster than before. Actually, performance has been improved across the board, including faster model compilation times and faster simulations from the Wolfram Language.

With its emphasis on cross-pollination, the Wolfram Data Summit has emerged as an exciting place to share insight into the subtle differences and unique challenges presented by data in different domains. New and unexpected points of commonality emerge from these conversations, allowing participants to trade solutions to emergent data problems.

This question of what to do with data was taken up by Lei Wu (Ancestry.com) and Cinzia Perlingieri (Center for Digital Archaeology), both of whom curate large data collections that preserve cultural memory. At Ancestry.com, Wu and his team of data scientists are reimagining digital genealogy as social networking. We use Facebook for friends and LinkedIn for professional relationships. What if we had a network—what he calls a “big tree”—for our shared genealogical history as well? This big tree engages users in the process of turning data into knowledge.

Like Wu, Perlingieri emphasized the importance of combining human insight with machine learning when it comes to doing things with data. Perlingieri and her team are searching for scalable methodologies that preserve cultural heritage. “We need a redefinition of data as a concept inclusive of alternative narratives and community-driven contributions to heritage,” she said.

Though Ancestry.com and the Center for Digital Archaeology serve very different clienteles, they share a user-focused approach to data curation and interpretation. Users create knowledge by interacting with archives, generating new connections between data elements in the process. These types of connections were explored by each of the Summit’s presenters, including Anthony Scriffignano, chief data scientist at Data Summit co-sponsor Dun & Bradstreet, whose talk delved into some of the “Things We Forget to Think About.”

Because we recognize the promise of computational knowledge for every industry, we will continue to expand the Wolfram Data Summit. The Wolfram Data Summit develops understanding of data at the level of software, hardware and technical processes. But at its core, the Wolfram Data Summit is about how we create and structure data, what kinds of insights can be derived from it and how we apply computational thinking. What kind of computational thinking do professionals use in different domains? How might computational thinking be applied across those industry boundaries? As the Wolfram Data Summit turns eight, we continue to search for answers to these questions and more.

Watch more videos from the 2016 Wolfram Data Summit, including Stephen Wolfram’s keynote address, here.

*This post has been updated to include video of Anthony Scriffignano’s talk.*

In a nutshell, `AskFunction` allows you to ask the user a series of questions—as in surveys, web orders, tax forms, quizzes, etc. To demonstrate, I have built a simple web form for my friends and family to RSVP to my upcoming party.

The first time I call `Ask["name"]`, the web form prompts the user for input. When I call `Ask["name"]` the second time, the `AskFunction` remembers the user’s previous answer and uses that instead of asking again. See the example for yourself here.
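A minimal sketch of such a form, assuming `Ask` accepts FormFunction-style field specifications and using a hypothetical cloud object name:

```wolfram
(* Deploy a minimal RSVP form; "rsvp" is a hypothetical deployment name *)
CloudDeploy[
 AskFunction[
  Module[{name},
   (* The first Ask prompts the user for input *)
   name = Ask[{"name", "What is your name?"} -> "String"];
   (* The second Ask["name"] reuses the stored answer instead of asking again *)
   AskDisplay[Row[{"Thanks for the RSVP, ", Ask["name"], "!"}]]
  ]
 ],
 "rsvp"
]
```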

This example isn’t really that different from what you can already do with `FormFunction`; `Ask` is not intended to replace `FormFunction`. Where `Ask` begins to really shine over its older brother is when you want to skip or tailor questions based on previous responses.

In the event a friend or family member says they are not attending my party, I would like to give them the opportunity to reconsider their tragic mistake. This is where `AskConfirm` comes in: `AskConfirm` asks users if they are OK with their answer; if they say no, `AskConfirm` effectively rewinds the computation and gives them a second chance to answer differently. And we can make it so only those who said they aren’t attending are prompted for confirmation. Try it out.
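A sketch of that branch, with illustrative question labels and assuming that a bare `AskConfirm[]` re-presents the current answers for confirmation:

```wolfram
CloudDeploy[
 AskFunction[
  Module[{attending},
   attending = Ask[{"attending", "Will you be attending?"} -> "Boolean"];
   (* Only those who declined are asked to confirm; rejecting the
      confirmation rewinds the computation for a second chance *)
   If[! attending, AskConfirm[]];
   AskDisplay[
    If[Ask["attending"], "See you at the party!", "We'll miss you."]]
  ]
 ]
]
```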

And we can then ask more questions to those who are attending, such as whether they’ll be bringing a guest. There’s no need to ask those who aren’t attending if they are going to bring a guest—so we won’t! Try it out.

Notice that when I ask for the user’s name and the name of the guest, I now use `AskAppend` instead of `Ask`. `AskAppend["name"]` does exactly what it sounds like it does—it appends the new answer for `"name"` to the previous answer. Unlike `Ask`, `AskAppend` can ask the same question multiple times and build up a list of each answer. So if I first answer `AskAppend["name"]` with “Carlo” and then later answer with “Ada,” when I call `Ask["name"]` again, the answer will be `{"Carlo", "Ada"}`.
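A sketch combining the guest branch with `AskAppend` (the question labels are illustrative, and the field specifications again assume FormFunction-style syntax):

```wolfram
AskFunction[
 Module[{},
  (* Each AskAppend adds another entry to the answer stored under "name" *)
  AskAppend[{"name", "Your name"} -> "String"];
  (* Only those bringing a guest are asked for a second name *)
  If[Ask[{"guest", "Will you bring a guest?"} -> "Boolean"],
   AskAppend[{"name", "Your guest's name"} -> "String"]];
  (* Ask["name"] now returns the accumulated list of names *)
  AskDisplay[Ask["name"]]
 ]
]
```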

I will also ask the user what sort of food they want—and once again, those who aren’t attending get to skip this question since it doesn’t apply. As an added bonus, I can utilize the built-in Wolfram Cloud storage and save the survey results to a `Databin` or `CloudExpression`. In less than ten minutes, I have built a functional web form with branching, using straightforward and declarative code. Here is the complete code, which you can try out in the cloud right now:

A few other new functions I didn’t mention include `AskedQ` and `AskedValue`. `AskedQ` lets you check if a question has been asked already. `AskedValue` works like `Ask`, except it will *not* prompt the user if they have not been asked yet; the function will instead return `Missing` for unanswered questions.
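A sketch of how these might be used together, with hypothetical question keys; the key point is that `AskedValue`, unlike `Ask`, never prompts:

```wolfram
AskFunction[
 Module[{},
  If[Ask[{"attending", "Will you be attending?"} -> "Boolean"],
   Ask[{"food", "Food preference?"} -> {"Pizza", "Sushi", "Salad"}]];
  (* Non-attendees were never asked about food, so AskedQ["food"] is
     False for them and AskedValue["food"] returns Missing[...] *)
  AskDisplay[
   If[AskedQ["food"], AskedValue["food"], "No food preference recorded."]]
 ]
]
```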

RSVP forms are far from the only thing you can build with `Ask`. We’ve been asked by many different users for this sort of control flow to be added to web forms. And we’re proud of how flexible this generic framework is. What I really love about the Wolfram Language is that behind a design that makes it simple to build 90% of applications, there is also deep customizability. With the basic tools of `Ask`, `AskConfirm`, `AskAppend` and `AskDisplay`, any problem with an “If this, do that” structure, or any flow chart, can be written as an `AskFunction` form. Here are just a few of the many applications you can now build.

*Troubleshooting Guides*

Have you ever needed to help friends or family members troubleshoot their computers? Build a troubleshooting guide web app and give your friends, family or clients an effective guide to fix their own problems.

*Tax/Legal Forms*

A lot of factors go into finances—are you married, do you have children, how much do you make or spend? If you don’t have dependents, you should be able to skip over any question about dependents. Try out our example web form for computing your marginal tax rate.

*Quizzes*

`FormFunction` works fine for quizzes, but maybe you want to only display advanced questions once a student has demonstrated mastery of the beginner questions. Or use `AskConfirm` and `AskAppend` to loop on a particular question until the student has the correct answer.

`Ask` is still new, and the potential for what it can do is rapidly growing. Even now, we’re continuing to build on the `Ask` functionality and expand the functions in new directions. Coming down the pipeline are new projects that will bring the `Ask` syntax to different applications like chatbots and more. I’m genuinely proud of what we’ve accomplished so far and where we’re headed in the future. So if you’re interested in learning more about how to use these functions, please feel free to leave a comment!

Join us on October 27 for a series of talks, case studies and workshops that will equip you with the knowledge to make the most of Wolfram technologies.

Our experts will introduce our latest release, Wolfram Enterprise Private Cloud, which aims to shape the way organizations see and deliver computation, enabling users to derive vastly increased value from their big data for analytics, business intelligence and smart application development.

Special guest Jan Brugard of Wolfram MathCore will demonstrate how Mathematica and SystemModeler can be used together to model and analyze a wide range of systems, and will give industry examples such as:

- Optimizing bio-ethanol production
- Landing drones autonomously on ships
- Eliminating the need for biopsy when diagnosing problems with the liver

The conference will be a great opportunity to check out our latest technology, talk to our developers and get the chance to meet fellow technical specialists.

For more information and to reserve your space, click here.

See you there!
