Wolfram|Alpha at 10

The Wolfram|Alpha Story

Today it’s 10 years since we launched Wolfram|Alpha. At some level, Wolfram|Alpha is a never-ending project. But it’s had a great first 10 years. It was a unique and surprising achievement when it first arrived, and over its first decade it’s become ever stronger and more unique. It’s found its way into more and more of the fabric of the computational world, both realizing some of the long-term aspirations of artificial intelligence, and defining new directions for what one can expect to be possible. Oh, and by now, a significant fraction of a billion people have used it. And we’ve been able to keep it private and independent, and its main website has stayed free and without external advertising.

For me personally, the vision that became Wolfram|Alpha has a very long history. I first imagined creating something like it more than 47 years ago, when I was about 12 years old. Over the years, I built some powerful tools—most importantly the core of what’s now Wolfram Language. But it was only after some discoveries I made in basic science in the 1990s that I felt emboldened to actually try building what’s now Wolfram|Alpha.

It was—and still is—a daunting project. To take all areas of systematic knowledge and make them computable. To make it so that any question that can in principle be answered from knowledge accumulated by our civilization can actually be answered, immediately and automatically.

Leibniz had talked about something like this 350 years ago; Turing 70 years ago. But while science fiction (think the Star Trek computer) had imagined it, and AI research had set it as a key goal, 50 years of actual work on question-answering had failed to deliver. And I didn’t know for sure if we were in the right decade—or even the right century—to be able to build what I wanted.

But I decided to try. And it took lots of ideas, lots of engineering, lots of diverse scholarship, and lots of input from experts in a zillion fields. But by late 2008 we’d managed to get Wolfram|Alpha to the point where it was beginning to work. Day by day we were making it stronger. But eventually there was no sense in going further until we could see how people would actually use it.

And so it was that on May 18, 2009, we officially opened Wolfram|Alpha up to the world. And within hours we knew it: Wolfram|Alpha really worked! People asked all kinds of questions, and got successful answers. And it became clear that the paradigm we’d invented of generating synthesized reports from natural language input by using built-in computational knowledge was very powerful, and was just what people needed.

Perhaps because the web interface to Wolfram|Alpha was just a simple input field, some people assumed it was like a search engine, finding content on the web. But Wolfram|Alpha isn’t searching anything; it’s computing custom answers to each particular question it’s asked, using its own built-in computational knowledge—that we’ve spent decades amassing. And indeed, quite soon, it became clear that the vast majority of questions people were asking were ones that simply didn’t have answers already written down anywhere on the web; they were questions whose answers had to be computed, using all those methods and models and algorithms—and all that curated data—that we’d so carefully put into Wolfram|Alpha.

As the years have gone by, Wolfram|Alpha has found its way into intelligent assistants like Siri, and now also Alexa. It’s become part of chatbots, tutoring systems, smart TVs, NASA websites, smart OCR apps, talking (toy) dinosaurs, smart contract oracles, and more. It’s been used by an immense range of people, for all sorts of purposes. Inventors have used it to figure out what might be possible. Leaders and policymakers have used it to make decisions. Professionals have used it to do their jobs every day. People around the world have used it to satisfy their curiosity about all sorts of peculiar things. And countless students have used it to solve problems, and learn.

And in addition to the main, public Wolfram|Alpha, there are now all sorts of custom “enterprise” Wolfram|Alphas operating inside large organizations, answering questions using not only public data and knowledge, but also the internal data and knowledge of those organizations.

It’s fun when I run into high-school and college kids who notice my name and ask “Are you related to Wolfram|Alpha?” “Well”, I say, “actually, I am”. And usually there’s a look of surprise, and a slow dawning of the concept that, yes, Wolfram|Alpha hasn’t always existed: it had to be created, and there was an actual human behind it. And then I often explain that actually I first started thinking about building it a long time ago, when I was even younger than them…

How Come It Actually Worked?

When I started building Wolfram|Alpha I certainly couldn’t prove it would work. But looking back, I realize there was a collection of key things—mostly quite unique to us and our company—that ultimately made it possible. Some were technical, some were conceptual, and some were organizational.

On the technical side, the most important was that we had what was then Mathematica, but is now the Wolfram Language. And by the time we started building Wolfram|Alpha, it was clear that the unique symbolic programming paradigm that we’d invented to be the core of the Wolfram Language was incredibly general and powerful—and could plausibly succeed at the daunting task of providing a way to represent all the computational knowledge in the world.

It also helped a lot that there was so much algorithmic knowledge already built into the system. Need to solve a differential equation to compute a trajectory? Just use the built-in NDSolve function! Need to solve a difficult recurrence relation? Just use RSolve. Need to simplify a piece of logic? Use BooleanMinimize. Need to do the combinatorial optimization of finding the smallest number of coins to give change? Use FrobeniusSolve. Need to find out how long to cook a turkey of a certain weight? Solve the heat equation with NDSolve. Need to find the implied volatility of a financial derivative? Use FinancialDerivative. And so on.
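To give a flavor of these calls, here’s a minimal sketch in Wolfram Language (the specific inputs are illustrative examples of mine, not from the original applications):

RSolve[{a[n + 1] == 2 a[n] + 1, a[1] == 1}, a[n], n]
(* {{a[n] -> -1 + 2^n}} *)

BooleanMinimize[(p && q) || (p && ! q)]
(* p *)

FrobeniusSolve[{1, 5, 10, 25}, 42]
(* every way to assemble 42 cents from US coin denominations *)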

But what about all that actual data about the world? All the information about cities and movies and food and so on? People might have thought we’d just be able to forage the web for it. But I knew very quickly this wouldn’t work: the data—if it even existed on the web—wouldn’t be systematic and structured enough for us to be able to correctly do actual computations from it, rather than just, for example, displaying it.

So this meant there wouldn’t be any choice but to actually dive in and carefully deal with each different kind of data. And though I didn’t realize it with so much clarity at the time, this is where our company had another extremely rare and absolutely crucial advantage. We’ve always been a very intellectual company (no doubt to our commercial detriment)—and among our staff we, for example, have PhDs in a wide range of subjects, from chemistry to history to neuroscience to architecture to astrophysics. But more than that, among the enthusiastic users of our products we count many of the world’s top researchers across a remarkable diversity of fields.

So when we needed to know about proteins or earthquakes or art history or whatever, it was easy for us to find an expert. At first, I thought the main issue would just be “Where is the best source of the relevant data?” Sometimes that source would be very obvious; sometimes it would be very obscure. (And, yes, it was always fun to run across people who’d excitedly say things like: “Wow, we’ve been collecting this data for decades and nobody’s ever asked for it before!”)

But I soon realized that having raw data was only the beginning; after that came the whole process of understanding it. What units are those quantities in? Does -99 mean that data point is missing? How exactly is that average defined? What is the common name for that? Are those bins mutually exclusive or combined? And so on. It wasn’t enough just to have the data; one also had to have an expert-level dialog with whoever had collected the data.

But then there was another issue: people want answers to questions, not raw data. It’s all well and good to know the orbital parameters for a television satellite, but what most people will actually want to know is where the satellite is in the sky at their location. And to work out something like that requires some method or model or algorithm. And this is where experts were again crucial.

My goal from the beginning was always to get the best research-level results for everything. I didn’t consider it good enough to use the simple formula or the rule of thumb. I wanted to get the best answers that current knowledge could give—whether it was for time to sunburn, pressure in the ocean, mortality curves, tree growth, redshifts in the early universe, or whatever. Of course, the good news was that the Wolfram Language almost always had the built-in algorithmic power to do whatever computations were needed. And it was remarkably common to find that the original research we were using had actually been done with the Wolfram Language.

As we began to develop Wolfram|Alpha we dealt with more and more domains of data, and more and more cross-connections between them. We started building streamlined frameworks for doing this. But one of the continuing features of the Wolfram|Alpha project has been that however good the frameworks are, every new area always seems to involve new and different twists—that can be successfully handled only because we’re ultimately using the Wolfram Language, with all its generality.

Over the years, we’ve developed an elaborate art of data curation. It’s a mixture of automation (these days, often using modern machine learning), management processes, and pure human tender loving care applied to data. I have a principle that there always has to be an expert involved—or you’ll never get the right answer. But it’s always complicated to allocate resources and to communicate correctly across the phases of data curation—and to inject the right level of judgement at the right points. (And, yes, in an effort to make the complexities of the world conveniently amenable to computation, there are inevitably judgement calls involved: “Should the Great Pyramid be considered a building?”, “Should Lassie be considered a notable organism or a fictional character?”, “What was the occupation of Joan of Arc?”, and so on.)

When we started building Wolfram|Alpha, there’d already been all sorts of thinking about how large-scale knowledge should best be represented computationally. And there was a sense that—much like logic was seen as somehow universally applicable—so also there should be a universal and systematically structured way to represent knowledge. People had thought about ideas based on set theory, graph theory, predicate logic, and more—and each had had some success.

Meanwhile, I was no stranger to global approaches to things—having just finished a decade of work on my book A New Kind of Science, which at some level can be seen as being about the theory of all possible theories. But partly because of the actual science I discovered (particularly the idea of computational irreducibility), and partly because of the general intuition I had developed, I had what I now realize was a crucial insight: there’s not going to be a useful general theory of how to represent knowledge; the best you can ever ultimately do is to think of everything in terms of arbitrary computation.

And the result of this was that when we started developing Wolfram|Alpha, we began by just building up each domain “from its computational roots”. Gradually, we did find and exploit all sorts of powerful commonalities. But it’s been crucial that we’ve never been stuck having to fit all knowledge into a “data ontology graph” or indeed any fixed structure. And that’s a large part of why we’ve successfully been able to make use of all the rich algorithmic knowledge about the world that, for example, the exact sciences have delivered.

The Challenge of Natural Language

Perhaps the most obviously AI-like part of my vision for Wolfram|Alpha was that you should be able to ask it questions purely in natural language. When we started building Wolfram|Alpha there was already a long tradition of text retrieval (from which search engines had emerged), as well as of natural language processing and computational linguistics. But although these all dealt with natural language, they weren’t trying to solve the same problem as Wolfram|Alpha. Because basically they were all taking existing text, and trying to extract from it things one wanted. In Wolfram|Alpha, what we needed was to be able to take questions given in natural language, and somehow really understand them, so we could compute answers to them.

In the past, exactly what it meant for a computer to “understand” something had always been a bit muddled. But what was crucial for the Wolfram|Alpha project was that we were finally in a position to give a useful, practical definition: “understanding” for us meant translating the natural language into precise Wolfram Language. So, for example, if a user entered “What was the gdp of france in 1975?” we wanted to interpret this as the Wolfram Language symbolic expression Entity["Country", "France"][Dated["GDP", 1975]].

And while it was certainly nice to have a precise representation of a question like that, the real kicker was that this representation was immediately computable: we could immediately use it to actually compute an answer.
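In today’s Wolfram Language one can reproduce each stage of this pipeline explicitly. A minimal sketch (Interpreter and the WolframAlpha function are the modern exposed forms of this technology):

Interpreter["Country"]["france"]
(* Entity["Country", "France"] *)

Entity["Country", "France"][Dated["GDP", 1975]]
(* the 1975 GDP of France, as a symbolic Quantity *)

WolframAlpha["What was the gdp of france in 1975?", "Result"]
(* the same answer, computed from the raw natural language *)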

In the past, a bane of natural language understanding had always been the ambiguity of things like words in natural language. When you say “apple”, do you mean the fruit or the company? When you say “3 o’clock”, do you mean morning or afternoon? On which day? When you say “springfield”, do you mean “Springfield, MA” or one of the 28 other possible Springfield cities?

But somehow, in Wolfram|Alpha this wasn’t such a problem. And it quickly became clear that the reason was that we had something that no previous attempt at natural language understanding had ever had: we had a huge and computable knowledgebase about the world. So “apple” wasn’t just a word for us: we had extensive data about the properties of apples as fruit and Apple as a company. And we could immediately tell that “apple vitamin C” was talking about the fruit, “apple net income” about the company, and so on. And for “Springfield” we had data about the location and population and notoriety of every Springfield. And so on.
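One can see this disambiguation at work directly from the Wolfram Language (a small sketch; the "Result" format asks for just the primary answer):

WolframAlpha["apple vitamin c", "Result"]
(* "apple" resolved as the fruit *)

WolframAlpha["apple net income", "Result"]
(* "apple" resolved as the company *)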

It’s an interesting case where things were made easier by solving a much larger problem: we could be successful at natural language understanding because we were also solving the huge problem of having broad and computable knowledge about the world. And also because we had built the whole symbolic language structure of the Wolfram Language.

There were still many issues, however. At first, I’d wondered if traditional grammar and computational linguistics would be useful. But they didn’t apply well to the often-not-very-grammatical inputs people actually gave. And we soon realized that instead, the basic science I’d done in A New Kind of Science could be helpful—because it gave a conceptual framework for thinking about the interaction of many different simple rules operating on a piece of natural language.

And so we added the strange new job title of “linguistic curator”, and set about effectively curating the semantic structure of natural language, and creating a practical way to turn natural language into precise Wolfram Language. (And, yes, what we did might shed light on how humans understand language—but we’ve been so busy building technology that we’ve never had a chance to explore this.)

How to Answer the Question

OK, so we can solve the difficult problem of taking natural language and turning it into Wolfram Language. And with great effort we’ve got all sorts of knowledge about the world, and we can compute all kinds of things from it. But given a particular input, what output should we actually generate? Yes, there may be a direct answer to a question (“42”, “yes”, whatever). And in certain circumstances (like voice output) that may be the main thing you want. But particularly when visual display is possible, we quickly discovered that people find richer outputs dramatically more valuable.

And so, in Wolfram|Alpha we use the computational knowledge we have to automatically generate a whole report about the question you asked.

We’ve worked hard on both the structure and content of the information presentation. There’d never been anything quite like it before, so everything had to be invented. At the top, there are sometimes “Assumings” (“Which Springfield did you mean?”, etc.)—though the vast majority of the time, our first choice is correct. We found it worked very well to organize the main output into a series of “pods”, often with graphical or tabular contents. Many of the pods have buttons that allow for drilldown, or alternatives.
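One can also get at this pod structure from the Wolfram Language. A minimal sketch (the exact pods returned for a given query can change over time):

WolframAlpha["france", "PodTitles"]
(* a list of pod titles, e.g. "Input interpretation", "Location", ... *)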

Everything is generated programmatically. And which pods are there, with what content, and in what sequence, is the result of lots of algorithms and heuristics—including many that I personally devised. (Along the way, we basically had to invent a whole area of “computational aesthetics”: automatically determining what humans will find aesthetic and easy to interpret.)

In most large software projects, one’s building things to precise specifications. But one of the complexities of Wolfram|Alpha is that so much of what it does is heuristic. There’s no “right answer” to exactly what to plot in a particular pod, over what range. It’s a judgement call. And the overall quality of Wolfram|Alpha directly depends on doing a good job at making a vast number of such judgement calls.

But who should make these judgement calls? It’s not something pure programmers are used to doing. It takes real computational thinking skills, and it also usually takes serious knowledge of each content area. Sometimes similar judgement calls get repeated, and one can just say “do it like that other case”. But given how broad Wolfram|Alpha is, it’s perhaps not surprising that there are an incredible number of different things that come up.

And as we approached the launch of Wolfram|Alpha I found myself making literally hundreds of judgement calls every day. “How many different outputs should we generate here?” “Should we add a footnote here?” “What kind of graphic should we produce in that case?”

In my long-running work on designing Wolfram Language, the goal is to make everything precise and perfect. But for Wolfram|Alpha, the goal is instead just to have it behave as people want—regardless of whether that’s logically perfect. And at first, I worried that with all the somewhat arbitrary judgement calls we were making to achieve that, we’d end up with a system that felt very incoherent and unpredictable. But gradually I came to understand a sort of logic of heuristics, and we developed a good rhythm for inventing heuristics that fit together. And in the end—with a giant network of heuristic algorithms—I think we’ve been very successful at creating a system that broadly just automatically does what people want and expect.

Getting the Project Done

Looking back now, more than a decade after the original development of Wolfram|Alpha, it begins to seem even more surprising—and fortuitous—that the project ended up being possible at all. For it is clear now that it critically relied on a whole collection of technical, conceptual and organizational capabilities that we (and I) happened to have developed by just that time. And had even one of them been missing, it would probably have made the whole project impossible.

But even given the necessary capabilities, there was the matter of actually doing the project. And it certainly took a lot of leadership and tenacity from me—as well as all sorts of specific problem solving—to pursue a project that most people (including many of those working on it) thought, at least at first, was impossible.

How did the project actually get started? Well, basically I just decided one day to do it. And, fortunately, my situation was such that I didn’t really have to ask anyone else about it—and as a launchpad I already had a successful, private company without outside investors that had been running well for more than a decade.

From a standard commercial point of view, most people would have seen the Wolfram|Alpha project as a crazy thing to pursue. It wasn’t even clear it was possible, and it was certainly going to be very difficult and very long term. But I had worked hard to put myself in a position where I could do projects just because I thought they were intellectually valuable and important—and this was one I had wanted to do for decades.

One awkward feature of Wolfram|Alpha as a project is that it didn’t work, until it did. When I tried to give early demos, too little worked, and it was hard to see the point of the whole thing. And this led to lots of skepticism, even by my own management team. So I decided it was best to do the project quietly, without saying much about it. And though it wasn’t my intention, things ramped up to the point where a couple hundred people were working completely under the radar (in our very geographically distributed organization) on the project.

But finally, Wolfram|Alpha really started to work. I gave a demo to my formerly skeptical management team, and by the end of an hour there was uniform enthusiasm, and lots of ideas and suggestions.

And so it was that in the spring of 2009, we prepared to launch Wolfram|Alpha.

The Launch

On March 4, 2009, the wolframalpha.com domain lit up with a simple placeholder page.

On March 5, I posted a short (and, in the light of the past decade, satisfyingly prophetic) blog post announcing what was coming.

We were adding features and fixing bugs at a furious pace. And rack by rack we were building infrastructure to actually support the system (yes, below all those layers of computational intelligence there are ultimately computers with power cables and network connectors and everything else).

At the beginning, we had about 10,000 cores set up to run Wolfram|Alpha (back then, virtualization wasn’t an option for the kind of performance we wanted). But we had no real idea if this would be enough—or what strange things missed by our simulations might happen when real people started using the system.

We could just have planned to put up a message on the site if something went wrong. But I thought it would be more interesting—and helpful—to actually show people what was going on behind the scenes. And so we decided to do something very unusual—and livestream to the internet the process of launching Wolfram|Alpha.

We planned our initial go-live to occur on the evening of Friday, May 15, 2009 (figuring that traffic would be lower on a Friday evening). And we built our version of a “Mission Control” to coordinate everything.

There were plenty of last-minute issues, many of them captured on the livestream. But in classic Mission Control style, each of our teams finally confirmed that we were “go for launch”—and at 9:33:50 pm CT, I pressed the big “Activate” button, and soon all network connections were open, and Wolfram|Alpha was live to the world.

Queries immediately started flowing in from around the world—and within a couple of hours it was clear that the concept of Wolfram|Alpha was a success—and that people found it very useful. It wasn’t long before bugs and suggestions started coming in too. And for a decade we’ve been being told we should give answers about the strangest things (“How many teeth does a snail have?” “How many spiders does the average American eat?” “Which superheroes can hold Thor’s hammer?” “What is the volume of a dog’s eyeball?”).

After our initial go-live on Friday evening, we spent the weekend watching how Wolfram|Alpha was performing (and fixing some hair-raising issues, for example about the routing of traffic to our different colos). And then, on Monday May 18, 2009, we declared Wolfram|Alpha officially launched.

The Growth of Wolfram|Alpha

So what’s happened over the past decade? Every second, there’s been new data flowing into Wolfram|Alpha. Weather. Stock prices. Aircraft positions. Earthquakes. Lots and lots more. Some things update only every month or every year (think: government statistics). Other things update when something happens (think: deaths, elections, etc.). Every week, there are administrative divisions that change in some country around the world. And, yes, occasionally there’s even a new official country (actually, only South Sudan in the past decade).

Wolfram|Alpha has gotten both broader and deeper in the past decade. There are new knowledge domains: cat breeds, shipwrecks, cars, battles, radio stations, mines, congressional districts, anatomical structures, function spaces, glaciers, board games, mythological entities, yoga poses and many, many more. Of course, the most obvious domains, like countries, cities, movies, chemicals, words, foods, people, materials, airlines and mountains, were already present when Wolfram|Alpha first launched. But over the past decade, we’ve dramatically extended the coverage of these.

What a decade ago was a small or fragmentary area of data, we’ve now systematically filled out—often with great effort. 140,000+ new kinds of food. 350,000 new notable people. 170+ new properties about 58,000 public companies. 100+ new properties about species (tail lengths, eye counts, etc.). 1.6 billion new data points from the US Census. Sometimes we’ve found existing data providers to work with, but quite often we’ve had to painstakingly curate the data ourselves.

It’s amazing how much in the world can be made computable if one puts in the effort. Like military conflicts, for example, which required both lots of historical work, and lots of judgement. And with each domain we add, we’ve put more and more effort into ensuring that it connects with other domains (What was the geolocation of the battle? What historical countries were involved? Etc.).

From even before Wolfram|Alpha launched, we had a wish list of domains to add. Some were comparatively easy. Others—like military conflicts or anatomical structures—took many years. Often, we at first thought a domain would be easy, only to discover all sorts of complicated issues (I had no idea how many different categories of model, make, trim, etc. are important for cars, for example).

In earlier years, we did experiments with volunteer and crowd-sourced data collection and curation. And in some specific areas this worked well, like local information from different countries (how do shoe sizes work in country X?), and properties of fictional characters (who were Batman’s parents?). But as we’ve built out more sophisticated tools, with more automation—as well as tuning our processes for making judgement calls—it’s become much more difficult for outside talent to be effective.

For years, we’ve been the world’s most prolific reporter of bugs in data sources. And with so much computable data about so many things, as well as so many models of how things work, we’re now in an absolutely unique position to validate and cross-check data—and to use the latest machine learning to discover patterns and detect anomalies.
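As a small illustration of the machine-learning side of this, the Wolfram Language’s FindAnomalies function flags data points that don’t fit the distribution of the rest (toy data of mine, purely for illustration):

FindAnomalies[{1.2, 1.1, 1.3, 1.2, 9.8, 1.1, 1.25}]
(* typically {9.8}, the point that doesn't fit *)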

Of course, data is just one part of the Wolfram|Alpha story. Because Wolfram|Alpha is also full of algorithms—both precise and heuristic—for computing all kinds of things. And over the past decade, we’ve added all sorts of new algorithms, based on recent advances in science. We’ve also been able to steadily polish what we have, covering all those awkward corner cases (“Are angle units really dimensionless or not?”, “What is the country code of a satphone?”, and so on).

One of the big unknowns when we first launched Wolfram|Alpha was how people would interact with it, and what forms of linguistic input they would give. Many billions of queries later, we know a lot about that. We know a thousand ways to ask how much wood a woodchuck can chuck, etc. We know all the bizarre variants people use to specify even simple arithmetic with units. Every day we collect the “fallthroughs”—inputs we didn’t understand. And for a decade now we’ve been steadily extending our knowledgebase and our natural language understanding system to address them.

Ever since we first launched what’s now the Wolfram Language 30+ years ago, we’ve supported things that would now be called machine learning. But over the past decade, we’ve also become leaders in modern neural nets and deep learning. And in some specific situations, we’ve now been able to make good use of this technology in Wolfram|Alpha.

But there’s been no magic bullet, and I don’t expect one. If one wants to get data that’s systematically computable, one can’t forage it from the web, even with the finest modern machine learning. One can use machine learning to make suggestions in the data curation pipeline, but in the end, if you want to get the right answer, you need a human expert who can exercise judgement based on the accumulated knowledge of a field. (And, yes, the same is true of good training sets for many machine learning tasks.)

In the natural language understanding we need to do for Wolfram|Alpha, machine learning can sometimes help, especially in speeding things up. But if one wants to be certain about the symbolic interpretation of natural language, then—a bit like for doing arithmetic—to get good reliability and efficiency there’s basically no choice but to use the systematic algorithmic approach that we’ve been developing for many years.

Something else that’s advanced a lot since Wolfram|Alpha was launched is our ability to handle complex questions that combine many kinds of knowledge and computation. To do this has required several things. It’s needed more systematically computable data, with consistent structure across domains. It’s needed an underlying data infrastructure that can handle more complex queries. And it’s needed the ability to handle more sophisticated linguistics. None of these have been easy—but they’ve all steadily advanced.

By this point, Wolfram|Alpha is one of the more complex pieces of software and data engineering that exists in the world. It helps that it’s basically all written in Wolfram Language. But over time, different parts have outgrown the frameworks we originally built for them. And an important thing we’ve done over the past decade is to take what we’ve learned from all our experience, and use it to systematically build a sequence of more efficient and more general frameworks. (And, yes, it’s never easy refactoring a large software system, but the high-level symbolic character of the Wolfram Language helps a lot.)

There’s always new development going on in the Wolfram|Alpha codebase—and in fact we normally redeploy a new version every two weeks. Wolfram|Alpha is a very complex system to test. Partly that’s because what it does is so diverse. Partly that’s because the world it’s trying to represent is a complex place. And partly it’s because human language usage is so profoundly non-modular. (“3 chains” is probably—at least for now—a length measurement, “2 chains” is probably a misspelling of a rapper, and so on.)

The Long Tail of Knowledge

What should Wolfram|Alpha know about? My goal has always been to have it eventually know about everything. But obviously one’s got to start somewhere. And when we were first building Wolfram|Alpha we started with what we thought were the “most obvious” areas. Of course, once Wolfram|Alpha was launched, the huge stream of actual questions that people ask has defined a giant to-do list, which we’ve steadily been working through, now for a decade.

When Wolfram|Alpha gets used in a new environment, new kinds of questions come up. Sometimes they don’t make sense (like “Where did I put my keys?” asked of Wolfram|Alpha on a phone). But often they do. Like asking Wolfram|Alpha on a device in a kitchen, “Can dogs eat avocado?” (And, yes, we try to give the best answer current science can provide.)

But I have to admit that, particularly before we launched Wolfram|Alpha, I was personally one of our main sources of “we should know about this” input. I collected reference books, seeing what kinds of things they covered. Wherever I went, I looked for informational posters to see what was on them. And whenever I wondered about pretty much anything, I’d try to see how we could compute about it.

“How long will it take me to read this document?” “What country does that license plate come from?” “What height percentile are my kids at?” “How big is a typical 50-year-old oak tree?” “How long can I stay in the sun today?” “What planes are overhead now?” And on and on. Thousands upon thousands of different kinds of questions.

Often we’d be contacting world experts on different, obscure topics—always trying to get definitive computational knowledge about everything. Sometimes it’d seem as if we’d gone quite overboard, working out details nobody would ever possibly care about. But then we’d see people using those details, and sometimes we’d hear “Oh, yes, I use it every day; I don’t know anyplace else to get this right”. (I’ve sometimes thought that if Wolfram|Alpha had been out before 2008, and people could have seen our simulations, they wouldn’t have been caught with so many adjustable-rate mortgages.)

And, yes, it’s a little disappointing when one realizes that some fascinating piece of computational knowledge that took considerable effort to get right in Wolfram|Alpha will—with current usage patterns—probably only be used a few times in a century. But I view the Wolfram|Alpha project in no small part as a long-term effort to encapsulate the knowledge of our civilization, regardless of whether any of it happens to be popular right now.

So even if few people make queries about caves or cemeteries or ocean zones right now, or want to know about different types of paper, or custom screw threads, or acoustic absorption in different materials, I’m glad we’ve got all these things in Wolfram|Alpha. Because now it’s computational knowledge that can be used by anyone, anytime in the future.

The Business of Wolfram|Alpha

We’ve put—and continue to put—an immense amount of effort into developing and running Wolfram|Alpha. So how do we manage to support doing that? What’s the business model?

The main Wolfram|Alpha website is simply free for everyone. Why? Because we want it to be that way. We want to democratize computational knowledge, and let anyone anywhere use what we’ve built.

Of course, we hope that people who use the Wolfram|Alpha website will want to buy other things we make. But on the website itself there’s simply no “catch”: we’re not monetizing anything. We’re not running external ads; we’re not selling user data; we’re just keeping everything completely private, and always have.

But obviously there are ways in which we are monetizing Wolfram|Alpha—otherwise we wouldn’t be able to do everything we’re doing. At the simplest level, there are subscription-based Pro versions on the website that have extra features of particular interest to students and professionals. There’s a Wolfram|Alpha app that has extra features optimized for mobile devices. There are also about 50 specialized apps (most for both mobile and web) that support more structured access to Wolfram|Alpha, convenient for students taking courses, hobbyists with particular interests, and professionals with standard workflows they repeatedly follow.

Then there are Wolfram|Alpha APIs—which are widely licensed by companies large and small (there’s a free tier for hobbyists and developers). There are multiple different APIs. Some are optimized for spoken results, some for back-and-forth conversation, some for visual display, and so on. Sometimes the API is used for some very specific purpose (calculus, particular socioeconomic data, tide computations, whatever). But more often it’s just set up to take any natural language query that arrives. (These days, specialized APIs are actually usually better built directly with Wolfram Language, as I’ll discuss a bit later.) Most of the time, the Wolfram|Alpha API runs on our servers, but some of our largest customers have private versions running inside their infrastructure.
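As a concrete sketch of calling one of these APIs from the Wolfram Language (this uses the Short Answers endpoint; YOUR_APPID is a placeholder for a real API key):

URLExecute["https://api.wolframalpha.com/v1/result",
 {"appid" -> "YOUR_APPID", (* placeholder key from the developer portal *)
  "i" -> "distance from earth to moon"}]
(* a short plain-text answer *)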

When people access Wolfram|Alpha from different parts of the world, we automatically use local conventions for things like units, currency and so on. But when we first built Wolfram|Alpha we fundamentally did it for English language only. I always believed, though, that the methods for natural language understanding that we invented would work for other languages too, despite all their differences in structure. And it turns out that they do.

Each language is a lot of work, though. Even the best automated translation helps only a little; to get reliable results one has to actually build up a new algorithmic structure for each language. But that’s only the beginning. There’s also the issue of automatic natural language generation for output. And then there’s localized data relevant for the countries that use a particular language.

But we’re gradually working on building versions of Wolfram|Alpha for other languages. Nearly five years ago we actually built a full Wolfram|Alpha for Chinese—but, sadly, regulatory issues in China have so far prevented us from deploying it there. Recently we released a version for Japanese (right now set up to handle mainly student-oriented queries). And we’ve got versions for five other languages in various stages of completion (though we’ll typically need local partners to deploy them properly).

Beyond Wolfram|Alpha on the public web, there are also private versions of Wolfram|Alpha. In the simplest case, a private Wolfram|Alpha is just a copy of the public Wolfram|Alpha, but running inside a particular organization’s infrastructure. Data updates flow into the private Wolfram|Alpha from the outside, but no queries for the private Wolfram|Alpha ever need to leave the organization.

Ordinary Wolfram|Alpha deals with public computational knowledge. But the technology of Wolfram|Alpha can also be applied to private data in an organization. And in recent years an important part of the business story of Wolfram|Alpha is what we call Enterprise Wolfram|Alpha: custom versions of Wolfram|Alpha that answer questions using both public computational knowledge, and private knowledge inside an organization.

For years I’ve run into CEOs who look at Wolfram|Alpha and say, “I wish I could do that kind of thing with my corporate data; it’d be so much easier for my company to make decisions…” Well, that’s what Enterprise Wolfram|Alpha is for. And over the past several years we’ve been installing Enterprise Wolfram|Alpha in some of the world’s largest companies, in industries from healthcare to financial services to retail.

For a few years now, there’s been a lot of talk (and advertising) about the potential for “applying AI in the enterprise”. But I think it’s fair to say that with Enterprise Wolfram|Alpha we’ve got a serious, enterprise use of AI up and running right now—delivering very successful results.

The typical pattern is that you ask a question in natural language, and Enterprise Wolfram|Alpha then generates a report about the answer, using a mixture of public and private knowledge. “What were our sales of foo-pluses in Europe between Christmas and New Year?” Enterprise Wolfram|Alpha has public knowledge about what dates we’re talking about, and what Europe is. But then it’s got to figure out the internal linguistics of what foo-pluses are, and then go query an internal sales database about how many were sold. Finally, it’s got to generate a report that gives the answer (perhaps both the number of units and dollar amount), as well as, probably, a breakdown by country (perhaps normalized by GDP), comparisons to previous years, maybe a time series of sales by day, and so on.

Needless to say, there’s plenty of subtlety in getting a useful result. Like what the definition of Europe is. Or the fact that Christmas (and New Year’s) can be on different dates in different cultures (and, of course, Wolfram|Alpha has all the necessary data and algorithms). Oh, and then one has to start worrying about currency conversion rates (which of course Wolfram|Alpha has)—as well as about conventions about conversion dates that some particular company may use.

Like any sophisticated piece of enterprise software, Enterprise Wolfram|Alpha has to be configured for each particular customer, and we have a business unit called Wolfram Solutions that does that. The goal is always to map the knowledge in an organization to a clear symbolic Wolfram Language form, so it becomes computable in the Wolfram|Alpha system. Realistically, for a large organization, it’s a lot of work. But the good news is that it’s possible—because Wolfram Solutions gets to use the whole curation and algorithm pipeline that we’ve developed for Wolfram|Alpha.

Of course, we can use all the algorithmic capabilities of the Wolfram Language too. So if we have to handle textual data we’re ready with the latest NLP tools, or if we want to be able to make predictions we’re ready with the latest statistics and machine learning, and so on.

Businesses started routinely putting their data onto computers more than half a century ago. But now across pretty much every industry, more acutely than ever, the challenge is to actually use that data in meaningful ways. Eventually everyone will take for granted that they can just ask about their data, like on Star Trek. But the point is that with Enterprise Wolfram|Alpha we have the technology to finally make this possible.

It’s a very successful application of Wolfram|Alpha technology, and the business potential for it is amazing. But for us the main limiting factor is that as a business it’s so different from the rest of what we do. Our company is very much focused on R&D—but Enterprise Wolfram|Alpha requires a large-scale customer-facing organization, like a typical enterprise software company. (And, yes, we’re exploring working with partners for this, but setting up such things has proved to be a slow process!)

By the way, people sometimes seem to think that the big opportunity for AI in the enterprise is in dealing with unstructured corporate data (such as free-form text), and finding “needles in haystacks” there. But what we’ve consistently seen is that in typical enterprises most of their data is actually stored in very structured databases. And the challenge, instead, is to answer unstructured queries.

In the past, it’s been basically impossible to do this in anything other than very simple ways. But now we can see why: because you basically need the whole Wolfram|Alpha technology stack to be able to do it. You need natural language understanding, you need computational knowledge, you need automated report generation, and so on. But that’s what Enterprise Wolfram|Alpha has. And so it’s finally able to solve this problem.

But what does it mean? It’s a little bit like when we first introduced Mathematica 30+ years ago. Before then, a typical scientist wouldn’t expect to use a computer themselves for a computation: they’d delegate it to an expert. But one of the great achievements of Mathematica is that it made things easy enough that scientists could actually compute for themselves. And so, similarly, typical executives in companies don’t directly compute answers themselves; instead, they ask their IT department to do it—then hope the results they get back a week later make sense. But the point is that with Enterprise Wolfram|Alpha, executives can actually get questions answered themselves, immediately. And the consequences of that for making decisions are pretty spectacular.

Wolfram|Alpha Meets Wolfram Language

The Wolfram Language is what made Wolfram|Alpha possible. But over the past decade Wolfram|Alpha has also given back big time to Wolfram Language, delivering both knowledgebase and natural language understanding.

It’s interesting to compare Wolfram|Alpha and Wolfram Language. Wolfram|Alpha is for quick computations, specified in a completely unstructured way using natural language, and generating as output reports intended for human consumption. Wolfram Language, on the other hand, is a precise symbolic language intended for building up arbitrarily complex computations—in a way that can be systematically understood by computers and humans.

One of the central features of the Wolfram Language is that it can deal not only with abstract computational constructs, but also with things in the real world, like cities and chemicals. But how should one specify these real-world things? Documentation listing the appropriate way to specify every city wouldn’t be practical or useful. But what Wolfram|Alpha provided was a way to specify real-world things, using natural language.

Inside Wolfram|Alpha, natural language input is translated to Wolfram Language. And that’s what’s now exposed in the Wolfram Language, and in Wolfram Notebooks. Type Ctrl+= and a piece of natural language (like “LA”). The output—courtesy of Wolfram|Alpha natural language understanding technology—is a symbolic entity representing Los Angeles. And that symbolic entity is then a precise object that the Wolfram Language can use in computations.
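For example (a minimal sketch):

Entity["City", {"LosAngeles", "California", "UnitedStates"}]
(* the symbolic entity that Ctrl+= produces for "LA" *)

GeoDistance[Entity["City", {"LosAngeles", "California", "UnitedStates"}],
 Entity["City", {"NewYork", "NewYork", "UnitedStates"}]]
(* a Quantity giving the distance between the two cities *)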

I didn’t particularly anticipate it, but this interplay between the do-it-however-you-want approach of Wolfram|Alpha and the precise symbolic approach of the Wolfram Language is exceptionally powerful. It gets the best of both worlds—and it’s an important element in allowing the Wolfram Language to assume its unique position as a full-scale computational language.

What about the knowledgebase of Wolfram|Alpha, and all the data it contains? Over the past decade we’ve spent immense effort fully integrating more and more of this into the Wolfram Language. It’s always difficult to get data to the point where it’s computable enough to use in Wolfram|Alpha—but it’s even more difficult to make it fully and systematically computable in the way that’s needed for the Wolfram Language.

Imagine you’re dealing with data about oceans. To make it useful for Wolfram|Alpha you have to get it to the point where if someone asks about a specific named ocean, you can systematically retrieve or compute properties of that ocean. But to make it useful for Wolfram Language, you have to get it to the point where someone can do computations about all oceans, with none missing.
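In Wolfram Language terms, that’s the difference between looking up one entity and mapping over the complete class. A sketch ("Area" here is a representative property):

oceans = EntityList["Ocean"];
EntityValue[oceans, "Area"]
(* areas for every ocean, with none missing *)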

A while ago I invented a 10-step hierarchy of data curation. For data to work in Wolfram|Alpha, you have to get it to level 9 in the hierarchy. But to get it to work in Wolfram Language, you have to get it all the way to level 10. And if it takes a few months to get some data to level 9, it can easily take another year to get it to level 10.

So it’s been a big achievement that over the past decade we’ve managed to get the vast majority of the Wolfram|Alpha knowledgebase up to the level where it can be directly used in the Wolfram Language. So all that data is now not only good enough for human consumption, but also good enough that one can systematically build up computations using it.

All the integration with the Wolfram Language means it’s in some sense now possible to “implement Wolfram|Alpha” in a single line of Wolfram Language code. But it also means that it’s easy to make Wolfram Language instant APIs that do more specific Wolfram|Alpha-like things.
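The one-liner in question is essentially a cloud-deployed form that passes each query through the WolframAlpha function. A sketch, with an illustrative field name:

CloudDeploy[FormPage[{"query" -> "String"}, WolframAlpha[#query] &]]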

There’s an increasing amount of interconnection between Wolfram|Alpha and Wolfram Language. For example, on the Wolfram|Alpha website most output pods have an “Open Code” button, which opens a Wolfram Notebook in the Wolfram Cloud, with Wolfram Language input that corresponds to what was computed in that pod.

In other words, you can use results from Wolfram|Alpha to “seed” a Wolfram Notebook, in which you can then edit or add inputs to build up a complete, multi-step Wolfram Language computation. (By the way, you can always generate full Wolfram|Alpha output inside a Wolfram Notebook too.)

Where to Now? The Future of Wolfram|Alpha

When Wolfram|Alpha first launched nobody had seen anything like it. A decade later, people have learned to take some aspects of it for granted, and have gotten used to having it available in things like intelligent assistants. But what will the future of Wolfram|Alpha now be?

Over the past decade we’ve progressively strengthened essentially everything about Wolfram|Alpha—to the point where it’s now excellently positioned for steady long-term growth in future decades. But with Wolfram|Alpha as it exists today, we’re now also in a position to start attacking all sorts of major new directions. And—important as what Wolfram|Alpha has achieved in its first decade has been—I suspect that in time it will be dwarfed by what comes next.

A decade ago, nobody had heard of “fake news”. Today, it’s ubiquitous. But I’m proud that Wolfram|Alpha stands as a beacon of accurate knowledge. And it’s not just knowledge that humans can use; it’s knowledge that’s computable, and suitable for computers too.

More and more is being done these days with computational contracts—both on blockchains and elsewhere. And one of the central things such contracts require is a way to know what’s actually happened in the world—or, in other words, a systematic source of computational facts.

But that’s exactly what Wolfram|Alpha uniquely provides. And already the Wolfram|Alpha API has become the de facto standard for computational facts. But one’s going to see a lot more of Wolfram|Alpha here in the future.

It’s going to put increasing pressure on the reliability of the computational knowledge in Wolfram|Alpha. Because it won’t be long before there will routinely be whole chains of computational contracts—that do important things in the world—and that trigger as soon as Wolfram|Alpha has delivered some particular fact on which they depend.

We’ve developed all sorts of procedures to validate facts. Some are automated—and depend on “theorems” that must be true about data, or cross-correlations or statistical regularities that should exist. Others ultimately rely on human judgement. (A macabre example is our obituary feed: we automatically detect news reports about deaths of people in our knowledgebase. These are then passed to our 24/7 site monitors, who confirm, or escalate the judgement call if needed. Somehow I’m on the distribution list for confirmation requests—and over the past decade there’ve been far too many times when this is how I’ve learned that someone I know has died.)

We take our responsibility as the world’s source of computational facts very seriously, and we’re planning more and more ways to add checks and balances—needless to say, defining what we’re doing using computational contracts.

When we first started developing Wolfram|Alpha, nobody was talking about computational contracts (though, to be fair, I had already thought about them as a potential application of my computational ideas). But now it turns out that Wolfram|Alpha is central to what can be done with them. And as a core component in the long history of the development of systematic knowledge, I think it’s inevitable that over time there will be all sorts of important uses of Wolfram|Alpha that we can’t yet foresee.

In the early days of artificial intelligence, much of what people imagined AI would be like is basically what Wolfram|Alpha has now delivered. So what can now be done with this?

We can certainly put “general knowledge AI” everywhere. Not just in phones and cars and televisions and smart speakers, but also in augmented reality and head- and ear-mounted devices and many other places too.

One of the Wolfram|Alpha APIs we provide is a “conversational” one that can go back and forth clarifying and extending questions. But what about a full Wolfram|Alpha Turing test–like bot? Even after all these years, general-purpose bots have tended to be disappointing. And if one just connects Wolfram|Alpha to them, there tends to be quite a mismatch between general bot responses and “smart facts” from Wolfram|Alpha. (And, yes, in a Turing test competition, the presence of Wolfram|Alpha is a dead giveaway—because it knows much more than any human would.) But with progress in my symbolic discourse language, and probably some modern machine learning, I suspect it’ll be possible to make a more successful general-purpose bot that’s more integrated with Wolfram|Alpha.

But what I think is critical in many future applications of Wolfram|Alpha is to have additional sources of data and input. If one’s making a personal intelligent assistant, for example, then one wants to give it access to as much personal history data (messages, sensor data, video, etc.) as possible. (We already did early experiments on this back in 2011 with Facebook data.)

Then one can use Wolfram|Alpha to ask questions not only about the world in general, but also about one’s own interaction with it, and one’s own history. One can ask those questions explicitly with natural language—or one can imagine, for example, preemptively delivering answers based on video or some other aspect of one’s current environment.

Beyond personal uses, there are also organizational and enterprise ones. And indeed we already have Enterprise Wolfram|Alpha—making use of data inside organizations. So far, we’ve been building Enterprise Wolfram|Alpha systems mainly for some of the world’s largest companies—and every system has been unique and extensively customized. But in time—especially as we deal with smaller organizations that have more commonality within a particular industry—I expect that we’ll be able to make Enterprise Wolfram|Alpha systems that are much more turnkey, effectively by curating the possible structures of businesses and their IT systems.

And, to be clear, the potential here is huge. Because basically every organization in the world is today collecting data. And Enterprise Wolfram|Alpha will provide a realistic way for anyone in an organization to ask questions about their data, and make decisions based on it.

There are so many sources of data for Wolfram|Alpha that one can imagine. It could be photographs from drones or satellites. It could be video feeds. It could be sensor data from industrial equipment or robots. It could be telemetry from inside a game or a virtual world (like from our new UnityLink). It could be the results of a simulation of some system (say in Wolfram SystemModeler). But in all cases, one can expect to use the technology of Wolfram|Alpha to provide answers to free-form questions.

One can think of Wolfram|Alpha as enabling a kind of AI-powered human interface. And one can imagine using it not only to ask questions about existing data, but also as a way to control things, and to get actions taken. We’ve done experiments with Wolfram|Alpha-based interfaces to complex software systems. But one could as well do this with consumer devices, industrial systems, or basically anything that can be controlled through a connection to a computer.

Not everything is best done with pure Wolfram|Alpha—or with something like natural language. Many things are better done with the full computational language that we have in the Wolfram Language. But when we’re using this language, we’re of course still using the Wolfram|Alpha technology stack.

Wolfram|Alpha is already well on its way to being a ubiquitous presence in the computational infrastructure of the world. And between its direct use, and its use in Wolfram Language, I think we can expect that in the future we’ll all end up routinely encountering Wolfram|Alphas.

For many decades our company—and I—have been single-mindedly pursuing the goal of realizing the potential of computation and the computational paradigm. And in doing this, I think we’ve built a very unique organization, with very unique capabilities.

And looking back a decade after the launch of Wolfram|Alpha, I think it’s no surprise that Wolfram|Alpha has such a unique place in the world. It is, in a sense, the kind of thing that our company is uniquely built to create and develop.

I’ve wanted Wolfram|Alpha for nearly 50 years. And it’s tremendously satisfying to have been able to create what I think will be a defining intellectual edifice in the long history of systematic knowledge. It’s been a good first decade for Wolfram|Alpha. And I begin its second decade with great enthusiasm for the future and for everything that can be done with Wolfram|Alpha.

Happy 10th birthday, Wolfram|Alpha.
