Celebrating a Third of a Century of Mathematica, and Looking Forward
Mathematica 1.0 was launched on June 23, 1988. So (depending a little on how you do the computation) today is its one-third-century anniversary. And it’s wonderful to see how the tower of ideas and technology that we’ve worked so hard on for so long has grown in that third of a century—and how tall it’s become and how rapidly it still goes on growing.
In the past few years, I’ve come to have an ever-greater appreciation for just how unique what we’ve ended up building is, and just how fortunate our original choices of foundations and principles were. And even after a third of a century, what we have still seems like an artifact from the future—indeed ever more so with each passing year as it continues to grow and develop.
In the long view of intellectual history, this past one-third century will be seen as the time when the computational paradigm first took serious root, and when all its implications for “computational X” began to grow. And personally I feel very fortunate to have lived at the right time in history to have been able to be deeply involved with this and for what we have built to have made such a contribution to it.
Going back just one more third of a century—to 1955—takes us to the dawn of electronic computers, and the time when mass production of computers began. Still another one-third century back takes us to 1921—when ideas from mathematical logic were just beginning to coalesce into what became the concept of computation. And in some ways what we have built owes more to 1921 than to 1955. Yes, Mathematica and everything that has come from it runs on electronic computers; they are what have allowed us to actualize what we have done. But from the beginning, the core of Mathematica—and what is now the Wolfram Language—was based on foundational ideas that transcend the specifics of their implementation.
For me the key question is how to take the concept of computation and use it as a framework for representing and thinking about things. As I have come to understand with greater clarity as the years go by, it’s all about creating a computational language: a language that allows humans to crystallize their thoughts and knowledge in computational form, and then leverage the power of actual computers to work out their consequences. Over the past third of a century, we have come a long way with this, to the point where I feel we can reasonably declare that we have now achieved the goal of creating a full-scale computational language—that transcends the expectations set by its original name Mathematica and that we now increasingly call simply the Wolfram Language.
It has been a wonderful and deeply rewarding journey, one that has delivered to the world tools that have enabled countless inventions and discoveries, and helped educate generations of students. But in some ways it has been an increasingly lonely journey—one that seems to ascend further and further away from the common expectations of what can be done with computers and computation today.
Back in 1955, there began a trend that has continued to the present day: that we should treat computers as things we “program”, in effect telling them—in their terms—what to do. And this point of view is what has led to the “programming languages” of the past two-thirds of a century.
But our goal with computational language has been something different—and something in a sense both more human, and more connected to the world. For our objective is to create not just a language to specifically program computers, but a language to represent everything—including real things in the world—in computational terms. We want to leverage not just the practical details of electronic computers, but the conceptual power of the computational paradigm.
A programming language need in a sense only directly incorporate what is required to represent the raw abilities of practical computers. But to achieve a full-scale computational language, we need to cast into computational terms broad actual knowledge of the world and incorporate it into the language. And even in aspiration, this is far away from the typical expectation of what it means to program a computer.
The development of computers and their use over the past two-thirds of a century has been marked by the addition of a series of layers of capability that can be taken for granted. First there were programming languages. Then operating systems. File systems. User interfaces. Networking. Internet-based services. Security. Maybe a few more. But our goal is to add one more very important layer to this list: computational intelligence, made possible by the development of full-scale computational language.
Particularly in recent years, we’ve been working very hard towards this goal, streamlining the deployment of the Wolfram Language, and working through the channels of the computer industry. I consider it one of our great achievements that we’ve been able to build an organization and a business that has been able to continue to focus on the long-term mission of developing and delivering computational intelligence—now for more than a third of a century. I am proud not only of our consistent innovation in technology, but also of the consistency and sustainability of our business practices. But in the end, the core of what we have built is something fundamentally intellectual.
The earliest formal systems—of mathematics and logic—date back to antiquity. But it was four centuries ago that the invention of mathematical notation streamlined mathematics to the point where it could take off and make possible the mathematical sciences. A century ago we saw the beginnings of the development of the formal concept of computation, arising as it did from ideas of logic. And in a sense, our goal with computational language is now to do something like the invention of mathematical notation, but for the much broader and deeper domain of computation—and thereby to enable a dramatic streamlining of our ability to think in computational terms, and to provide a framework in which to build all those “computational X” fields.
When I first started to design Mathematica—and its predecessor, SMP—my concept was to do what I might now call metamodeling: to drill down below the formal constructs we know, and find the core of what lies underneath. The essence of what I was doing was, however, a curious mixture of the abstract and the human: I wanted to find abstract computational primitives for the world, but I wanted them to be convenient and comfortable for us humans to deal with. And decades later, I’ve increasingly realized that I was in many ways very fortunate in the particular direction I took in doing this.
My core idea was to represent everything as a symbolic expression, and to represent all operations as transformations on symbolic expressions. After a third of a century of experience with what’s now the Wolfram Language, it might seem obvious that this would be successful. But in retrospect, particularly with what I’ve learned very recently from the formalism of our Physics Project, my decisions in SMP more than 40 years ago come to seem much more fortuitous—or perhaps prescient—than I had imagined.
The nub of the issue is what it takes to “get answers” from computations. One has a symbolic expression, and one has various transformation rules for it. But how should they be applied? What if there are multiple choices? Even 40 years ago, I certainly wondered about such things. But I made the decision to take the simplest approach, and just “do the first transformation that applies”, and keep doing this until nothing changes.
Yes, there were corner cases where I knew this would fail. But the question was whether the vast majority of computations that we humans would want to do would be successfully done. And now we know the answer: yes, they can. Over the years, we’ve seen how more and more kinds of things can be successfully represented by symbolic expressions—with transformations applied in this way.
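To make the evaluation strategy described above concrete, here is a toy sketch—in Python, emphatically not the Wolfram Language’s actual implementation, and with names like `fixed_point` and `fold_plus` invented purely for illustration—of the idea of representing everything as symbolic expressions and repeatedly “doing the first transformation that applies” until nothing changes:

```python
# Toy model: symbolic expressions as nested tuples, e.g. ("Plus", 1, 2);
# atoms are plain numbers or strings. Rules are functions that return a
# rewritten expression, or None if they don't match.

def apply_rules(expr, rules):
    """Apply the first matching rule anywhere in expr; None if none apply."""
    for rule in rules:
        result = rule(expr)
        if result is not None:
            return result
    # No rule matched the whole expression: try its subexpressions.
    if isinstance(expr, tuple):
        for i, part in enumerate(expr):
            new_part = apply_rules(part, rules)
            if new_part is not None:
                return expr[:i] + (new_part,) + expr[i + 1:]
    return None  # expression is already in "final" form

def fixed_point(expr, rules, max_steps=1000):
    """Keep rewriting until nothing changes (guarding against corner
    cases whose rewriting would never terminate)."""
    for _ in range(max_steps):
        new_expr = apply_rules(expr, rules)
        if new_expr is None:
            return expr
        expr = new_expr
    return expr

def fold_plus(expr):
    """Example rule: reduce a Plus of literal integers to their sum."""
    if (isinstance(expr, tuple) and expr and expr[0] == "Plus"
            and all(isinstance(a, int) for a in expr[1:])):
        return sum(expr[1:])
    return None

# ("Plus", ("Plus", 1, 2), 3) -> ("Plus", 3, 3) -> 6
print(fixed_point(("Plus", ("Plus", 1, 2), 3), [fold_plus]))  # → 6
```

The `max_steps` guard stands in for the corner cases mentioned above: some rule sets can keep applying forever, and the simple strategy sketched here simply gives up rather than resolving them.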
In a sense, that this works in practice is an interesting—and ultimately deep—“metascientific” fact about our human view of things. But in recent times, I’ve realized that in the broad sweep of basic science there’s more to think about. Yes, what we have in Mathematica and the Wolfram Language is well optimized for our human view of things—and for applying the computational paradigm. But there’s also a whole multicomputational paradigm that’s possible, and that in fact the Wolfram Language has primed us for.
It’s a fascinating experience building our tower of ideas and technology. At each step it’s an exacting process (now often livestreamed) to reach the clarity needed to create layer after layer of robust and coherent structure. But as the tower grows, there are moments when suddenly we can see much further—and can conceptualize whole new areas that we had not even imagined before.
Technology usually has a certain temporariness, constantly being replaced by the new and the better. But the foundational character of Mathematica and the Wolfram Language gives them a certain timeless permanence—which makes even what we built more than a third of a century ago still seem completely fresh and modern today. It’s certainly satisfying to see all our effort progressively build on itself over the years. And particularly in recent times there’s been an impressive recursive acceleration: with everything we’ve built so far, it becomes faster and faster to build new things.
What does the future hold? Part of it feels to me quite inexorable. With the passage of time what now seem like artifacts from the future will steadily become familiar as artifacts of the present. That’s already happened with some of what seemed like artifacts from the future three decades ago. But even from that time there’s still much more to come. And there’s overwhelmingly more from the years since then.
Then there’s deployment. Over the past third of a century we’ve seen personal computers, GUIs, parallelism, mobile, embedded, web, cloud, and now XR, blockchain and more. And in each case there have been new ideas and opportunities for what one can do with our computational language and its core symbolic framework. And while we don’t know what kinds of deployment the future will bring, I think we can be confident that they will show us still more new ideas and opportunities.
But to me personally, the most exciting part is the conceptual breakthroughs. There are fundamental theoretical reasons to expect that there will always be more to discover and invent in the computational universe. And certainly in the past third of a century, that’s what I’ve experienced. The computational paradigm in general, and our computational language in particular, continually provide us new ways to think about things. It might start as an idea. But soon it becomes a tool. And then a framework. And then we can build on that framework to go yet further.
Some of what we’ve invented or discovered I had, in some way or another, at least imagined—often decades earlier. But much of it I did not. And instead it’s only with the unique tower of ideas and technology that we’ve built that it’s eventually been possible to get to the new level of understanding or capability that is needed—and to successfully take that next step in intellectual history.
By the standards of modern technology, a third of a century might seem like an eternity. But when it comes to the kind of foundational progress that Mathematica and the Wolfram Language are about, it is but a small span of time. But in that time I am proud of how far we’ve come and how solid what we’ve built is. And now I look forward to the future and to seeing both the inexorable and the surprising developments that it will bring. It’s been a great first third of a century for Mathematica and the Wolfram Language. But it’s just the beginning….