Just as Wolfram was doing AI before it was cool, we have been doing data science since before it was mainstream. A prime example is the creation of Wolfram|Alpha—a massive project that involved engineering, modeling, analyzing, visualizing and interfacing with terabytes of data, developing a natural language interface, and deploying results in a sensible way. Wolfram|Alpha itself is a tool for doing data science, and its continued success is due in large part to the underlying strategy we used to build it: a multiparadigm approach driven by natural curiosity, exploring all kinds of data, using advanced methods from a range of areas and automating as much as possible.
Any approach to data science can only be as effective as the computational tools driving it; luckily for us, we had the Wolfram Language at our disposal. Leveraging its universal symbolic representation, high-level automation and human readability—as well as its broad range of built-in computation, knowledge and interfaces—streamlined our process and helped bring Wolfram|Alpha to fruition. In this post, I’ll discuss some key tenets of the multiparadigm approach, then demonstrate how they combine with the computational intelligence of the Wolfram Language to form an ideal workflow not only for discovering and presenting insights from your data, but also for creating scalable, reusable applications that optimize your data science processes.
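To give a flavor of what that combination looks like in practice, here is a minimal sketch (my own illustrative example, not taken from the Wolfram|Alpha codebase) using the built-in LinearModelFit function, which takes you from raw data to a symbolic model, diagnostics and a visualization in a few lines:

    (* Hypothetical sample data, invented for this illustration *)
    data = {{1, 2.1}, {2, 3.9}, {3, 6.2}, {4, 7.8}};

    (* High-level automation: fit a linear model in one call *)
    fit = LinearModelFit[data, x, x];

    (* Universal symbolic representation: the fitted model is itself
       a symbolic expression you can manipulate like any other *)
    Normal[fit]          (* 0.15 + 1.94 x *)
    fit["RSquared"]      (* built-in goodness-of-fit diagnostics *)

    (* Built-in visualization: overlay the fit on the data *)
    Show[ListPlot[data], Plot[fit[x], {x, 0, 5}]]

The same pattern—a single high-level function returning a symbolic object that plugs directly into further computation and visualization—recurs throughout the workflows discussed below.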