
Building the Automated Data Scientist: The New Classify and Predict

Automated Data Science

Imagine a baker connecting a data science application to his database and asking it, “How many croissants are we going to sell next Sunday?” The application would simply answer, “According to your recorded data and other factors such as the predicted weather, there is a 90% chance that between 62 and 67 croissants will be sold.” The baker could then plan accordingly. This is an example of an automated data scientist, a system to which you could throw arbitrary data and get insights or predictions in return.

One key component in making this a reality is the ability to learn a predictive model from the data alone, without further specification from humans. In the Wolfram Language, this is the role of the functions Classify and Predict. For example, let’s train a classifier to recognize morels from hedgehog mushrooms:

(* morel1, hedgehog1, etc. stand for the mushroom photos used in the original post *)
c = Classify[{morel1 -> "Morel", morel2 -> "Morel", morel3 -> "Morel",
   hedgehog1 -> "Hedgehog", hedgehog2 -> "Hedgehog", hedgehog3 -> "Hedgehog"}]

We can now use the resulting ClassifierFunction on new examples:

(* newImage1 and newImage2 stand for previously unseen mushroom photos *)
c[newImage1]

c[newImage2]

And we can obtain a probability for each possibility:
c[newImage1, "Probabilities"]

As another example, let’s train a PredictorFunction to predict the average monthly temperature for some US cities:

data = RandomSample[ResourceData["Sample Data: US City Temperature"]]

p = Predict[data -> "MeanTemperature"] (* the target column name is assumed here *)

Again, we can use the resulting function to make a prediction:

(* the feature names below are assumed to match the dataset's columns *)
p[<|"City" -> Entity["City", {"Chicago", "Illinois", "UnitedStates"}], "Month" -> "July"|>]

And we can obtain a distribution of predictions:

dist = p[<|"City" -> Entity["City", {"Chicago", "Illinois", "UnitedStates"}],
   "Month" -> "July"|>, "Distribution"]
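The returned object is a full probability distribution, so the usual distribution functions apply to it; for instance, using the dist from above:

Mean[dist]

Quantile[dist, {0.05, 0.95}] (* a 90% interval, as in the croissant example above *)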

As you can see, Classify and Predict do not need to be told what the variables are, what preprocessing to perform or which algorithm to use: they are automated functions.

New Classify and Predict

We introduced Classify and Predict in Version 10 of the Wolfram Language (about three years ago), and have been happy to see them used in various contexts (my favorite involves an astronaut, a plane and a Raspberry Pi). In Version 11.2, we decided to give these functions a complete makeover. The most visible update is the introduction of an information panel that gives feedback during the training:

[Animation: the Classify training progress panel]

With it, one can monitor things such as the current best method and the current accuracy, and get an idea of how long the training will take, which is very useful for deciding whether it is worth continuing. If one wants to stop the training, there are two ways to do it: with the Stop button or by directly aborting the evaluation. In both cases, the best model that Classify or Predict has come up with so far is returned (but the Stop interruption is softer: it waits until the training of the current model is over).

A similar panel is now displayed when using ClassifierInformation and PredictorInformation on a classifier or a predictor:

[Image: the information panel displayed by ClassifierInformation for a trained classifier]

We tried to show some useful information about the model, such as its accuracy (on a test set), the time it takes to evaluate new examples and its memory size. More importantly, there is a “learning curve” at the bottom that shows the value of the loss (the measure one is trying to minimize) as a function of the number of examples used for training. By pressing the left/right arrows, one can also look at other curves, such as the accuracy as a function of the number of training examples:

[Image: accuracy as a function of the number of training examples]

Such curves are useful for figuring out whether more training data is needed: if they have already plateaued, more data is unlikely to help. We hope that giving easy access to them will ease the modeling workflow (for example, it might reduce the need to use ClassifierMeasurements and PredictorMeasurements).
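For comparison, here is the explicit route through ClassifierMeasurements, shown as a minimal sketch on toy data:

train = {1.2 -> "A", 1.4 -> "A", 1.1 -> "A", 3.1 -> "B", 3.3 -> "B", 3.6 -> "B"};
test = {1.3 -> "A", 3.2 -> "B"};
cm = ClassifierMeasurements[Classify[train], test];
cm["Accuracy"]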

An important update is the addition of the TimeGoal option, which allows one to specify how long one wishes the training to take, e.g.:

c = Classify[{1, 2, 3, 4} -> {"A", "A", "B", "B"}, TimeGoal -> 5]
(* the labels are illustrative; a bare TimeGoal value is a number of seconds *)

ClassifierInformation[c, "TrainingTime"] (* property name assumed; reports how long the training took *)

TimeGoal has a different meaning than TimeConstraint: it does not specify a maximum amount of time, but rather a goal that should be reached. Setting a higher time goal allows the automation system to try additional things in order to find a better model. In my opinion, this makes TimeGoal the most important option of both Classify and Predict (followed by Method and PerformanceGoal).
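TimeGoal also accepts an explicit time Quantity; a minimal sketch on the toy data from above:

c = Classify[{1, 2, 3, 4} -> {"A", "A", "B", "B"}, TimeGoal -> Quantity[2, "Minutes"]]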

On the method side, things have changed as well. Each method now has its own documentation page ("LogisticRegression", "NearestNeighbors", etc.) that gives general information and lets experts play with the options described there. We also added two new methods: "DecisionTree" and, more noticeably, "GradientBoostedTrees", which is a favorite of data scientists. Here is a simple prediction example:

data = # -> Sin[2 #] + Cos[#] + RandomReal[] & /@ RandomReal[10, 200];

p = Predict[data, Method -> "GradientBoostedTrees"];
Show[ListPlot[List @@@ data, PlotStyle -> Gray, PlotLegends -> {"data"}],
 Plot[p[x], {x, 0, 10}, PlotStyle -> Red, PlotLegends -> {"prediction"}]]
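Method-specific hyperparameters, such as the "NeighborsNumber" and "L2Regularization" sub-options mentioned below, can be passed along with the method name. A minimal sketch on toy data:

c = Classify[{1.2 -> "A", 1.4 -> "A", 3.1 -> "B", 3.3 -> "B"},
  Method -> {"NearestNeighbors", "NeighborsNumber" -> 3}]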

Under the Hood…

OK, let’s now get to the main change in Version 11.2, which is not directly visible: we reimplemented the way Classify and Predict determine the optimal method and hyperparameters for a given dataset (in a sense, the core of the automation). For those who are interested, let me try to give a simple explanation of how this procedure works for Classify.

A classifier needs to be trained using a method (e.g. "LogisticRegression", "RandomForest", etc.), and each method needs to be given some hyperparameters (such as "L2Regularization" or "NeighborsNumber"). The automation procedure is there to figure out the best configuration (i.e. the best method + hyperparameters) to use, according to how well the classifier trained with this configuration performs on a test set, but also how fast it evaluates and how much memory it uses. It is hard to know whether a given configuration will perform well without actually training and testing it.

The idea of our procedure is to start with many configurations that we believe could perform well (let’s say 100), train these configurations on small datasets and use the information gathered during these “experiments” to predict how well each configuration would perform on the full dataset. The predictions are not perfect, but they are useful for selecting a set of promising configurations that will be trained on larger datasets in order to gather more information (you might notice some similarities with the Hyperband procedure). This operation is repeated until only a few configurations (sometimes even just one) are trained on the full dataset. Here is a visualization of the loss function for some configurations (each curve represents a different one) that underwent this operation:
[Image: loss curves for candidate configurations trained on increasing dataset sizes]

As you can see, many configurations have been trained on 10 and 40 examples, but just a few of them on 200 examples, and only one of them on 800 examples. We found in our benchmarks that the final configuration obtained is often the optimal one (among the ones present in the initial configuration set). Also, since training on smaller datasets is faster, the time needed for the entire procedure is not much greater than the time needed to train one configuration on the full dataset, which, as you can imagine, is much faster than training all configurations on the full dataset!
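To make the idea concrete, here is a toy sketch of such a search in the successive-halving style. This is not the actual implementation inside Classify; the candidate list, the halving schedule and the score helper are all illustrative:

(* toy labeled data: the sign of Sin[x] determines the class *)
trainingData = Table[x -> If[Sin[x] > 0, "A", "B"], {x, RandomReal[10, 1000]}];
testData = Table[x -> If[Sin[x] > 0, "A", "B"], {x, RandomReal[10, 100]}];

(* train a classifier on a sample with the given method and score it on held-out data *)
score[method_, sample_, test_] :=
 ClassifierMeasurements[Classify[sample, Method -> method], test]["Accuracy"]

(* at each stage, train all surviving candidates on n examples and keep the best half *)
shrinkCandidates[methods_, train_, test_, sizes_] :=
 Fold[
  Function[{survivors, n},
   Module[{sample = RandomSample[train, Min[n, Length[train]]]},
    Take[
     SortBy[survivors, -score[#, sample, test] &],
     Max[1, Ceiling[Length[survivors]/2]]]]],
  methods, sizes]

shrinkCandidates[{"LogisticRegression", "NearestNeighbors", "RandomForest", "DecisionTree"},
 trainingData, testData, {10, 40, 200, 800}]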

Besides being faster than the previous version, this automation strategy was necessary to bring some of the capabilities presented above. For example, the procedure directly produces an estimate of model performance and the learning curves. It also enables the display of a progress bar, and it quickly produces valid models that can be returned if the Stop button is pressed. Finally, it enables the TimeGoal option, by adapting the number of intermediate trainings to the amount of time available.

We hope that you will find ways to use this new version of Classify and Predict. Don’t hesitate to give us feedback. The road to a fully automated data scientist is still long, but we’re getting closer!



Comments


  1. Hi Etienne

    Great post!

    When I’m done with my current commitment I thought I would get involved in using Mathematica (aka WL) to do some citizen science, maybe something to do with classifying galaxies, for example. Can you see applications of the new Classify[] and Predict[] functions in this or other areas?

    Thanks

    Barrie

  2. Thank you for taking the time to explain what’s new with Classify and Predict in Mathematica 11.2.

  3. Hi,
    I had set TimeGoal -> Quantity[5, "Minutes"] in Classify. However, it is still running after 29m57s. I have two classes: the size of the first one is 2118760, and the size of the second one is 398. I balanced it by setting ClassPriors. It does not react to Stop either. If I reduce the number of elements in a class, then the model becomes useless. What do you recommend?

    • Hi,

      By default, Classify does not interrupt an ongoing model training even if the TimeGoal is exceeded. In general this does not pose a problem, but in your case the training of one model must be running for a very long time for some reason. You can use “Abort Evaluation” in the menu; it will interrupt the internal training but still return the best model found so far.

      For your specific situation though, I think you could try a few different approaches:

      1. Reduce the number of elements in the first class to something like 1000 or 2000: it will speed up the training and most likely will not decrease the classification performance much. Then, after the training, change the ClassPriors option so that it matches the class priors of the data that the model will be used on (so 0.9998 and 0.0002 if it follows the same distribution as the training data); see the sketch after this list.

      2. If you have many features, use DimensionReduction on the whole dataset (without the labels), then train as in 1.

      3. Try giving the class-1 examples to AnomalyDetection, and use this as your classifier (you can tune the AcceptanceThreshold to obtain better results).

      4. A variation of 3: use LearnDistribution on the class-1 examples, use Log[PDF[dist, #]] & with the resulting distribution as a feature extractor, and run Classify on your dataset with this single numeric feature.

      You can also combine the different approaches in the end.
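      A minimal sketch of approach 1 (class1Examples, class2Examples and the class labels are illustrative):

      (* keep a random subsample of the huge class, plus all examples of the rare class *)
      balanced = Join[RandomSample[class1Examples, 2000], class2Examples];

      (* train on the balanced set, but declare the real-world class frequencies *)
      c = Classify[balanced, ClassPriors -> <|"class1" -> 0.9998, "class2" -> 0.0002|>]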

      Thanks,
      Etienne
