# *Mathematica* Q&A: Sow, Reap, and Parallel Programming

April 20, 2011 — Andrew Moylan, Technical Content Specialist, Technical Communication and Strategy Group

Got questions about *Mathematica*? The Wolfram Blog has answers! We’ll regularly answer selected questions from users around the web. You can submit your question directly to the Q&A Team using this form.

This week’s question comes from Patrick, a student:

**How can I use Sow & Reap across parallel kernels?**

Before we answer this question, a review of the useful functions `Sow` and `Reap` is in order.

`Sow` and `Reap` are used together to build up a list of results during a computation. `Sow`[*expr*] puts *expr* aside to be collected later. `Reap` collects these and returns a list:
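For example, sowing two values in the middle of an arithmetic computation (a minimal sketch of the kind of example shown in the post):

```mathematica
(* Sow[expr] returns expr unchanged, but also sets it aside for Reap *)
Reap[1 + Sow[2]*Sow[3]]
(* → {7, {{2, 3}}} *)
```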

The first part of the list is the regular result of the computation. The second part is everything that was “sown”.

`Sow` and `Reap` are ideally suited to situations in which you don’t know in advance how many results you will get. For example, suppose that you want to find simple initial conditions that lead to “interesting” results in Conway’s game of life, the famous two-dimensional cellular automaton:
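The Game of Life can be expressed with the built-in `CellularAutomaton` function; a standard formulation (the post's own definition is not reproduced here, so take this as a sketch) uses the two-state outer totalistic rule 224:

```mathematica
(* Conway's Game of Life as a two-state, range-1 outer totalistic rule *)
GameOfLife = {224, {2, {{2, 2, 2}, {2, 1, 2}, {2, 2, 2}}}, {1, 1}};

(* evolve a random 6×6 seed on a background of 0s for 3 steps *)
CellularAutomaton[GameOfLife, {RandomInteger[1, {6, 6}], 0}, 3]
```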

We’ll call a result “interesting” if it has at least *n* cells alive (live cells have value 1):
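One plausible definition (the name `InterestingQ` comes from the post; this particular body is an assumption):

```mathematica
(* True if at least n cells of a 2D state are alive (value 1) *)
InterestingQ[state_, n_] := Total[state, 2] >= n
```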

Now we’ll check 1,000 random 6×6 initial conditions for the “interesting” property of having at least 200 live cells after 100 steps. Whenever a result passes `InterestingQ`, we create and `Sow` a simple visualization:
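A self-contained sketch of such a search (the rule specification and the use of `ArrayPlot` for the visualization are assumptions consistent with the surrounding text):

```mathematica
rule = {224, {2, {{2, 2, 2}, {2, 1, 2}, {2, 2, 2}}}, {1, 1}};  (* Game of Life *)
Reap[
  Do[
   Module[{init = RandomInteger[1, {6, 6}], final},
    (* evolve for 100 steps on a background of 0s; keep the final state *)
    final = Last@CellularAutomaton[rule, {init, 0}, 100];
    If[Total[final, 2] >= 200, Sow[ArrayPlot[final]]]],
   {1000}]]
```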

One interesting result was found and sown. If we want to conduct a larger search, we ought to take advantage of *Mathematica*'s automatic parallelization.

This brings us to our main question: What’s the right way to `Sow` results in the parallel subkernels, but `Reap` them in the master kernel?

In this case the answer is particularly simple: define a version of `Sow` that always runs on the master kernel.

`SetSharedFunction` specifies that a function should always run on the master kernel.
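A sketch of that definition (the name `ParallelSow` is the post's; the one-line body is assumed):

```mathematica
(* ParallelSow behaves exactly like Sow, but because it is shared,
   it always evaluates on the master kernel *)
ParallelSow[expr_] := Sow[expr]
SetSharedFunction[ParallelSow]
```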

Now we can apply `Parallelize` to our search. Here’s a search of 10,000 random initial conditions:
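Putting the pieces together, a self-contained sketch of the parallel search (the definitions are repeated here so the fragment runs on its own):

```mathematica
rule = {224, {2, {{2, 2, 2}, {2, 1, 2}, {2, 2, 2}}}, {1, 1}};  (* Game of Life *)
ParallelSow[expr_] := Sow[expr];
SetSharedFunction[ParallelSow];  (* evaluate ParallelSow on the master kernel *)

Reap[
  Parallelize@Do[
    Module[{init = RandomInteger[1, {6, 6}], final},
     final = Last@CellularAutomaton[rule, {init, 0}, 100];
     If[Total[final, 2] >= 200, ParallelSow[ArrayPlot[final]]]],
    {10000}]]
```

`Parallelize` distributes the definitions of symbols such as `rule` to the subkernels automatically, so no explicit `DistributeDefinitions` call should be needed.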

Note that `Reap` and `Sow` remain unaffected, and you can use them just as usual in algorithms running on parallel subkernels. Only our new function `ParallelSow` runs on the master kernel.

Click here to download this post as a Computable Document Format (CDF) file.

If you have a question you’d like answered in this blog, you can submit it to the Q&A Team using this form.

## 11 Comments

Love this post! The Game of Life is of great interest to many. Searching through the computational universe is a bit easier using ParallelSow! Thanks a bunch!

I think that to match your description to your code

“at least 250 live cells after 100 steps”

should read

“at least 200 live cells after 100 steps”.

Other than that a really nice example of a parallelizable problem!

Hi Simon,

Thank you for bringing this to our attention. The post has been changed.

Regards,

The Wolfram Blog Team

Excellent! I finally understood Sow & Reap after many years. Keep on doing this type of post.

The post is great! However, somehow 1+2+3 was calculated as 7 instead of 6…

@rszabolcs, the sum was 1 + 2*3, not 1+2+3.

Thanks very much for posting this reply! Very useful!

Hi rszabolcs, The result was 7 because the calculation was 1 + 2 * 3 rather than 1 + 2 + 3.

Nice post! I particularly like the definition of the “Game of Life” in terms of CellularAutomaton.

This seems like an appropriate place to ask: will you consider renaming SetSharedFunction back to SharedDownValues? A motivating example: Function is usually used as an OwnValue, not a DownValue, so SetSharedFunction is caught in a quandary here.

@rszabolcs It says 1 + 2 * 3 (multiplication of two and three) ;) Otherwise you would’ve found a very interesting bug!

I’m sorry, I’m blind, indeed! :-)

If the ParallelSow and Reap constructs are used with a ParallelDo for iterating across a set of parameters, would the ordering of the ParallelDo be enforced so that a given set of parameters were associated with a reaped list, OR would this correspondence depend on how quickly each kernel finished the computation associated with a given set of parameters?