
Mathematica Q&A: Sow, Reap, and Parallel Programming

Got questions about Mathematica? The Wolfram Blog has answers! We’ll regularly answer selected questions from users around the web. You can submit your question directly to the Q&A Team using this form.

This week’s question comes from Patrick, a student:

How can I use Sow & Reap across parallel kernels?

Before we answer this question, a review of the useful functions Sow and Reap is in order.

Sow and Reap are used together to build up a list of results during a computation. Sow[expr] puts expr aside to be collected later. Reap collects these and returns a list:

Reap[Sow[1] + Sow[2]*Sow[3]]

{7, {{1, 2, 3}}}

The first part of the list is the regular result of the computation. The second part is everything that was “sown”.
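
The sown values can come from anywhere inside the evaluated expression, for example from inside a loop. Here is a small illustrative sketch that collects the perfect squares among the first 20 integers (Do itself returns Null, which appears as the first part of the result):

Reap[Do[If[IntegerQ[Sqrt[k]], Sow[k]], {k, 20}]]

{Null, {{1, 4, 9, 16}}}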

Sow and Reap are ideally suited to situations in which you don’t know in advance how many results you will get. For example, suppose that you want to find simple initial conditions that lead to “interesting” results in Conway’s game of life, the famous two-dimensional cellular automaton:

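The Game of Life can be expressed as an outer totalistic CellularAutomaton rule. A definition along the following lines does the job (the symbol name GameOfLife is just a convenient label):

GameOfLife = {224, {2, {{2, 2, 2}, {2, 1, 2}, {2, 2, 2}}}, {1, 1}};

With this rule, CellularAutomaton[GameOfLife, init, t] evolves an initial configuration init for t steps.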

We’ll call a result “interesting” if it has at least n cells alive (live cells have value 1):

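A sketch of such a test, written with a threshold that defaults to the 200 cells used in the search below (the exact original definition is not shown here):

InterestingQ[state_, n_:200] := Total[state, 2] >= n

Total[state, 2] sums all entries of the array, which counts the live cells since live cells have value 1.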

Now we’ll check 1,000 random 6×6 initial conditions for the “interesting” property of having at least 200 live cells after 100 steps. Whenever a result passes the InterestingQ test, we create and Sow a simple visualization:

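A sketch of that search (details such as padding the random 6×6 configuration with a background of dead cells and using ArrayPlot for the visualization are assumptions, not the original code):

Reap[
 Do[
  Module[{init = RandomInteger[1, {6, 6}], result},
   (* evolve 100 steps on a background of dead cells; keep only the final state *)
   result = Last[CellularAutomaton[GameOfLife, {init, 0}, 100]];
   If[InterestingQ[result], Sow[ArrayPlot[result]]]],
  {1000}]]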

One interesting result was found and sown. If we want to conduct a larger search, we ought to take advantage of Mathematica’s automatic parallelization.

This brings us to our main question: What’s the right way to Sow results in the parallel subkernels, but Reap them in the master kernel?

In this case the answer is particularly simple: define a version of Sow that always runs on the master kernel.

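A sketch of that definition, using the name ParallelSow that appears later in the post:

ParallelSow[expr_] := Sow[expr]

SetSharedFunction[ParallelSow]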

SetSharedFunction specifies that a function should always run on the master kernel.

Now we can apply Parallelize to our search. Here’s a search of 10,000 random initial conditions:

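A sketch of the parallel version of the search, with Sow replaced by ParallelSow inside the loop (as before, the details of the loop body are assumptions):

Reap[
 Parallelize[
  Do[
   Module[{init = RandomInteger[1, {6, 6}], result},
    result = Last[CellularAutomaton[GameOfLife, {init, 0}, 100]];
    If[InterestingQ[result], ParallelSow[ArrayPlot[result]]]],
   {10000}]]]

Each sown visualization is evaluated on the master kernel via the shared ParallelSow, so the outer Reap collects everything the subkernels find.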

Note that Reap and Sow remain unaffected, and you can use them just as usual in algorithms running on parallel subkernels. Only our new function ParallelSow runs on the master kernel.

Click here to download this post as a Computable Document Format (CDF) file.

If you have a question you’d like answered in this blog, you can submit it to the Q&A Team using this form.

Comments



  1. Love this post! The Game of Life is of great interest to many. Searching through the computational universe is a bit easier using ParallelSow! Thanks a bunch!

  2. I think that, to match your description to your code,
    “at least 250 live cells after 100 steps”
    should read
    “at least 200 live cells after 100 steps”.

    Other than that, a really nice example of a parallelizable problem!

  3. Excellent! I finally understood Sow & Reap after many years. Keep on writing posts like this.

  4. The post is great! However, somehow 1+2+3 was calculated as 7 instead of 6…

  5. @rszabolcs, the sum was 1 + 2*3, not 1+2+3.
    Thanks very much for posting this reply! Very useful!

  6. Hi rszabolcs, the result was 7 because the calculation was 1 + 2 * 3 rather than 1 + 2 + 3.

  7. Nice post! I particularly like the definition of the “Game of Life” in terms of CellularAutomaton.

    This seems like an appropriate place to ask: will you consider renaming SetSharedFunction back to SharedDownValues? A motivating example: Function is usually used as an OwnValue, not a DownValue, so SetSharedFunction is caught in a quandary here.

  8. @rszabolcs It says 1 + 2 * 3 (multiplication of two and three) ;) Otherwise you would’ve found a very interesting bug!

  9. I’m sorry, I’m blind, indeed! :-)

  10. If the ParallelSow and Reap constructs are used with a ParallelDo for iterating across a set of parameters, would the ordering of the ParallelDo be enforced so that a given set of parameters was associated with a reaped list, or would this correspondence depend on how quickly each kernel finished the computation associated with a given set of parameters?
