Droste Effect with Mathematica
The “Droste effect” occurs when an image recursively contains itself. The name comes from Droste brand cocoa powder, which was sold in 1904 in a box that showed a nurse carrying the same box, which, in turn, showed the nurse carrying the box, and so on. The simplest form is to scale and translate an exact copy of the image, place it within the original, and then repeat. Take a look at this Demonstration using the original Droste box artwork. But much more interesting results can be achieved when you get complex analysis involved. M.C. Escher was the first to popularize applying conformal mappings to images, but with computers we can easily apply the same ideas to photographs, to get results like this:
The idea isn’t new, but when I came to do it myself, the methods that I found on the web seemed rather unsatisfactory. The various recipes that I looked at either involved too much manual copy-and-paste work on the images or were prone to resolution mismatches where the zoomed and shrunken parts of the image met. As usual, I ended up turning to Mathematica, and in this case, gridMathematica.
In essence, the idea is simple. We create an image where a pixel at position {x,y} in our output image is derived from the color at f[{x,y}] in our source image. The magic is in the particular choice of the function f. But before we get to that, we need some plumbing.
Mathematica doesn’t currently contain this arbitrary geometric transformation operation for images, so I need to start by implementing it. One important detail is to calculate colors continuously between pixels, to avoid resolution-matching problems and unnecessary pixelation in areas of the image that get enlarged. My method was to create a linear interpolation of all the pixel values in each of the RGB channels of the image data. This is a computationally expensive approach, especially with 10-megapixel source images, but it gives the quality that I want.
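A sketch of that setup might look something like this; the names makeChannelInterpolations and sourceColor are illustrative, not the code actually used for the images in this post:

    (* A minimal sketch: build one linear interpolation per RGB channel so that
       colors can be sampled at fractional pixel positions. ImageData returns
       rows from the top of the image down. *)
    makeChannelInterpolations[img_Image] :=
     With[{data = ImageData[ColorConvert[img, "RGB"]]},
      Table[
       ListInterpolation[data[[All, All, channel]], InterpolationOrder -> 1],
       {channel, 3}]]

    (* Sample an RGB color at a (possibly non-integer) {row, column} position. *)
    sourceColor[interpolations_List, {row_?NumericQ, col_?NumericQ}] :=
     Through[interpolations[row, col]]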
To compensate for my expensive choice, I set everything up for parallel computation from the start using Mathematica 7’s built-in parallelization tools. So instead of just using Table to generate the grid of pixels, I created a function that calls Table to generate a slice of the final image, and then called that function from ParallelTable to distribute the work across multiple CPUs. It is 3–4 lines longer, but that is half the work of making the code parallel.
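In outline, the structure could look like the following sketch, where transformedColor stands in for whatever color lookup the transformation ultimately performs; both imageSlice and transformedColor are assumed to have been distributed to the kernels (which happens in the next step):

    (* A minimal sketch of the slicing idea: imageSlice builds one band of rows
       with Table, and ParallelTable farms the bands out to the kernels. *)
    imageSlice[{rowMin_Integer, rowMax_Integer}, width_Integer] :=
     Table[transformedColor[{row, col}], {row, rowMin, rowMax}, {col, width}]

    parallelImage[height_Integer, width_Integer, bands_Integer] :=
     Image[Join @@ ParallelTable[
        imageSlice[
         {Floor[(band - 1) height/bands] + 1, Floor[band height/bands]},
         width],
        {band, bands}]]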
Next, I transmit the source image to each worker and set up the interpolating function. This is the expensive part, so I only want to do this once and then be able to make all kinds of images from the source without having to do the work again. The parallelization here is pretty simple: launch the available kernels, distribute the definitions of the program, and then use ParallelEvaluate to send the image over the link and request the remote creation of the interpolation functions.
There is a subtle trick in here of transmitting the image as a string representing the JPEG file contents, rather than transmitting the uncompressed pixel data. The string is a much smaller object and so faster to transmit.
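Put together, the initialization might look something like this sketch, reusing the hypothetical makeChannelInterpolations from above (a real program would distribute all of its definitions, not just this one):

    (* A minimal sketch of the remote setup. The image is shipped to each kernel
       as a JPEG string rather than as raw pixel data, then decoded and turned
       into interpolation functions on the remote side. *)
    LaunchKernels[];
    DistributeDefinitions[makeChannelInterpolations];

    initializeKernels[img_Image] :=
     With[{jpegString = ExportString[img, "JPEG"]},
      ParallelEvaluate[
       remoteImage = ImportString[jpegString, "JPEG"];
       remoteInterpolations = makeChannelInterpolations[remoteImage];]]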
Set up like this, it is easy for me to grab additional computation kernels via gridMathematica from other computers around the office to speed up the computation.
There is one more necessary bit of code. I’m taking a shortcut in the math that requires the source image to be cropped to the right aspect ratio and centered.
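For a square working region, that cropping step might be as simple as this sketch (the square assumption is mine, for illustration):

    (* A minimal sketch: crop the source image centrally to the largest square
       that fits. ImageCrop crops around the center by default. *)
    cropToSquare[img_Image] :=
     ImageCrop[img, {1, 1} Min[ImageDimensions[img]]]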
Here is what my source image looks like when cropped:
OK, the boring work is out of the way now. Let’s get to the fun.
The swirl that I am using is based on the peculiar properties of Power in the complex plane. If instead of thinking of our coordinates as an {x,y} pair, we think of them as the complex number p=x+I y, then we can apply the mapping f[p]:=p^c and convert the result back to Cartesian coordinates. It is pretty hard to get your head around the effects of different values of c, especially when you use the wider generalization a+(b+c p)^d, where {a,b,c,d} are complex. So I turned to the Wolfram Demonstrations Project, where I found a Demonstration on conformal mapping and modified its code to use the mapping that interests me.
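The mapping itself is only a couple of lines; a sketch might look like this (the parameter names follow the formula above, and the default values are purely illustrative—with the defaults the map is the identity):

    (* A minimal sketch of the coordinate mapping: treat {x, y} as the complex
       number p = x + I y, apply the generalized power map a + (b + c p)^d, and
       convert the result back to Cartesian coordinates. *)
    conformalMap[{x_?NumericQ, y_?NumericQ}, a_: 0, b_: 0, c_: 1, d_: 1] :=
     With[{q = a + (b + c (x + I y))^d},
      {Re[q], Im[q]}]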
After a bit of experimentation and head scratching, I worked out the effects of the parameters, calibrated good default values, and created named options in the relevant places in the magic formula. Now I have my weird twist.
I achieve the self-replication by jumping inward whenever a pixel outside the image is requested, and outward whenever a pixel inside a defined frame is requested.
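Here is a sketch of that replication logic, under the simplifying assumptions (mine, not necessarily those of the original code) that the image occupies the unit square and the frame is an axis-aligned rectangle with the same aspect ratio as the image, given by its lower-left and upper-right corners:

    (* A minimal sketch of the replication: points outside the image are pulled
       inward to the corresponding point of the framed copy; points inside the
       frame are pushed outward to the enclosing copy. frameMin and frameMax are
       the corners of the frame rectangle in unit-square coordinates. Note that
       the exact center of the frame is a fixed point of the outward map and
       would recurse forever; a real implementation needs to special-case it. *)
    replicate[{x_, y_}, {frameMin_, frameMax_}] :=
     Which[
      x < 0 || x > 1 || y < 0 || y > 1,
      replicate[frameMin + {x, y} (frameMax - frameMin), {frameMin, frameMax}],
      frameMin[[1]] < x < frameMax[[1]] && frameMin[[2]] < y < frameMax[[2]],
      replicate[({x, y} - frameMin)/(frameMax - frameMin), {frameMin, frameMax}],
      True, {x, y}]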
Aesthetically, I want to cover the join between the images with some kind of frame to make it seem natural. My example is using a picture frame, but I have seen examples using doorways, windows, computer monitors, and so on.
Doing this programmatically gets rid of any manual copy-and-paste and also helps to keep the resolution consistent.
So that’s all the hard work done—now let’s start playing! Initialize the kernels with the source image, providing the coordinates of the internal frame (you can get those easily using the coordinate picker on the 2D Drawing palette):
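In terms of the sketches above, that might look roughly like this; the file name and frame coordinates are illustrative only, not the values used for these pictures:

    (* Hypothetical usage, reusing the sketches above. The frame corner values
       would normally come from the coordinate picker on the 2D Drawing palette. *)
    source = cropToSquare[Import["source.jpg"]];
    initializeKernels[source];
    frameCorners = {{0.35, 0.30}, {0.65, 0.60}};  (* illustrative values only *)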
Now generate a 400-pixel image:
There are lots of combinations of parameters, such as one with a double helix:
One spiral in the opposite direction:
One with no spiral, just replication:
One that makes octagons by making two copies per spiral:
And even one with no recursion and no spiral, just two copies wrapped together:
Finally, a big one—a DVD-quality movie zooming into the recursive spiral. This is computationally very expensive, calling each of the 3 interpolation functions, each built from 10,000,000 pixels, around 400,000 times for each of the 60 frames. This is where my forward planning on parallelizing the code really pays off. Of course, being at Wolfram Research, I have a number of gridMathematica licenses available, all broadcasting their presence with Wolfram Lightweight Grid Manager. I open up the Parallel Configuration preferences, and they all magically appear. A few clicks and I have added 16 more kernels to the 2 local ones that I am running on my puny laptop, and without any changes to the code, the movie gets generated about 8 times faster (and 16 times faster than sequential evaluation). I don’t actually know where the code is running, but because the program and source image are all transmitted automatically, I don’t care. You can watch a screencast of me setting it up here.
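For the curious, the outer loop over frames needs no new parallel code; something along these lines would do, assuming a hypothetical drosteImage[zoom, size] wrapper that renders one frame at a given zoom factor using the machinery above:

    (* A minimal sketch of the zoom movie: render 60 frames at increasing zoom
       and export them as a video file. Each frame is itself computed in
       parallel, so the frame loop can stay sequential. *)
    frames = Table[drosteImage[zoom, 720], {zoom, 1., 2., 1./59}];
    Export["droste-zoom.avi", frames];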
There are plenty of modifications that can be made to this code; for instance, changing ReplicateRegion will allow non-rectangular frames such as circles, or even color- or transparency-based control of the replication. Try it on your own and see what you can create.