Computational Stippling: Can Machines Do as Well as Humans?
Stippling is a kind of drawing style using only points to mimic lines, edges, and grayscale. The entire drawing consists of nothing but dots on a white background; the density of the dots gives the impression of grayscale shading.
Stippling was first invented around 1510 as an engraving technique, and it later became popular in many fields because it requires just one color of ink.
Here is a photo of a fine example taken from an exhibition of lithography and copperplate art (the Centenary of European Engraving Exhibition held at the Hubei Museum of Art in March 2015; in case you’re curious, here is the museum’s official page in English).
The art piece is a lithographic print. Viewed from a meter away, it shows remarkable detail and appears realistic. Up close, however, you can see it is made up of hundreds of thousands of handcrafted stipples, with the original marble stone giving it even more texture.
From my point of view, this artistic stippling method is like a simulation of variable solidity using small dotted patterns, like a really dense dot matrix printer, except that these patterns are handmade. It is fascinating to look at lithographs and observe how a macro reality emerges from uncountable random dots: the mirror reflection on the floor, the soft look of the cloth, and the mottling engraved on the building by history. (Photos were taken at the same exhibition.)
As a technique invented five hundred years ago, stippling has become popular in so many fields way beyond its original intention because of its simplicity. It was once the standard for technical illustrations, especially in math and mechanics books. Even today, experts from different domains still use stippling in their daily work.
Typical stippling almost always uses one or two colors. Just by varying the density and distribution of the dots, one can create all sorts of textures and tones without introducing artifacts (one cannot achieve both with techniques like hatching or line drawing). That makes stippling an excellent choice when archaeologists, geologists, and biologists are doing fieldwork and want to record their findings immediately: they don't want to haul heavy painting gear, and a single pen is good enough for stippling. Even today, compared to photography, stippling still holds some advantages despite requiring a large amount of time. For example, the author can mark and emphasize any features he or she wants right on the drawing; thus, the technique remains an important skill, one that is still being taught and researched today.
Here to support my point, I made these dinnertime paper-napkin stipplings (if they look like a complete disaster to you, please forgive me, as I’m not an expert and they only took about five minutes):
It is also very interesting to note that stippling is such a natural way to distribute dense points that the strategy is actually adopted by many living things, like this one in my dish:
In the computer age, there is a special computer graphics domain called non-photorealistic rendering. Stippling is one of its basic artistic painting styles, and can be used as the foundation of many other synthetic styles, such as mosaic, stained glass, hatching, etc.
Back to the art side. Stippling shows interesting connections with Divisionism and Pointillism, which both belong to Neo-Impressionism.
People are still attracted to this ancient technique in popular art. There are even stipple tattoos on Pinterest. You can watch people doing it with stunning effect. Perhaps you can even learn stippling yourself on the internet.
Needless to say, stippling as a handcrafted art really requires lots of patience. The manual process is time consuming. Moreover, it takes a lot of practice to become skillful.
After trying so hard on the napkin without satisfaction, I had a closer look at the masterpieces. I noticed that the points in the drawings seem random but are actually ordered, a bit like the structure seen in a quasicrystal:
In the above comparison, the left figure corresponds to randomly distributed points, while the right figure corresponds to a distribution like those found in stippling drawings. Obviously, they are very different from each other.
It turns out when using points to approximate grayscale shading, a random set of points with uniform distribution is usually not good enough.
To illustrate that, both images in the comparison below have 63,024 points sampled from the same distribution with the same contrast. If we think of the original grayscale image as a two-dimensional distribution, the local density of the two resulting images is the same at every corresponding position. Therefore, the contrast of the two images must be the same as well, which can be verified by shrinking them until the individual points no longer distract the eye.
Nevertheless, the one on the right is a good stippling (or, as it’s usually called, well spaced), while the one on the left has too many unwanted small artifacts—clumps and voids that do not exist in the original image.
Now it is not very hard to see that in the stippling graphics, the triangle formed by any three mutually nearest points is nearly equilateral. This turns out to be the essential property of a “good” stippling.
Given that unique micro scale character, I couldn’t help wondering: is it possible to generate a stippling drawing from any image programmatically?
The answer is yes: I have already shown one result from my generator above, and I will describe it in detail in the rest of this post.
Actually, there are not only lots of papers on this topic, but also well-made applications.
Back to the first comparison example. In order to quantitatively reveal the “well-spaced” property, I drew the DelaunayMesh of the points and examined it:
With the meshes, I can confirm my equilateral triangle conjecture numerically by graphing the distribution of the interior angles of the cells in the meshes:
Recalling the duality between a Delaunay mesh and a Voronoi mesh, and with the help of lots of academic papers, I eventually inferred that my equilateral Delaunay mesh corresponds to the so-called centroidal Voronoi diagram (CVD). Indeed, the CVD is the de facto method for computer-generated stippling. Moreover, there is a dedicated algorithm for it due to Lloyd:
1. Generate n random points inside the region of interest
2. Generate the Voronoi diagram of the n points
3. Find the centroid (i.e. center of mass) of each Voronoi cell
4. Use the n centroids as the resulting points
5. If satisfied, stop; otherwise, return to step 2
Here the key steps are the Voronoi diagram generation and the centroid finding. The former is a perfect match for the bounded version of the built-in function VoronoiMesh. The latter, as the Voronoi cells for a closed region are always closed convex polygons, has a simple formula that I’d like to briefly describe as follows for completeness. If you’re not into math, it can be skipped safely without harming the joy of stippling!
Now suppose I have a cell defined by n vertices ordered clockwise (or counterclockwise) {P1=(x1, y1), P2=(x2, y2),… Pn=(xn, yn)}; then its centroid C can be determined as:
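With the indices taken cyclically, i.e. $P_{n+1}=P_1$, the standard polygon centroid formula reads:

$$
A=\frac{1}{2}\sum_{k=1}^{n}\left(x_k\,y_{k+1}-x_{k+1}\,y_k\right),\qquad
C=\frac{1}{6A}\sum_{k=1}^{n}\left(P_k+P_{k+1}\right)\left(x_k\,y_{k+1}-x_{k+1}\,y_k\right)\tag{1}
$$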
As a test, I generated 2,500 uniformly distributed random points in the square region [-1,1]×[-1,1]:
Its Voronoi diagram is:
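In code, these two steps might look like this (a minimal sketch; the names pts and vm are my own):

```wolfram
(* 2,500 random points and their Voronoi diagram bounded by the square *)
pts = RandomReal[{-1, 1}, {2500, 2}];
vm = VoronoiMesh[pts, {{-1, 1}, {-1, 1}}]
```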
The centroid-finding step of Lloyd’s algorithm can be expressed as the following findCentroid function:
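(A minimal sketch implementing equation 1 directly; the post’s original definition may differ.)

```wolfram
(* centroid of a closed convex polygon, per equation 1 *)
findCentroid[Polygon[verts_]] := Module[{next = RotateLeft[verts], cross},
  cross = MapThread[#1[[1]] #2[[2]] - #2[[1]] #1[[2]] &, {verts, next}];
  Total[(verts + next) cross]/(3 Total[cross])]
```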
Here, just to illustrate the principle of the algorithm, and to make things simpler to understand by ensuring that the code structure is similar to the weighted centroidal Voronoi diagram (which I will describe later), I have defined my own findCentroid function. Note that for uniform cases, there is the much more efficient built-in function, MeshCellCentroid:
Each Polygon from the Voronoi mesh can be extracted with MeshPrimitives[...,2] and then piped to findCentroid to complete one iteration:
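One iteration can then be sketched as (the bounding box is an assumption):

```wolfram
(* one Lloyd iteration: Voronoi cells -> their centroids *)
lloydStep[points_] := findCentroid /@
  MeshPrimitives[VoronoiMesh[points, {{-1, 1}, {-1, 1}}], 2]
```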
Now we’re ready to animate the first 50 iteration results to give a rough illustration of the CVD:
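A rough version of that animation (a sketch; refinedPts keeps the final iterate for later comparisons):

```wolfram
steps = NestList[lloydStep, pts, 50];
refinedPts = Last[steps];
ListAnimate[Graphics[Point[#], PlotRange -> {{-1, 1}, {-1, 1}}] & /@ steps]
```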
There are various ways to show the difference between the point distributions before and after the process:
For example, I can use NearestNeighborGraph to illustrate their connectivity, which will highlight the unwanted voids in the former case:
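For instance (a sketch; the neighbor count of 3 is my choice):

```wolfram
(* connectivity of each point to its 3 nearest neighbors, before and after *)
NearestNeighborGraph[#, 3] & /@ {pts, refinedPts}
```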
Alternatively, as was shown earlier, I can compare the statistics of the interior angles. After Lloyd’s algorithm, the mesh angles are much nearer to 60°:
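The angle statistics can be gathered from the Delaunay triangles, e.g. as follows (a sketch):

```wolfram
(* interior angles (in degrees) of all Delaunay triangles of a point set *)
triAngles[{a_, b_, c_}] := {VectorAngle[b - a, c - a],
    VectorAngle[a - b, c - b], VectorAngle[a - c, b - c]}/Degree;
angles[points_] := Flatten[triAngles /@
    MeshPrimitives[DelaunayMesh[points], 2][[All, 1]]];
Histogram[{angles[pts], angles[refinedPts]}, {0, 180, 5}]
```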
Finally, to give another intuitive impression of the “well-spaced” property from a different point of view, I’d like to compare the discrete Fourier transform of the initial points with that of the refined points:
Basically, the Fourier transform measures the significance of periodic structures at different periods. Structures with the largest periods (say, one repeating every 100 pixels) correspond to the positions nearest the center, and those with the smallest periods (say, one repeating every 2 pixels) correspond to the positions farthest out. The value at each position indicates the significance of the corresponding structure; in the above plots, larger values are shown in whiter colors. The central white spot corresponds to the sum of all the data, which is not of interest here. In the initial case, periodic structures of all periods have almost the same significance, which indicates a uniform random distribution (so-called white noise). The refined case is more interesting. The large-scale periodic structures are all flattened out, appearing as a black disk around the center spot, which means the points are distributed uniformly at the macro scale. Around that dark disk there is a bright circle; the circular shape indicates structures with nearly the same period but in different directions, namely the approximately equilateral triangles in the mesh, all of nearly the same size but with random orientations. The outer “rings” correspond to higher-order structures, which would not appear in a perfectly refined case. So it can be clearly seen that the refined points effectively achieve both isotropy and the suppression of low-frequency structure. This is another way to understand the term “well spaced,” and it connects the property to the concept of blue noise.
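For reference, such spectra can be approximated by binning the points into a raster and taking its discrete Fourier transform (a sketch; the 256×256 resolution is my choice):

```wolfram
(* centered log-magnitude spectrum of a point set *)
spectrum[points_] := Module[{counts, mag},
  counts = BinCounts[points, {-1, 1, 2/256}, {-1, 1, 2/256}];
  mag = RotateLeft[Abs@Fourier[counts], {128, 128}];
  Image[Rescale@Log[1 + mag]]];
spectrum /@ {pts, refinedPts}
```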
So far, it seems I have solved the puzzle of stippling a uniform area, but how about a non-uniform area—say, the Lena example?
After some trials, I realized Lloyd’s algorithm could not be naively adapted here, as it always smoothed out the distribution and resulted in a uniform CVD. It looked like I had to find another way.
The answer turned out to be a simple modification of Lloyd’s algorithm. However, back when I thought about the problem, a totally different idea showed an interesting possibility. It’s a mathematically beautiful method, which I abandoned only because I did not have enough time to investigate it deeply. So I’d like to talk about it in brief.
Recalling that I was looking for a Delaunay mesh, with cells that resemble equilateral triangles as much as possible, one thing that immediately came to my mind was the conformal map, which transforms between two spaces while preserving angles locally. Therefore, under a carefully designed conformal map, a good stippling on a uniform area is guaranteed to be transformed to a good one on a specified non-uniform area.
A simple example of a conformal map is given by a holomorphic complex function f with a nonvanishing derivative. Such an f transforms the stippling points refinedPts, uniformly distributed on the square [-1,1]×[-1,1], to a new point distribution transPts:
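As a hypothetical illustration (the f actually used in the post is not reproduced here), one can take f(z) = e^z, which is holomorphic and injective on the square:

```wolfram
(* push the well-spaced points through a conformal map; this f is my own choice *)
f[z_] := Exp[z];
transPts = ReIm[f[#1 + I #2]] & @@@ refinedPts;
Graphics[Point[transPts]]
```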
The result looks very nice in the sense of being well spaced, which can also be confirmed with its Delaunay mesh, its Voronoi mesh, and its connectivity:
So far, so good! However, this theoretically elegant approach is not easily generalizable. Finding the right f for an arbitrary target distribution (e.g. the grayscale distribution of an image) demands highly sophisticated mathematics. My numerical trials all failed, so I’m not going to discuss the theoretical details here. If you are interested in them, there is a dedicated research field called computational conformal mapping.
Despite the elegance of the conformal mapping, I was forced back to Lloyd’s algorithm. I had to give it another thought. Yes, it doesn’t work on non-uniform cases, but it’s close! Maybe I can generalize it. After reading through some papers, I confirmed that thought. The popular method for stippling non-uniform areas indeed comes from a modification of the CVD, developed by Adrian Secord, called the weighted centroidal Voronoi diagram.
The new algorithm is similar to Lloyd’s, except that in step three, when looking for the centroid of the Voronoi cell, a variable areal density ρ(P) is considered, which is proportional to the grayscale at the location P. Thus instead of using equation 1, I can calculate the centroid according to the definition:
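$$
C=\frac{\displaystyle\int_{\text{cell}} \rho(P)\,P\,\mathrm{d}A}{\displaystyle\int_{\text{cell}} \rho(P)\,\mathrm{d}A}
$$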
Clearly these integrals are much more time-consuming than equation 1. In his paper and master’s thesis, Secord presents an efficient way to compute them that involves precomputing certain integrals. However, I noticed (without theoretical proof) that the weighted CVD can be approximated sufficiently well in a much cheaper way, if we accept a compromise: not sticking to the exact formula of the centroid, but keeping the core idea of choosing C closer to points with larger weights.
My new idea was simple and naive. For a cell of n vertices {P1,…,Pn}, the algorithm acts as follows:
1. Compute the geometric centroid C’
2. Compute the weights of the vertices as {w1,…,wn}
3. Compute the normalized weights {W1, …, Wn} from {w1, …, wn}
4. For every vertex Pk, move it toward C’ by an amount determined by Wk, giving a new position P’k (so the vertex with the largest weight does not move, while the vertex with the smallest weight moves most)
5. Compute the geometric centroid C of the new cell defined by {P’1,…,P’n} as the final result
Written in the Wolfram Language, it’s this findCentroidW function:
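(A reconstruction under the steps above; in particular, the normalization Wk = wk / max_j wj, which keeps the heaviest vertex fixed, is my assumption.)

```wolfram
(* weighted-centroid approximation: light vertices slide toward the geometric centroid *)
findCentroidW[Polygon[verts_], density_] := Module[{c0, w},
  c0 = RegionCentroid[Polygon[verts]];   (* step 1: geometric centroid C' *)
  w = density @@@ verts;                 (* step 2: vertex weights *)
  w = w/Max[w];                          (* step 3: normalized weights, assumed wk/Max *)
  (* steps 4-5: P'k = C' + Wk (Pk - C'), then the centroid of the moved cell *)
  RegionCentroid[Polygon[MapThread[c0 + #1 (#2 - c0) &, {w, verts}]]]]
```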
Since the convergence of the algorithm was not obvious (and since it’s quite time-consuming), I decided to stop early after a few dozen iterations in the numerical experiments.
Now that everything was set up, I felt so confident in trying my functions in the real world! The first example I chose was the Lena photo.
I imported the original image, keeping both the grayscale and color versions for later use in the artistic rendering section:
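For instance (a sketch; I use the version shipped with ExampleData):

```wolfram
(* keep a color copy for the artistic rendering experiments later on *)
colorImg = ExampleData[{"TestImage", "Lena"}];
img = ColorConvert[colorImg, "Grayscale"];
```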
For the convenience of Interpolation, I used ColorNegate. For a better visual result, I enhanced the edges, which have large gradients:
I rescaled the image coordinates to the rectangle region [-1,1]×[-1,1] for convenience, and used Interpolation on the whole image to get a smooth description of the grayscale field:
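A sketch of this preprocessing (the edge-enhancement strength 0.3 and the filter radius are my choices):

```wolfram
(* negate, enhance strong edges, and build a smooth density field on [-1,1]^2 *)
neg = ColorNegate[img];
enhanced = ImageAdjust[neg + 0.3 ImageAdjust@GradientFilter[neg, 2]];
interp = ListInterpolation[Reverse@ImageData[enhanced], {{-1, 1}, {-1, 1}}];
densityFunc[x_, y_] := interp[y, x] (* ImageData is row-major from the top, hence the flip and swap *)
```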
To have a good initial point distribution, I sampled the points so that the local density is proportional to the local grayscale (though this need not be precise, as the weighted Voronoi process will smooth the distribution anyway). Taking advantage of ContourPlot, I generated a few regions according to the level sets of the densityFunc:
For each region in levelRegions, I first sampled points inside it on a regular grid, with the area of the grid cell inversely proportional to the level counter of the region. I immediately noticed that a regular grid can be a steady state of my algorithm, so to ensure an isotropic result, some initial randomness was needed. For that purpose, I added a dithering effect to the points, with its strength specified by a parameter 0≤κ≤1 (where 0 gives no dithering, while 1 gives a globally random distribution):
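The sampling can be sketched as follows; for simplicity this tests the density directly instead of extracting explicit ContourPlot regions, and the five band edges, cell sizes, and κ = 0.6 are all my assumptions:

```wolfram
(* jittered-grid sampling of one density band; κ controls the dithering strength *)
sampleBand[{lo_, hi_}, cell_, κ_] := Module[{grid},
  grid = Flatten[Table[{x, y}, {x, -1, 1, cell}, {y, -1, 1, cell}], 1];
  grid = Select[grid, lo <= densityFunc @@ # <= hi &];
  # + κ cell RandomReal[{-0.5, 0.5}, 2] & /@ grid];
initPts = Join @@ MapThread[sampleBand[#1, #2, 0.6] &,
   {Partition[Range[0, 1, 1/5], 2, 1], {0.05, 0.03, 0.02, 0.014, 0.01}}];
```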
The total number of points I had sampled as initPts was huge:
However, their quality was poor:
The weighted Voronoi relaxation process, similar to the one in the uniform case, was going to improve the distribution dramatically, though the computation was a bit slow due to the number of points. Notice that I forced an early stop after 30 iterations:
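The loop itself is a one-liner on top of findCentroidW (a sketch; the bounding box is an assumption):

```wolfram
(* 30 weighted Lloyd iterations with an explicit early stop *)
lloydStepW[points_] := findCentroidW[#, densityFunc] & /@
   MeshPrimitives[VoronoiMesh[points, {{-1, 1}, {-1, 1}}], 2];
finalPts = Nest[lloydStepW, initPts, 30];
```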
In spite of only 30 iterations, the result was in my opinion fairly good (note that some visual artifacts in the following image are due to the rasterizing process during display):
I then applied a similar visual analysis to it, and found that the pattern of the connectivity graph formed some rather interesting self-similar, multi-resolution tiles:
Statistics on the interior angles of the corresponding DelaunayMesh also indicated I had indeed achieved the well-spaced distribution:
Now I felt quite satisfied!
However, as a person who likes to reinvent my own wheel, I would not stop here!
As I have mentioned before, some artistic rendering effects can be simulated based on the stippling result. Encouraged by the above result, I intended to try replicating some of them.
One effect is a simple hatching.
Inspired by the example in the Properties & Relations section of the documentation for GradientOrientationFilter, I initialized a stroke at each stippling point, with its length controlled by the local grayscale and its orientation by the local gradient direction:
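A sketch of that construction (the length scale 0.012 and filter radius 2 are my choices):

```wolfram
(* one stroke per stipple: length from local density, direction from gradient orientation *)
orientData = ListInterpolation[
   Reverse@ImageData@GradientOrientationFilter[enhanced, 2], {{-1, 1}, {-1, 1}}];
stroke[{x_, y_}] := With[{l = 0.012 densityFunc[x, y], θ = orientData[y, x]},
   Line[{{x, y} - l {Cos[θ], Sin[θ]}, {x, y} + l {Cos[θ], Sin[θ]}}]];
Graphics[stroke /@ finalPts]
```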
There seemed to be some artifacts in the dark regions, possibly due to the naive approach, but the whole result was acceptable for me.
Another style I tried was Pointillism.
For this simulation, I had to take care of the colors:
Then I added random deviations to the color of each point:
I finalized the process with the morphological Opening operation and matched the histogram with the original color image using HistogramTransform:
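Put together, the pipeline might look like this (point size, jitter amplitude, and radii are assumptions):

```wolfram
(* sample the color at each stipple, jitter it, render, then clean up and match histograms *)
jitterColor[{x_, y_}] := RGBColor @@ Clip[
    ImageValue[colorImg, Rescale[{x, y}, {-1, 1}, {0, 1}] ImageDimensions[colorImg]] +
     RandomReal[{-0.05, 0.05}, 3], {0, 1}];
dots = Graphics[{PointSize[0.005], {jitterColor[#], Point[#]} & /@ finalPts}];
HistogramTransform[Opening[Rasterize[dots, "Image", ImageSize -> 1000], 1], colorImg]
```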
However, I’d like to try something more “Impressionist.”
The rendering in the last example used strokes of identical size, but in practice stroke size often varies with local properties of the subject being drawn, so I tried to include that characteristic.
As the local property to be reflected, I chose ImageSaliencyFilter (though it is, of course, totally fine to choose other measures):
Like the densityFunc, I interpolated the coarseMask and generated some level regions:
Now the stippling points could be grouped according to the coarseLevelRegions:
I then composed the stroke template as a square centered at the given point, with its color determined and randomized in a way similar to that in the last example:
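One possible template, reusing jitterColor from the pointillism sketch above (the random rotation range is my choice):

```wolfram
(* a randomly rotated square stroke centered at the given stipple *)
strokeTemplate[{x_, y_}, size_] := {jitterColor[{x, y}], EdgeForm[],
   Rotate[Rectangle[{x - size/2, y - size/2}, {x + size/2, y + size/2}],
    RandomReal[{0, Pi/2}], {x, y}]};
```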
Now I was ready to paint the picture layer by layer (note that the following code can be quite time consuming if you try to display coarseLayers!):
Composing the layers, I got the final Impressionist result:
Note the vivid local color blocks, especially in the feather and hat regions, and how the rendering still preserves an accurate scene at the macro scale.
With such an accomplishment, I couldn’t wait to try the procedure on other pictures. One of them I want to share here is a photo of a beautiful seacoast taken in Shenzhen, China.
The original photo was taken on my smartphone. I chose it mainly because of the rich texture of the stones and waves. With a bit of performance optimization, I got some quite gorgeous results in a reasonable time. (Precisely speaking, it’s 35,581 stipples finished in 557 seconds. Much faster than my bare hands!)
The hatching and Pointillism versions were also quite nice:
For now, my adventure has stopped here. However, there are still many things I’m planning to explore in the future. I may try more realistic stroke models. I may try simulating different types of the infiltration of ink on canvas. It will also surely be fun to control a robot arm with a pen drawing the graphics out in the real world on a real canvas. So many possibilities, so much stippling joy! If you love them as well, I’d like to hear suggestions from you, and hope to have some enjoyable discussions.
Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.
What an enlightening post. I’m going to try this on my workstation first thing tmr!
Hi Mackay! Thank you very much. Hope you had fun on your workstation!
Hi Sylvia,
Very nice work. And nice pictures (even the ones on the napkins).
I have two questions.
Would it be reasonable to use e.g. Lloyd’s method on nonrectangular regions by just rejecting points generated that fall outside the region?
Could you replace Lloyd’s method with a set of quasirandom numbers, in the sense used for quasi-Monte Carlo integration? These are low-discrepancy sequences, which I think are what you are approximating via Lloyd’s iterative approach. Some code for this is on the Mathematica StackExchange forum (maybe you knew that).
http://mathematica.stackexchange.com/questions/83762/heuristic-method-to-create-low-discrepancy-sequences
Hi Daniel,
Thank you very much for your appreciation!
About your first question: I think it’s a good idea if the shape of the region of interest is not too complex and if its area is not too small compared to its minimal bounding box. When those conditions are not met, there are two alternatives I would choose. One is finding the conformal map between the region and the unit square, then directly transforming the well-spaced point set from the latter to the former. The other is generating initial points only inside the region and on its boundary contours, then doing the normal Lloyd iteration with an additional constraint on the boundary points so that they can only move along the contour lines. That way, a point set inside the region should remain inside after any number of Lloyd iterations.
About the second question: I’m aware of low-discrepancy sequences and have some basic knowledge of their usage in quasi-Monte Carlo methods, but I didn’t know about the StackExchange post you mentioned. Following your suggestion, I compared the Halton sequence (with bases 2 and 3) with the CVD. The result can be summarized in the following graph:
https://www.dropbox.com/s/7myhe2h37aj4zvg/compare%20with%20Halton%20sequence.png?dl=0
It looks like, in a certain sense, the quasi-random method lies between random and CVD. If time consumption is not a concern, I think CVD might be a better choice than the quasi-random method for stippling generation.
It’s worth mentioning that two-dimensional low-discrepancy sequences are indeed used to simulate the stippling effect in the area of non-photorealistic computer graphics (e.g. section 2.3 of the book Non-Photorealistic Computer Graphics, by Thomas Strothotte and Stefan Schlechtweg, mentions the Sobol distribution).
And I do think low-discrepancy sequences provide a much better initial point set than a randomly generated one.
This is wonderful! It is a beautiful combination of mathematics, statistics, computing, image processing, and above all art. Dürer, who excelled at engraving, and Rembrandt, who had a genius for stippling techniques in his drawings, would have loved this blog. Thank you so much for sharing.
Michael
Hi Michael, thank you very much for your praise! I feel so flattered to be mentioned alongside the true masters. The stippling algorithms based on CVD are sometimes thought of as “too good,” lacking the small imperfections found in handcrafted art. There are actually some papers on this comparison, like this one:
https://engineering.purdue.edu/purpl/level2/papers/CAe2007_Ross.pdf
Nevertheless, I believe carefully designed disturbances can be added to the CVD results so that humans will not be able to distinguish them from real handcrafted ones.
Out of curiosity did you ever consider using a modified version of Poisson disk sampling where the radius of the sample disk was varied with the intensity of the pixel at that point? I feel like you could achieve similar results with less complexity.
Hi Robert. Thank you for your suggestion! I was going to use Poisson disk sampling as an alternative method, but eventually didn’t have enough time for it. However, comparing CVD with Poisson sampling, I think methods based on the former might be more amenable to parallel computing.
Very interesting work. If you are interested in brush strokes, you may find that separating an image into layers based on hue, and then separating those layers based on wavelets (detail size), will give you areas that correspond to different brush sizes and paint colours. Build the image up in layers from the large areas (low frequency) to the fine details. Order colours for a given brush size from dark to light so that the small highlights are the top-most details, as they are often added last in traditional painting.
Hi Daniel, thank you very much for your insightful suggestion! I’ll try it and see if I can make better results!
Stippling is a kind of drawing style using only points to mimic lines, edges, and grayscale. Sounds perfect for a computer then.
How does this method relate to Floyd-Steinberg dithering? http://wiki.evilmadscientist.com/Producing_a_stippled_image_with_Gimp
Is there a connection to all those dithering algorithms?
https://en.wikipedia.org/wiki/Dither
Hi! Thank you for the in-depth question.
The first and biggest difference between dithering and stippling is their targets. While the former is used on a **raster image** to reduce the color palette (please refer to the Wolfram Language function ColorQuantize[img, Dithering->True]), the latter is essentially **vector graphics** composed of a large group of primitives (e.g. points or disks).
But to put the above difference another way, it is also one of their relations. A dithered image can be viewed as a special case of stippling, where every primitive is restricted to a unit square (i.e. a pixel) and its position is restricted to a pair of integers.
There is a deeper difference on the algorithmic side.
I believe you must have noticed the evenly spaced points here and there in a dithered image. As you have pointed out, the Floyd-Steinberg dithering algorithm, one of the most famous of its kind, is a type of error-diffusion (https://en.wikipedia.org/wiki/Error_diffusion) algorithm. The principle behind error diffusion is somewhat related to the so-called recurrence relation (https://en.wikipedia.org/wiki/Recurrence_relation), a famous example of which is the bit-shift map (https://en.wikipedia.org/wiki/Bit_shift_map). This path goes deep into a beautiful branch of math, symbolic dynamics (https://en.wikipedia.org/wiki/Symbolic_dynamics), where complexity and chaos arise naturally. Math aside, noting the finite precision of color values, I suspect the phenomenon of even spacing among some points in a dithered image arises for a reason similar to that for the periodicity of the bit-shift map (https://en.wikipedia.org/wiki/Dyadic_transformation#Periodicity_and_non-periodicity).
On the other hand, stippling seems to be a more predictable algorithm. Its unstructured appearance could come entirely from the initial randomness and an early stop before convergence. Given an infinite number of iterations, I would not be surprised to see most of the centroidal Voronoi cells converging to regular hexagons.
Hope this is helpful to you.