
Computational Microscopy with the Wolfram Language

Microscopes were invented almost four hundred years ago. But today, there’s a revolution in microscopy (as in so many other fields) driven by computation. We’ve been working hard to make the Wolfram Language a definitive platform for the emerging field of computational microscopy.

It all starts with getting an image of some kind, whether from a light or x-ray microscope, a transmission electron microscope (TEM), a confocal laser scanning microscope (CLSM), a two-photon excitation microscope, a scanning electron microscope (SEM) or one of many other instruments. You can then proceed to enhance images, reconstruct objects and perform measurement, detection, recognition and classification. At last month’s Microscopy & Microanalysis conference, we showed various examples of this pipeline, starting with a Zeiss microscope and a ToupTek digital camera.

Microanalysis tools

Image Acquisition

Use Import to bring standard image file formats into the Wolfram Language. (More exotic file formats generated by microscopes are accessible via BioFormatsLink.) What’s even cooler is that you can also connect a microscope and stream images directly into the Wolfram Language with CurrentImage.
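For example (a minimal sketch; "sample.tif" is a hypothetical file name, not one from this post):

(* import a standard image file; the file name is a placeholder *)
img = Import["sample.tif"];

(* or capture a frame directly from a connected camera or microscope *)
img = CurrentImage[]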

Once an image is imported, you’re off to the races with all the power of the Wolfram Language.

Brightness Equalization

Images acquired by microscopes often exhibit uneven illumination. This can be fixed either by adjusting the image background according to a given flat field or by modeling the illumination of the visible background. BrightnessEqualize achieves exactly this.

Here is a raw image of a sugar crystal under the microscope:

img = (* sugar crystal image elided *);

Here is a pure image adjustment:

ImageAdjust[img]

And here is the result of brightness equalization using an empirical flat field:

BrightnessEqualize[img, (* flat-field image elided *)]

If a flat-field image is not available, construct one. You can segment the background and model its illumination with a second-order polynomial:

mask = ColorNegate[AlphaChannel[RemoveBackground[img]]]

BrightnessEqualize[img, {"Global", 2}, Masking -> mask]

Color Deconvolution

Color deconvolution is a technique to convert images of stained samples into distributions of dye uptake.

Here is a stained sample using hematoxylin C19 and DAB (3,3′-Diaminobenzidine):

img = (* stained-sample image elided *);

The corresponding RGB color for each dye is:

Hematoxylin = (* RGBColor swatch elided *);
Diaminobenzidine = (* RGBColor swatch elided *);
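The original color swatches are not reproduced here. If you need concrete numbers to experiment with, one common choice (an assumption borrowed from the widely used Ruifrok–Johnston color deconvolution vectors, not the values from this post) is:

(* assumed stain vectors after Ruifrok & Johnston (2001); not this post's original swatches *)
Hematoxylin = RGBColor[0.65, 0.70, 0.29];
Diaminobenzidine = RGBColor[0.27, 0.57, 0.78];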

Obtain the transformation matrix from dye concentration to RGB colors:

HDABtoRGB = Transpose[List @@@ {Hematoxylin, Diaminobenzidine}];
MatrixForm[HDABtoRGB]

Compute the inverse transformation from color to dye concentration:

RGBtoHDAB = PseudoInverse[HDABtoRGB];
MatrixForm[RGBtoHDAB]

Perform the actual de-mixing in the log-scale of the color intensities: by the Beer–Lambert law, the transmitted light intensity falls off exponentially with the dye concentration, so the concentration is proportional to the negative logarithm of the intensity:

logRGB = Log[ImageClip[ColorConvert[img, "RGB"]]] + 2.0;

The color deconvolution into hematoxylin and DAB dye concentration:

HDAB = Map[ImageAdjust, -RGBtoHDAB.ColorSeparate[logRGB]]

False coloring of the dye concentrations, mapping hematoxylin to the red channel and DAB to the green and blue channels:

ColorCombine[{{1, 0}, {0, 1}, {0, 1}}.HDAB]

Image Viewing and Manual Measurements

To view large images, use DynamicImage, an efficient image pane for zooming, panning, dragging and scrolling of in-core or out-of-core images:

largeImg = URLDownload[
   "http://cache.boston.com/universal/site_graphics/blogs/bigpicture/micro_11_14/m08_pollenmix.jpg"];

DynamicImage[largeImg, ImageSize -> 400]

The following code is all it takes to implement a customized interactive interface for radius measurements of circular objects. You can move the position and radius of the superimposed circle via Alt+drag or Command+drag. The radius of the circle is displayed in the top-left corner:

DynamicModule[
 {r = 80, center = Import[largeImg, "ImageSize"]/2, centerMovQ = False},
 Panel@EventHandler[
   DynamicImage[
    largeImg,
    Epilog -> {Yellow, Circle[Dynamic[center], Dynamic[r]],
      Dynamic@Style[
        Text["r=" <> ToString[Round@r] <> "px", Scaled[{0.1, 0.9}]],
        FontSize -> 14]},
    ImageSize -> 400
    ],
   {"MouseDown" :> If[
      CurrentValue["AltKey"],
      If[
       EuclideanDistance[center, MousePosition["Graphics"]] < 2 r/3,
       center = MousePosition["Graphics"]; centerMovQ = True,
       r = EuclideanDistance[center, MousePosition["Graphics"]];
       centerMovQ = False
       ]
      ],
    "MouseDragged" :> If[
      CurrentValue["AltKey"],
      If[
       centerMovQ,
       center = MousePosition["Graphics"],
       r = EuclideanDistance[center, MousePosition["Graphics"]]
       ]
      ]},
   PassEventsDown -> Dynamic[Not[CurrentValue["AltKey"]]]
   ]
 ]

Focus Stacking

To overcome the shallow depth of field of microscopes, you can collect a focal stack: a stack of images, each taken at a different focal distance. You can compress the focal stack into a single image by selectively taking the in-focus regions of each image in the stack. The function ImageFocusCombine does exactly that.

stack = {(* focal-stack images elided *)};

ImageFocusCombine[stack]
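Since the stack images are not included here, you can fake a focal stack to try this out (a synthetic sketch, not this post's data) by blurring complementary halves of a test image:

(* synthetic two-image focal stack: each image is sharp in one half and blurred in the other *)
base = ExampleData[{"TestImage", "House"}];   (* 256x256 test image *)
blurred = GaussianFilter[base, 8];
stack = {
   ImageCompose[blurred, ImageTake[base, All, {1, 128}], {64, 128}],
   ImageCompose[blurred, ImageTake[base, All, {129, 256}], {192, 128}]
   };
ImageFocusCombine[stack]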

Here is a reimplementation of ImageFocusCombine that also extracts the depth information, going one step further to reconstruct a 3D model from the focal stack.

Take the norm of the Laplacian filter response as an indicator of whether a pixel is in or out of focus. The Laplacian filter picks up the high Fourier coefficients, which are the first to be attenuated when an image goes out of focus:

focusResponses = Norm[ColorSeparate[LaplacianGaussianFilter[#, {3, 1}]]] & /@ stack;
ImageAdjust /@ focusResponses

Then for each pixel, pick the layer that exhibits the largest Laplacian filter norm:

max = Image3DProjection[Image3D[focusResponses], Top, "Max"];
depthVol = nonMaxChannelSuppression[focusResponses];
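Note that nonMaxChannelSuppression is not a built-in function, and its definition is not shown in this post. Here is a minimal sketch of one possible implementation (an assumption, not necessarily the original): for each pixel, it puts a 1 in the layer that attains the maximal focus response and 0 everywhere else.

(* hypothetical helper: binary volume marking, per pixel, the layer with the largest response *)
nonMaxChannelSuppression[responses_List] :=
 Module[{data, maxvals},
  data = ImageData /@ responses;
  maxvals = MapThread[Max, data, 2];      (* per-pixel maximum across all layers *)
  Image[UnitStep[# - maxvals]] & /@ data  (* 1 where a layer attains that maximum; ties mark several layers *)
  ]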

Multiply the resulting binary volume with the focal stack and add up all layers; this collects only those pixel values that are in focus. Store the result as inFocus for later use:

inFocus = stack.depthVol

The binary volume depthVol contains the depth information of each pixel. Convert it into a two-dimensional depth map:

depthMap = ImageClip[depthVol.N[Rescale[Range[Length[depthVol]]]]]

The depth information is quite noisy and not equally reliable at all pixel locations. Only edges provide a clear indication of whether an image region is in focus. Thus, use the total of the focusResponses as a confidence measure for the depth map:

confidenceMap = Total@focusResponses

Take into account only those depth measures with a confidence larger than 0.05:

depthMap *= Binarize[confidenceMap, 0.05]

You can regularize the depth values with MedianFilter and close gaps via FillingTransform:

depthMap = FillingTransform[MedianFilter[depthMap, 3]]

Display the depth map in 3D using the in-focus image as its texture:

ListPlot3D[
 Reverse[ImageData[Thumbnail[depthMap]]],
 PlotStyle -> Texture[inFocus],
 BoxRatios -> {1, ImageAspectRatio[depthMap], 1/3},
 InterpolationOrder -> 1, Lighting -> {{"Ambient", White}},
 Mesh -> False, Axes -> None
 ]

An Example with Machine Learning: Pollen Classification

The Wolfram Language has powerful machine learning capabilities that allow you to implement various detection, recognition or classification applications in microscopy.

Here is a small dataset of six flower pollen types that we would like to classify:

pollenData = (* dataset of pollen images elided *);

Typically, training a neural network from scratch requires a huge dataset. However, by reusing a pretrained model as a feature extractor, we can classify even such a small dataset.

Take the VGG-16 network trained on ImageNet available through NetModel:

net = NetModel["VGG-16 Trained on ImageNet Competition Data"]

Remove a few layers at the end that perform the specific classification in this network. This leaves you with a network that generates a feature vector:

featureFunction = Take[net, {1, "fc7"}]

Next, compute the feature vector for all images in the pollen dataset:

training = Flatten[Map[#["Pollen"] &, pollenData]];

features = Map[featureFunction, training];

features // First

The feature vectors live in a 4096-dimensional space. To quickly verify that the feature vectors are suitable for classifying the data, reduce the feature space to three dimensions and see that the pollen images group nicely by type:

xyz = DimensionReduce[features, 3, Method -> "TSNE"];
Graphics3D[
 MapThread[
  Inset[Thumbnail[RemoveBackground@#2, 32], #1] &, {xyz, training}],
 BoxRatios -> {1, 1, 1}
 ]

To increase the size of the training set and to make the classifier rotation- and reflection-invariant, generate additional data (eight variants per image: four 90° rotations plus their mirror images):

MirrorRotate[img_Image] :=
 With[{imgs = NestList[ImageRotate, img, 3]},
  Join[imgs, Map[ImageReflect, imgs]]]

training = Flatten@Map[
    Thread[Flatten[MirrorRotate /@ #["Pollen"]] -> #["Name"]] &,
    pollenData];

With that training data, create a classifier:

pollenClassifier = Classify[RandomSample@training, FeatureExtractor -> featureFunction]

Test the classifier on some new data samples:

Map[(# -> pollenClassifier[#]) &, {(* new sample images elided *)}]

An Example with Deep Neural Nets: Detecting Mitosis

The previous classifier relied on a pretrained neural network. If you have enough data, you can train a neural network from scratch: a network that automatically learns the relevant features and simultaneously acts as the classifier.

As an example, let’s talk about detecting cells that undergo mitosis. Here is a simple convolutional neural network that can do the job:

mitosisNet = NetChain[
  <|
   "layer1" -> {ConvolutionLayer[24, 5, "Stride" -> 2], BatchNormalizationLayer[], ElementwiseLayer[Ramp]},
   "layer2" -> {ConvolutionLayer[48, 3, "Stride" -> 2], BatchNormalizationLayer[], ElementwiseLayer[Ramp]},
   "layer3" -> {ConvolutionLayer[64, 3, "Stride" -> 2], BatchNormalizationLayer[], ElementwiseLayer[Ramp]},
   "layer4" -> {ConvolutionLayer[128, 3, "Stride" -> 2], BatchNormalizationLayer[], ElementwiseLayer[Ramp]},
   "layer5" -> {ConvolutionLayer[128, 3], BatchNormalizationLayer[], ElementwiseLayer[Ramp]},
   "layer6" -> {FlattenLayer[]},
   "layer7" -> {LinearLayer[64], ElementwiseLayer[Ramp]},
   "layer8" -> {LinearLayer[2], SoftmaxLayer[]}
   |>,
  "Input" -> NetEncoder[{"Image", 97, "ColorChannels" -> 3}],
  "Output" -> NetDecoder[{"Class", {True, False}}]
  ]

The data for training and testing has been extracted from the Tumor Proliferation Assessment Challenge 2016. We preprocessed the data into 97×97 images, centered around the actual cells in question.

data = (* preprocessed mitosis image dataset elided *);
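The dataset itself is not included here. Judging from how it is used below (with KeyMap and Normal), data is an association whose keys are the 97×97 cell images and whose values are the class labels; schematically (a hypothetical sketch, not the actual data):

(* assumed structure of the elided dataset *)
(* data = <| cellImage1 -> True, cellImage2 -> False, ... |> *)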

Use roughly three-quarters of the data for training and the rest for testing:

trainingData = Take[data, 1800];

testData = Drop[data, 1800];

Again, to increase the training set, perform image mirroring and rotation:

trainedMitosisNet = NetTrain[
  mitosisNet,
  Normal@RandomSample@
    AssociationMap[Thread, KeyMap[Map[MirrorRotate, #] &, trainingData]],
  MaxTrainingRounds -> 8,
  ValidationSet ->
   Normal@RandomSample@
    AssociationMap[Thread, KeyMap[Map[MirrorRotate, #] &, testData]]
  ]

Training progress readout

Calculate the classifier metrics and verify the effectiveness of the neural network:

cm = ClassifierMeasurements[trainedMitosisNet, Normal@testData]

cm["ConfusionMatrixPlot"]

cm["Accuracy"]

cm["Specificity"]

cm["Error"]

Considering how challenging the task is, an error rate of less than 10% is comparable to what a pathologist would achieve.

Conclusion

Computational microscopy is an emerging field and a prime example of how the many different capabilities of the Wolfram Language come together. We intend to expand the scope of our functions further to provide the definitive platform for microscope image analysis.



Download this post as a Computable Document Format (CDF) file. New to CDF? Get your copy for free with this one-time download.
