Wolfram Blog: News, views, & ideas from the front lines at Wolfram Research

Wolfram|Alpha Apps and Math Course Apps for Windows—Just Released
Wolfram Blog Team, December 18, 2014

Just in time for the holidays—Wolfram|Alpha apps for Windows and Windows Phone have been released! We're excited to announce that our popular Wolfram|Alpha app and several Wolfram Course Assistant Apps are now available for your Windows 8.1 devices.

The Wolfram|Alpha applications are universal apps that adopt Windows' distinct style while bringing Windows users the features people have come to expect from Wolfram|Alpha: a custom keyboard for easily entering queries, a large selection of examples for exploring Wolfram|Alpha's vast knowledgebase, history to view your recent queries, favorites so you can quickly return to the questions you ask most often, the ability to pin specific queries to the start menu, and more.

Windows phone + tablet preview

Wolfram|Alpha screenshots

We're also happy to announce the release of several of our Course Assistant Apps for Windows 8.1 devices.

These apps also feature our custom keyboards for quick entry of your homework problems. View Step-by-step solutions to learn how to solve complex math problems, plot 2D and 3D functions, explore topics from your high school and college math courses, and much more.

Precalculus examples

Deck the Halls: Tweet-a-Program Holiday Ornament Challenge
Wolfram Blog Team, December 15, 2014

It's the holiday season, and Wolfram is gearing up for bright lights and winter weather by holding a new Tweet-a-Program challenge. To help us celebrate the holidays, tweet your best holiday ornament-themed lines of Wolfram Language code. As with our other challenges, we'll use the Wolfram Language to randomly select winning tweets (along with a few of our favorites) to pin, retweet, and share with our followers. If you're a lucky winner, we'll send you a free Wolfram T-shirt!

If you need some help getting into the holiday spirit, check out these examples:

Snowflake

Red ornament

Snowman spiral
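
If you'd like a starting point, here is one illustrative ornament-style snippet of the kind you could tweet. It is a sketch of ours, not one of the examples pictured above:

    (* a six-armed snowflake built from one rotated, branched line *)
    Graphics[
      Table[
        Rotate[
          Line[{{0, 0}, {0, 1}, {0.15, 1.2}, {0, 1}, {-0.15, 1.2}, {0, 1}, {0, 1.4}}],
          k Pi/3, {0, 0}],
        {k, 0, 5}]]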

To win, tweet your submissions to @WolframTaP by 11:59pm PDT on Thursday, January 1. So that you don’t waste valuable code space, we don’t require a hashtag with your submissions. However, we do encourage you to share your code with your friends by retweeting your results with hashtag #HolidayWL.

Machine Gun Jetpack: The Real Physics of Improbable Flight
Malte Lenz, December 5, 2014

Could you fly using machine guns as the upward driving force? That's the question asked in Randall Munroe's What if? article, "Machine Gun Jetpack." It turns out you could, because some machine guns have enough thrust to lift their own weight, and then some. In this post, I'll explore the dynamics of shooting machine guns downward and study the actual forces, velocities, and heights that could be achieved. I'll also repeat the warning from the What if? post: Please do not try this at home. That's what we have modeling software for.

Machine gun with a squirrel on top

Let’s start smaller than a human, with a gray squirrel from the original story. Put this squirrel on a machine gun, fire it downward at the full automatic setting, and see what happens. I’ll be using Wolfram SystemModeler to model the dynamics of this system.

Model of a machine gun

The image above shows the model of a machine gun. It contains bullet and gun components, which are masses influenced by gravity. They are easily constructed by combining built-in mechanical components:

Mass influenced by the Earth's gravitational force

The magazine component is a little more advanced because it ejects the mass of the bullet and the bullet casing as each shot is fired. It does this by taking the initial mass of the full magazine and subtracting the mass of a cartridge multiplied by the number of shots fired, which is given by the shot counter component.
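
In formula form, the magazine mass after a given number of shots is simply its full mass minus the shot count times the cartridge mass. Here is a tiny sketch with assumed figures (roughly a 0.4 kg empty magazine and 16.3 g cartridges), not necessarily the exact values used in the SystemModeler model:

    (* magazine mass (kg) after a given number of shots, assuming a 30-round magazine *)
    emptyMagazineMass = 0.4; cartridgeMass = 0.0163; capacity = 30;
    magazineMass[shotsFired_] := emptyMagazineMass + (capacity - shotsFired) cartridgeMass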

Combining this with a simple model of a squirrel, a sensor for the position above ground, and a crash detector that stops the simulation when everything hits the ground, I now have a complete model.

Model of a squirrel on a machine gun

To get a good simulation, I need to populate the model with parameters for the different components. I will use a gray squirrel, which typically weighs around 0.5 kg (around 1.1 pounds).

Squirrel mass

Then I need some data for our machine gun. I’ll use the ubiquitous AK-47 assault rifle. Here is some basic data about this rifle:

Rifle data
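
For reference, the quantities involved can be written down as Wolfram Language definitions. The numbers below are rough published AK-47 figures that I'm assuming for illustration; they are not necessarily the exact values behind the results in this post:

    (* approximate AK-47 figures, assumed for illustration *)
    bulletMass = 0.0079;       (* kg, 7.62x39mm bullet *)
    cartridgeMass = 0.0163;    (* kg, bullet plus casing and powder *)
    muzzleVelocity = 715.;     (* m/s *)
    fireRate = 600/60.;        (* rounds per second (600 rounds per minute) *)
    barrelLength = 0.415;      (* m *)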

The thrust generated by the gun can be calculated from the mass of the bullet, the velocity of the bullet when leaving the muzzle, and how often the gun is fired:

How often gun is fired
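
In code, the thrust estimate is just the momentum carried away per second. A minimal sketch using the assumed figures from above, repeated here so the snippet stands on its own:

    (* average thrust = bullet momentum per shot x shots per second *)
    bulletMass = 0.0079; muzzleVelocity = 715.; fireRate = 600/60.;
    thrust = bulletMass muzzleVelocity fireRate   (* roughly 56 newtons *)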

I can then estimate the percentage of each firing interval that is used to actually propel the bullet through the barrel. I am going to make the assumption that the average speed in the barrel is equal to half the final speed:

Estimate percentage of each firing interval
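
With the half-of-muzzle-velocity assumption, the time the bullet spends in the barrel, and therefore the fraction of each firing interval during which thrust is actually applied, comes out like this (again with the assumed figures):

    (* time in barrel at an assumed average speed of half the muzzle velocity *)
    muzzleVelocity = 715.; barrelLength = 0.415; fireRate = 600/60.;
    timeInBarrel = barrelLength/(muzzleVelocity/2)     (* about 0.0012 s *)
    fractionOfInterval = timeInBarrel fireRate         (* about 1.2% of each 0.1 s firing interval *)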

The force during this short time can then be calculated using the thrust:

Calculate using thrust
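
The peak force follows by concentrating each shot's impulse into that short barrel time; a sketch, under the same assumptions:

    (* force while the bullet is in the barrel: impulse divided by barrel time *)
    bulletMass = 0.0079; muzzleVelocity = 715.; barrelLength = 0.415;
    timeInBarrel = barrelLength/(muzzleVelocity/2);
    forceDuringShot = bulletMass muzzleVelocity/timeInBarrel   (* on the order of 4,900 newtons *)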

Now I have all the parameters I need to make our squirrel fly on a machine gun:

Needed parameters for squirrel flying on a machine gun

Now we simulate the squirrel on the machine gun with a single bullet in the gun:

Simulate squirrel on machine gun

Seeing the height over time, I conclude that the squirrel reached a height of about 9 centimeters (3.5 inches) and experienced a flight time of only 0.27 seconds.

Squirrel reached 9cm and flight time was .27s

To put it another way:

Squirrel on top of the gun

That didn’t get the squirrel very far above the ground. The obvious solution to this? Fire more bullets from the gun. A standard magazine has 30 rounds:

Fire more bullets
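
If you don't have SystemModeler at hand, you can approximate the same scenario directly in the Wolfram Language with NDSolve and WhenEvent. The sketch below is a deliberately simplified stand-in for the SystemModeler model: each shot is an instantaneous velocity kick, there is no barrel time or ground-contact dynamics, and the parameter values are the rough figures assumed earlier, so its numbers will not exactly match the results shown here:

    (* simplified squirrel-on-an-AK-47 model: impulsive recoil every 0.1 s *)
    bulletMass = 0.0079; cartridgeMass = 0.0163; muzzleVelocity = 715.;
    fireInterval = 0.1; rounds = 30; squirrelMass = 0.5; emptyGunMass = 3.9;
    totalMass[shotsFired_] := squirrelMass + emptyGunMass + (rounds - shotsFired) cartridgeMass;
    sol = First@NDSolve[{
        h''[t] == -9.81, h[0] == 0, n[0] == 1,
        h'[0] == bulletMass muzzleVelocity/totalMass[0],   (* first shot fired at t = 0 *)
        WhenEvent[Mod[t, fireInterval] == 0,
          If[n[t] < rounds,
            {h'[t] -> h'[t] + bulletMass muzzleVelocity/totalMass[n[t]], n[t] -> n[t] + 1}]],
        WhenEvent[h[t] == 0, "StopIntegration"]},          (* stop when it lands *)
       {h, n}, {t, 0, 20}, DiscreteVariables -> {n}];
    hFun = h /. sol;
    Plot[hFun[t], {t, 0, hFun["Domain"][[1, 2]]}, AxesLabel -> {"time (s)", "height (m)"}]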

This gives a flight time of almost 5.8 seconds, and the squirrel reached the dizzying height of 17.6 meters (58 feet). Well, it would be dizzying for humans; for squirrels, it’s probably not so scary.

Squirrel with more bullets fired

Now we’re getting somewhere:

Successful flight of squirrel on machine gun

I have shown that a squirrel can fly on a machine gun. Let’s move on to a human, going directly for the standard magazine size with 30 bullets:

Human with 30 bullets

Simulation of human on gun

One gun is not enough to lift a human very far. I need more guns. Let’s do a parameter sweep with the number of guns varying from 1 to 80:

Flying craft, hopefully

Simulation of human on guns

This shows some interesting patterns. The effect from 50 guns and above is easily explained: more guns means more power, which means higher flight. The simulations with 15 and 32 guns are a little more interesting, though. Let's look a little closer at the 15-gun scenario. The red dots mark the firing instants; the guns shoot one bullet each every 0.1 seconds:

15 guns scenario

You can see that the craft manages to take off slightly, starts to fall down again, gets off another shot, but then falls farther than the height it had gained. You can also look at the velocity over time:

Velocity over time

For the first shot, the craft starts at zero velocity, standing still on the ground. It gains velocity sharply, but before it gets off the next shot, the velocity falls below zero. This means there is a net loss in velocity over each firing cycle, so the craft eventually ends up back on the ground even though there are bullets left in the gun. It then starts over from a standstill, making tiny jumps up and down.

Tiny jumps

The scenario with 32 guns exhibits yet another behavior. The start looks similar to the behavior with 15 guns, where it gains some altitude, but then falls back down because it loses net velocity during each firing cycle. But then at around 2.5 seconds it starts to gain altitude, until all the ammunition is spent at 3 seconds.

This can be explained if you look at the mass of the magazine over time:

Mass of magazine over time

You can see that at each shot, the magazine loses weight because it ejects a bullet and a bullet casing. After a while, this makes the whole craft light enough to gain altitude. This indicates there is some limit to how many bullets you can carry for each machine gun and still be able to fly, which is another interesting parameter you can vary. Let’s try to fly with the following magazine sizes for an AK-47, assuming I create my own custom magazines:

Flying with magazine sizes for an AK-47

Because more guns means more power, I will use a large number of guns, 1,000:

Larger number of guns

When using 1,000 guns, it turns out it is not a good idea to bring 165 bullets for each gun:

Not a good idea to bring 165 bullets for each gun

This is because if you bring too many bullets, the craft becomes too heavy to gain any altitude. Now that I have found a reasonable (if there can be anything reasonable about trying to fly with machine guns) number of bullets to bring along, let’s see the achieved heights when varying the number of guns. I would expect that with more guns, we will gain more height and flight time.

More guns = more height and flight time?

Here is the maximum height achieved with the different number of guns:

Maximum height achieved with different number of guns

It turns out that increasing the number of guns drastically (from 1,500 to 50 million) only gives a marginal increase in the top height achieved. This is because as the number of guns increases, the share of the human's weight carried by each gun decreases, until each gun carries little more than its own weight. The craft as a whole therefore approaches the maximum height of a single gun carrying no extra weight, and adding more guns gives no further advantage.

In closing, the best machine gun jetpack you can build with AK-47s consists of at least around 5,000 machine guns loaded with 145 bullets each.

How high you can fly using machine guns

Download this post as a Computable Document Format (CDF) file, along with its accompanying models.

The Wolfram Language for the Hour of Code
Adriana O'Brien, December 3, 2014

Get ready, get set… code! It's the time of year to get thinking about programming with the Hour of Code.

For many years, Wolfram Research has promoted and supported initiatives that encourage computation, programming, and STEM education, and we are always thrilled when efforts are taken by others to do the same. Code.org, in conjunction with Computer Science Education Week, is sponsoring an event to encourage educators and organizations across the country to dedicate a single hour to coding. This hour gives kids (and adults, too!) a taste of what it means to study computer science—and how it can actually be a creative, fun, and fulfilling process. Millions of students participated in the Hour of Code in past years, and instructors are looking for more engaging activities for their students to try. Enter the Wolfram Language.

Built into the Wolfram Language is the technology from Wolfram|Alpha that enables natural language input—and lets students create code just by writing English.

W|A using natural language
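
In a notebook you just type = followed by plain English. You can also reach Wolfram|Alpha's interpretation programmatically with the built-in WolframAlpha function; the particular query below is only an illustration and needs an internet connection:

    (* ask Wolfram|Alpha a plain-English question from code *)
    WolframAlpha["distance from the Earth to the Moon", "Result"]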

In addition to natural language understanding, the Wolfram Language also has lots of built-in functions that let you show core computation concepts with tiny amounts of code (often just one or two functions!).

Plan a city tour
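
As a flavor of what one or two functions can do, here is a tiny example (not the city-tour example pictured above): a single line that computes and plots the first thousand prime numbers.

    (* one line, two functions: the first 1,000 primes, plotted *)
    ListPlot[Prime[Range[1000]]]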

With our newly released cloud products, you can get started for free!

To support the Hour of Code, Wolfram is putting together a workshop for instructors and parents to learn more about programming activities in the Wolfram Language. The workshop takes place December 4, 4–5pm EST. Register for the free event here. During the workshop, we will introduce the basics of the Wolfram Language and walk through several resources to get students coding, as well as demo our upcoming Wolfram Programming Lab offering.

Hour of Code

Learning and experimenting with programming in the Wolfram Language doesn’t have to stop with the Hour of Code. Have students create a tweet-length program with Wolfram’s Tweet-a-Program. Compose a tweet-length Wolfram Language program and tweet it to @WolframTaP. Our Twitter bot will run your program in the Wolfram Cloud and tweet back the result.

Hello World Tweet-a-Program

Learn more about the Wolfram Language with the Wolfram Language Code Gallery. Covering a variety of fields, programming styles, and project sizes, the Wolfram Language Code Gallery shows examples of what can be done with the knowledge-based Wolfram Language—including deployment on the web or elsewhere.

Wolfram Language Code Gallery

There are other training materials and resources for learning the Wolfram Language. Find numerous free and on-demand courses available on our training site. The Wolfram Demonstrations Project is an open source database of close to 10,000 interactive apps that can be used as learning examples.

Wolfram Demonstrations Project

As sponsors of organizations like Computer-Based Math™, which is working toward building a completely new math curriculum with computer-based computation at its heart, and the Mathematica Summer Camp, where high school students with limited programming experience learn to code using Mathematica, we are acutely aware of how important programming is in schools today.

Stephen Wolfram with a student

Congrats on getting your student or child involved with the Hour of Code, and we look forward to seeing what they create!

Extending Van Gogh's Starry Night with Inpainting
Piotr Wendykier, December 1, 2014

Can computers learn to paint like Van Gogh? To some extent—definitely yes! For that, akin to human imitation artists, an algorithm should first be fed the original artist's creations, and then it will be able to generate a machine take on them. How well? Please judge for yourself.

Second prize in the ZEISS photography competition

Recently the Department of Engineering at the University of Cambridge announced the winners of the annual photography competition, “The Art of Engineering: Images from the Frontiers of Technology.” The second prize went to Yarin Gal, a PhD student in the Machine Learning group, for his extrapolation of Van Gogh’s painting Starry Night, shown above. Readers can view this and similar computer-extended images at Gal’s website Extrapolated Art. An inpainting algorithm called PatchMatch was used to create the machine art, and in this post I will show how one can obtain similar effects using the Wolfram Language.

The term “digital inpainting” was first introduced in the “Image Inpainting” article at the SIGGRAPH 2000 conference. The main goal of inpainting is to restore damaged parts in an image. However, it is also widely used to remove or replace selected objects.

In the Wolfram Language, Inpaint is a built-in function. The region to be inpainted (or retouched) can be given as an image, a graphics object, or a matrix.

Using Inpaint on Abraham Lincoln image
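
Here is a minimal sketch of what a call to Inpaint looks like. The built-in test image and the hand-made mask below are illustrative assumptions, not the Lincoln example shown above; the white pixels in the mask mark the region to be retouched:

    (* retouch a scratch-like region specified by a mask image *)
    img = ExampleData[{"TestImage", "House"}];
    mask = Image[Graphics[{White, Thickness[0.02], Line[{{0, 0}, {1, 1}}]},
        Background -> Black, PlotRange -> {{0, 1}, {0, 1}}]];
    mask = ImageResize[mask, ImageDimensions[img]];
    Inpaint[img, mask]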

There are five different algorithms available in Inpaint that one can select using the Method option: “Diffusion,” “TotalVariation,” “FastMarching,” “NavierStokes,” and “TextureSynthesis” (default setting). “TextureSynthesis,” in contrast to other algorithms, does not operate separately on each color channel and it does not introduce any new pixel values. In other words, each inpainted pixel value is taken from the parts of the input image that correspond to zero elements in the region argument. In the example below, it is clearly visible that these properties of the “TextureSynthesis” algorithm make it the method of choice for removing large objects from an image.

TextureSynthesis

The “TextureSynthesis” method is based on the algorithm described in “Image Texture Tools,” a PhD thesis by P. Harrison. This algorithm is an enhanced best-fit approach introduced in 1981 by D. Garber in “Computational Models for Texture Analysis and Texture Synthesis.” Parameters for the “TextureSynthesis” algorithm can be specified via two suboptions: “NeighborCount” (default: 30) and “MaxSamples” (default: 300). The first parameter defines the number of nearby pixels used for texture comparison, and the second parameter specifies the maximum number of samples used to find the best-fit texture.
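
The suboptions are passed together with the method name; for example (the particular values 50 and 500 below are arbitrary choices for illustration, and the test image and mask are the same stand-ins as in the sketch above):

    (* explicitly setting the "TextureSynthesis" suboptions *)
    img = ExampleData[{"TestImage", "House"}];
    mask = ImageResize[
       Image[Graphics[{White, Thickness[0.02], Line[{{0, 0}, {1, 1}}]},
         Background -> Black, PlotRange -> {{0, 1}, {0, 1}}]], ImageDimensions[img]];
    Inpaint[img, mask,
      Method -> {"TextureSynthesis", "NeighborCount" -> 50, "MaxSamples" -> 500}]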

Let’s go back to the extrapolation of Van Gogh’s painting. First, I import the painting and remove the border.

Extrapolation of Starry Night

Next, I need to extend the image by padding it with white pixels to generate the inpainting region.

Extend image padding on Starry Night

Now I can extrapolate the painting using the “TextureSynthesis” method.

Extrapolation of Starry Night using TextureSynthesis
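
Put together, the padding and extrapolation steps look roughly like the self-contained sketch below. A built-in test image stands in for the painting, and the amount of padding is an arbitrary choice; with the actual Starry Night image imported in its place, the same code reproduces the idea:

    (* extend an image outward by inpainting a white border *)
    painting = ExampleData[{"TestImage", "House"}];    (* stand-in for the painting *)
    pad = 100;                                         (* border width in pixels, an arbitrary choice *)
    padded = ImagePad[painting, pad, White];
    region = ImagePad[ConstantImage[0, ImageDimensions[painting]], pad, White];
    Inpaint[padded, region, Method -> "TextureSynthesis"]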

Not too bad. Different effects can be obtained by changing the values of the “NeighborCount” and “MaxSamples” suboptions.

Different effects by changing values

Our readers should experiment with other parameter values and artworks.*

Experiment with other parameters and artworks

This, perhaps, would make an original gift for the upcoming holidays. A personal artwork or a photo would make a great project. Or just imagine surprising your child with an improvisation of their own drawing, like the one above, based on a craft by a fourteen-year-old girl. It's up to your imagination. Feel free to post your experiments on Wolfram Community!

Download this post as a Computable Document Format (CDF) file.

*If you don’t already have Wolfram Language software, you can try it for free via Wolfram Programming Cloud or a trial of Mathematica.

Benedict Cumberbatch Can Charm Humans, but Can He Fool a Computer?
Rita Crook, November 26, 2014

The Imitation Game, a movie portraying the life of Alan Turing (who would have celebrated his 100th birthday on Mathematica's 23rd birthday—read our blog post), was released this week, and we've been looking forward to it. Turing machines were one of the focal points of the movie, and we launched a prize in 2007 to determine whether the 2,3 Turing machine is universal.

So of course, Cumberbatch's promotional video in which he impersonates other beloved actors reached us as well, and it got me wondering: could Mathematica's machine learning capabilities recognize his voice, or could he fool a computer too?

I personally can't stop myself from chuckling uncontrollably while watching his impressions; however, I wanted to look beyond the entertainment factor.

So I started wondering: Is he actually good at doing these impressions? Or are we all just charmed by his persona?

Is my psyche just being fooled by the meta-language, perhaps? If we take the data of pure voices, does he actually cut the mustard in matching these?

To determine the answer 10 years ago, we would have needed to stroll the streets playing 300 people audio snippets from the James Bond movies, The Shining, and Batman, alongside Cumberbatch's impression snippets, and then survey whether those people were fooled.

But no need, if you have your Mathematica handy!

With Mathematica‘s machine learning capabilities, it’s possible to classify sample voice snippets easily, which means we can determine whether Benedict’s impressions would be able to fool a computer. So I set myself the challenge of building a decent enough database of voice samples, plus I took snippets from each of Benedict’s impression attempts, and I let Mathematica do its magic.

We built a path to each person’s snippet database, which Mathematica exported for analysis:

Classify sample voice snippets

We imported all of the real voices:

Import real voices

The classifier was trained simply by providing the associated real voices to Classify; in the interest of speed, a pre-trained ClassifierFunction was loaded from cfActorWDX.wdx:

Classifier was trained simply by providing the associated real voices to Classify
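
The overall shape of that training step looks something like the sketch below. The file layout, the crude fixed-length feature vector, and the reduced set of actors are all assumptions for illustration; this is not the actual pipeline behind the results in this post:

    (* turn a snippet into a fixed-length numeric feature vector (very crude) *)
    toFeatures[file_] := Take[Flatten[Import[file, "Data"]], 20000];

    (* hypothetical file layout: one folder of WAV snippets per actor *)
    actors = {"BenedictCumberbatch", "AlanRickman", "SeanConnery"};
    trainingData = Flatten[Table[
        toFeatures[f] -> actor,
        {actor, actors}, {f, FileNames["*.wav", actor]}]];

    cf = Classify[trainingData];

    (* classify an impression snippet and look at the class probabilities *)
    cf[toFeatures["impressions/connery_impression.wav"], "Probabilities"]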

My audio database needed to include snippets of Benedict’s own voice, snippets of the impersonated actors’ own voices, and the impressions from Cumberbatch. The sources for the training were the following: Alan Rickman, Christopher Walken, Jack Nicholson, John Malkovich, Michael Caine, Owen Wilson, Sean Connery, Tom Hiddleston, and Benedict Cumberbatch. I used a total of 560 snippets, but of course, the more data used, the more reliable the results. The snippets needed to be as “clean” as possible (no laughter, music, chatter, etc. in the background).

The snippets all needed to be exactly the same length (3.00 seconds), which we enforced with this function in the Wolfram Language:

Making sure snippets are all same length

Some weren’t single-channel audio files, so we needed to exclude this factor as an additional feature to optimize our results during the export stage:

Excluding single-channel audio files

Thanks go to Martin Hadley and Jon McLoone for the code.

Drum-roll… time for the verdict!

I have to break everyone’s heart now, and I’m not sure I want to be the one to do it… so I will “blame” Mathematica, because machine learning could indeed mostly tell the difference between the actors’ real voices and the impressions (bar two).

As the results below reveal, Mathematica provides 97–100% confidence on the impressions tested:

Mathematica provides 97-100% confidence on the impressions tested

For most impressions, there is a very small reported probability of any classification other than Benedict Cumberbatch or Alan Rickman.

Probabilities

 

It might be worth noting that Rickman, Connery, and Wilson all have a slow rhythm to their speech, with many pauses (especially noticeable in the snippets I used), which could have confused the algorithm.

Sad Benedict Cumberbatch

Now it’s time to be grown up about this, and not hold it against Benedict. He is still a beloved charmer, after all.

My admiration for him lives on, and I look forward to seeing him in The Imitation Game!

Download the accompanying code for this blog post as a Computable Document Format (CDF) file.

Deck the Halls with Lines of Coding
Wolfram Blog Team, November 24, 2014

Cyber savings header

Thanksgiving is just around the corner, and that means you only have five weeks left to knock out your holiday shopping. Never fear: Wolfram is delivering amazing deals to customers across the globe, including North and South America, Australia, and parts of Asia and Africa, to inspire a whole new year of computational creativity.

Give the perfect gift to your high school or college student with 20% off of Mathematica Student Edition, or treat yourself to the same discount on Mathematica Home Edition.

Mathematica discounts

Or, for the engineering hobbyist or recreational system designer on your list, get 20% off of SystemModeler Student and Home Editions.

SystemModeler discount

To make things extra merry, we are offering a Cyber Monday deal. US and Canada shoppers will see these discounts increase from 20% to 25% off, for even greater savings!

This holiday season, you can also get an extra three months of Wolfram|Alpha Pro free with a one-year subscription! View Step-by-step solutions for your math and chemistry queries, upload and analyze your own data and files, get extended computation time, and interact with plots and graphs—as well as receive access to Wolfram Problem Generator, where you’ll have unlimited practice problems with Step-by-step solutions.

Wolfram|Alpha Pro discount

Want to give the gift of W|A Pro to a special someone else instead? Send it in the form of an electronic gift card.

These special offers are available until December 31. If after all that you still have more items on your holiday list to check off, visit our online store for exclusive Wolfram merchandise! Happy Holidays!

Removing Haze from a Color Photo Image Using the Near Infrared with the Wolfram Language
Matthias Odisio, November 21, 2014

For most of us, taking bad pictures is incredibly easy. Whether as a Band-Aid or a true remedy, digital post-processing can go as far as altering the photographed scene itself. Say you're trekking through the mountains taking photos of the horizon, or you're walking down the street and catch a beautiful perspective of the city, or it's finally the right time to put the new, expensive phone camera to good use and capture the magic of this riverside… Just why do all the pictures look so bad? They're all foggy! It's not that you're a bad photographer—OK, maybe you are—but that you've stumbled on a characteristic problem in outdoor photography: haze.

What is haze? Technically, haze is scattered light: photons bumped around by the molecules in the air and deprived of their original color, which they got by bouncing off the objects you are trying to see. The problem gets worse with distance: the more the light has to travel, the more it gets scattered around, and the more the scene takes on that foggy appearance.

What can we do? What can possibly help our poor photographer? Science, of course.

Wolfram recently attended and sponsored the 2014 IEEE International Conference on Image Processing (ICIP), which ended October 30 in Paris. It was a good occasion to review the previous years’ best papers at the conference, and we noticed an interesting take on the haze problem proposed by Chen Feng, Shaojie Zhuo, Xiaopeng Zhang, Liang Shen, and Sabine Süsstrunk [1]. Let’s give their method a try and implement their “dehazing” algorithm.

The core idea behind the paper is to leverage the fact that light's susceptibility to scattering depends on its wavelength. Light with a longer wavelength, such as red light, is more likely to make it around the dust, the smog, and all the other particles in the air than light of shorter wavelengths, like green or blue. Therefore, the red channel of an image carries better information about the non-hazy content of the scene.

But what if we could go even further? What prevents us from using the part of the spectrum slightly beyond the visible light? Nothing really—save for the fact we need an infrared camera.

Provided we are well equipped, we can then use the four channels of data (near infrared, red, green, and blue) to estimate the haze color and distribution and proceed to remove it from our image.

RGB, IR removal

In order to get some sensible assessments, we need a sound model of how an image is formed. In a general haze model, the content of each pixel is composed of two parts:

  • The light reflected by the objects in the scene (which will be called J)
  • The light scattered by the sky (A)

It is a good approximation to say that the “color of the air” A is constant for a specific place and time, while the “real color” J is different for each pixel. Depending on the amount of air the light had to travel through, a fraction (t) of the real color is transmitted to the camera, and the remaining portion (1-t) is replaced by scattered light.

We can summarize these concepts in a single haze equation:

IRGB = Jt + A(1 – t)     (1)

We need to determine J, t, and A. Let’s first estimate the global air-light color A. For a moment we will assume that portions of the image are extremely hazed (no transmission, i.e. t = 0). Then we can estimate the color A simply from the pixel values of those extremely hazed regions.

In the image below, a mouse click yields A = the RGB color #425261.

Mouse-click color yield

However, our assumption that the transmission is zero in the haziest regions is clearly not verified, as we can always distinguish distant objects through the haze. This means that for images where haze is never intense, it is not possible to pick A with a click of the mouse, and we have to resort to some image processing to see how we can produce a solid estimation for images with all types of haze.

Let’s say first that it has proven difficult to obtain good dehazing results on our example images when reproducing the ICIP paper’s method for estimating the air-light color. As an alternative method, we estimate the air light color using the concept of dark channel.

The so-called dark channel prior is based on the observation that among natural images, it is almost always the case that within the vicinity of each pixel, one of the three channels (red, green, or blue) is much darker than the others, mainly because of the presence of shadows, dark surfaces, and colorful objects.

If for every pixel at least one channel must be naturally dark, we can assume that wherever this condition does not hold, it is due to the presence of scattered light—that is, the hazed region we're looking for. So we look for a good estimate of A by intersecting the brightest pixels of the image (maximum haze or illumination) with the region defined by high values in the dark channel (heaviest haze).

Highest value in the dark channel
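
In code, the dark channel is simply a per-pixel minimum over the three color channels followed by a local minimum filter. A minimal sketch; the 7-pixel window radius is an assumption of ours, not necessarily the value used for the results shown here:

    (* dark channel: min over R, G, B at each pixel, then a local min filter *)
    darkChannel[img_, radius_: 7] :=
      MinFilter[Image[Map[Min, ImageData[img], {2}]], radius]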

We extract the positions of the brightest pixels in the dark channel images, extract the corresponding pixel values in the hazed image, and finally cluster these pixel values:

Cluster pixel values

The selected pixels marked in red below will be clustered; here they all belong to a single region, but it may not be the case on other images:

Single region

We are looking for the cluster with the highest average luminance:

Cluster with highest average luminance

This is our estimate of the air-light color:

Estimate of air-light color
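
Putting the selection and clustering steps together looks roughly like the sketch below. The 0.1% brightness cutoff is an assumed threshold, hazyImage stands for the input photograph, and the darkChannel helper from the earlier sketch is repeated so the snippet runs on its own:

    (* estimate the air-light color A from the brightest dark-channel pixels *)
    darkChannel[img_, radius_: 7] :=
      MinFilter[Image[Map[Min, ImageData[img], {2}]], radius];

    estimateAirLight[hazyImage_] := Module[{dc, cutoff, positions, candidates, clusters},
      dc = ImageData[darkChannel[hazyImage]];
      cutoff = Quantile[Flatten[dc], 0.999];                  (* brightest 0.1% of the dark channel *)
      positions = Position[dc, x_ /; x >= cutoff, {2}];
      candidates = Extract[ImageData[hazyImage], positions];  (* corresponding RGB values *)
      clusters = FindClusters[candidates];
      First[MaximalBy[Mean /@ clusters, Mean]]]               (* cluster mean with highest luminance *)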

Looking once more at equation (1), we've made some progress, because we are only left with computing the transmission t and the haze-free pixel value J for each pixel:

Computing the transmission t and the haze-free pixel value J

Since we choose an optimization approach to solve this problem, we first compute coarse estimates, t0 and J0, that will serve as initial conditions for our optimization system.

On to finding a coarse estimate for the transmission, t0. Here's the trick, and an assumption: if the transmission does not change too much within a small region of the image (which we call Ω), we can treat t0 as locally constant. Dividing both sides of equation (1) by A and applying the local minimum operator min both over the color channels and over the pixels in each region Ω yields:

Coarse estimate for the transmission t0

But the remaining min term, the local minimum of Jk/Ak over the channels and over Ω, is essentially the dark channel of the haze-free image J; and since Ak is a positive number, we infer that this term is practically zero everywhere, given our prior assumption that natural images have at least one almost-zero channel among the pixels of any region. Using this simplification yields:

Yield of simplification

This is the t0 image. The darker the image area, the hazier it is assumed to be:

The darker the image area, the hazier it is assumed to be
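
Expressed in code, the coarse transmission map is one minus the dark channel of the input image normalized by the air-light color. A sketch, assuming airLight is the estimate from above, hazyImage is the input photograph, and the same assumed 7-pixel window:

    (* coarse transmission: t0 = 1 - darkChannel(IRGB / A) *)
    normalized = ImageApply[#/airLight &, hazyImage];
    t0 = ImageSubtract[
       ConstantImage[1, ImageDimensions[hazyImage]],
       MinFilter[Image[Map[Min, ImageData[normalized], {2}]], 7]]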

Now the real transmission map cannot be that “blocky.” We’ll take care of that in a second. In the ICIP 2013 paper, there is another clever process to make sure we keep a small amount of haze so that the dehazed image still looks natural. This step involves information from the near-infrared image; we describe this step in the companion notebook that you can download at the bottom of this post. Here is an updated transmission map estimate after this step:

Updated transmission map estimate

To further refine this estimate by removing the unwanted block artifacts, we apply a technique named guided filtering. Describing the details of a guided filter is beyond the scope of this blog post. Let's just say that here, guided filtering of the transmission map t0, using the original hazed image as the guide, lets us realign the gradient of t0 with the gradient of the hazed image by jointly processing the filtered image and the guide image—a desired property that was lost due to the blocking artifacts. The function ImageGuidedFilter is defined in the companion notebook at the end of this post.

Guided filtering

As too much dehazing would not look realistic, and too little dehazing would look too, well, hazed, we adjust the transmission map t0 by stretching it to run from 0.1 to 0.95:

Update transmission map
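
That adjustment is a simple rescaling of the pixel values of t0 (the map from the sketches above); for example:

    (* stretch the transmission map so its values run from 0.1 to 0.95 *)
    values = ImageData[t0];
    t0 = Image[Rescale[values, {Min[values], Max[values]}, {0.1, 0.95}]];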

Thanks to our estimates for the air-light color A and the transmission map t0, another manipulation of equation (1) gives us the estimate for the dehazed image J0:

Estimate for dehazed image J0
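
Inverting the haze model pixel by pixel gives this coarse dehazed estimate. A sketch, again assuming hazyImage, airLight, and t0 from the earlier snippets:

    (* invert equation (1): J0 = (IRGB - A (1 - t0)) / t0 *)
    J0 = ImageApply[(#1 - airLight (1 - #2))/#2 &, {hazyImage, t0}]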

Estimate for the dehazed image J0

You can compare with the original image just by positioning your mouse on top of the graphic:

It’s a good start, but a flat subtraction may be too harsh for certain areas of the image or introduce some undesirable artifacts. In this last part, we will use some optimization techniques to try to reconcile this gap and ask for the help of the infrared image to keep a higher level of detail even in the most hazed region.

The key is in the always useful Bayes’ rule for inference. The question we are asking ourselves here is which pair of t and J is the most likely to produce the observed images IRGB and INIR (the near-infrared image).

In the language of probability, we want to calculate the joint distribution
Joint distribution

Using Bayes' theorem, we rewrite it as:

Rewrite

And simplify it assuming that the transmission map t and the reflectance map J are uncorrelated, so their joint probability is simply the product of their individual ones:

Joint probability is simply the product of their individual ones

In order to write this in a form that can be optimized, we now assume that each probability term is distributed according to:

Probability formula

That is, it peaks in correspondence with the "best candidate" x-tilde. This allows us to exploit one of the properties of the exponential function—e^(-a) e^(-b) e^(-c) ⋯ = e^(-(a + b + c + ⋯))—and, provided that the addends in the exponent are all positive, to move from the maximization of a probability to the minimization of a polynomial.

We are now left with the task of finding the “best candidate” for each term, so let’s dig a bit into their individual meaning guided by our knowledge of the problem.

The first term is the probability to have a given RGB image given specific t and J. As we are working within the framework of equation (1)—the haze model IRGB = Jt + A(1 – t)—the natural choice is to pick:

||IRGB – (Jt + A(1 – t))||

The second term relates the color image to the infrared image. We want to leverage the infrared channel for details about the underlying structure, because it is in the infrared image that the small variations are less susceptible to being hidden by haze. We do this by establishing a relationship between the gradients (the 2D derivatives) of the infrared image and the reconstructed image:

||▽J – ▽INIR||

This relation should take into account the distance between the scene element and the camera—being more important for higher distances. Therefore we multiply it by a coefficient inversely related to the transmission:

Multiply it by a coefficient inversely related to the transmission

The last two terms are the transmission and reflection map prior probabilities. This corresponds to what we expect to be the most likely values for each pixel before any observation. Since we don’t have any information in this regard, a safe bet is to assume them equal to a constant, and since we don’t care about which constant, we just say that their derivative is zero everywhere, so the corresponding terms are simply:

||▽t||

And:

||▽J||

Putting all these terms together brings us to the final minimization problem:

Final minimization problem

Where the regularization coefficients λ1, λ2, λ3 and the exponents α and β are taken from the ICIP paper.

To solve this problem, we insert the initial conditions t0 and J0, move around a bit, and see if we are doing better. If so, we use the new images (let's call them t1 and J1) for a second step and calculate t2 and J2. After many iterations, when the new images are not much better than those of the previous step, we stop and extract the final result.

This new image J tends to be slightly darker than the original one; in the paper, a technique called tone mapping is applied to correct for this effect, where the channel values are rescaled in a nonlinear fashion to adjust the illumination:

V′ = V^γ

During our experiments, we found instead that we were better off applying the tone mapping first, as it helped during the optimization.

To help us find the correct value for the exponent γ, we can look at the difference between the low-haze (that is, high-transmission) parts of the original image IRGB and the reflectance map J0. We want to find the value of γ that makes this difference as small as possible and adjust J0 accordingly:

Value for gamma

We now implement a simplified version of the steepest descent algorithm to solve the optimization problem of equation (6). The function IterativeSolver is defined in the companion notebook at the end of this post.

IterativeSolver

When that optimization is done, our final best guess for the amount of haze in the image is:

Best guess amount of haze

And finally, you can see the unhazed result below. To compare it with the original, hazy image, just position your mouse on top of the graphics:

Unhazed result

We encourage you to download the companion CDF notebook to engage deeper in dehazing experiments.

Let's now leave the peaceful mountains and the award-winning dehazing method from ICIP 2013 and move to Paris, where ICIP 2014 just took place. Wolfram colleagues staffing our booth at the conference confirmed that dehazing (and air pollution) is still an active research topic. Attending such conferences has proven to be an excellent opportunity to demonstrate how the Wolfram Language and Mathematica 10 can facilitate research in image processing, from investigation and prototyping to deployment. And we love to interact with experts so we can continue to develop the Wolfram Language in the right direction.

Download this post as a Computable Document Format (CDF) file.

References:

[1] C. Feng, S. Zhuo, X. Zhang, L. Shen, and S. Süsstrunk. “Near-Infrared Guided Color Image Dehazing,” IEEE International Conference on Image Processing, Melbourne, Australia, September 2013 (ICIP 2013).

[2] K. He, J. Sun, X. Tang. “Single Image Haze Removal Using Dark Channel Prior,” IEEE Conference on Computer Vision and Pattern Recognition, Miami, Florida, June 2009 (CVPR’09).

Images taken from:

[3] L. Schaul, C. Fredembach, and S. Süsstrunk. “Color Image Dehazing Using the Near-Infrared,” IEEE International Conference on Image Processing, Cairo, Egypt, November 2009 (ICIP’09).

2014 Wolfram Innovator Award Winners
Emily Suess, November 20, 2014

Now in its fourth year, the Wolfram Innovator Awards are an established tradition and one of our favorite parts of the annual Wolfram Technology Conference. This year, Stephen Wolfram presented seven individuals with the award. Join us in celebrating the innovative ways the winners are using Wolfram technologies to advance their industries and fields of research.

Wolfram Innovator Award
Candidates are nominated by Wolfram employees and evaluated by a panel of experts to determine the winners. We are excited to announce the US recipients of the 2014 Innovator Awards:

Prof. Richard J. Gaylord
Professor of Materials Science and Engineering Emeritus, University of Illinois

Mark Kotanchek
CEO, Evolved Analytics LLC

John Michopoulos
Head of Computational Multiphysics Systems Laboratory, Naval Research Laboratory

Rodrigo Murta
Retail Intelligence Manager, St Marche Supermercados

Bruce Torrence
Professor of Mathematics, Randolph-Macon College

Yves Papegay
Research Scientist, French National Institute for Research in Computer Science and Control

Chad Slaughter
System Architect, Enova Financial

Earlier this year, European Innovator Award winners were announced at the European Wolfram Technology Conference in Frankfurt, Germany:

Dr. János Karsai
Associate Professor, Department of Analysis, University of Szeged

Frank Scherbaum
Professor, Institute of Earth and Environmental Sciences, University of Potsdam

Congratulations to all of our 2014 Wolfram Innovator Award winners! Read more about our deserving recipients and their accomplishments.

Fractal Fun: Tweet-a-Program Mandelbrot Code Challenge
Wolfram Blog Team, November 17, 2014

This week Wolfram will be celebrating Benoit Mandelbrot's birthday and his contributions to mathematics by holding a Tweet-a-Program challenge. In honor of Mandelbrot, tweet us your favorite fractal-themed lines of Wolfram Language code. Then, as with our other challenges, we'll use the Wolfram Language to randomly select winning tweets (along with a few of our favorites) to pin, retweet, and share with our followers. If you win, we'll send you a free Wolfram T-shirt!

In Tweet-a-Program’s first few exciting months, we’ve already seen a number of awesome fractal examples like these:

First fractal image

Second fractal image

Third fractal image
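
If you want a starting point of your own, the Wolfram Language has a built-in MandelbrotSetPlot function, so a zoomed view of the set fits comfortably in a tweet. Here is an illustrative example (not one of the submissions pictured above); the particular coordinates are just an arbitrary zoom region:

    (* a tweet-sized zoom into the Mandelbrot set *)
    MandelbrotSetPlot[{-0.75 - 0.25 I, -0.65 - 0.15 I}]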

To win, tweet your submissions to @WolframTaP by the end of the week (11:59pm PDT on Sunday, November 23). So that you don’t waste valuable code space, we don’t require a hashtag with your submissions. However, we do want you to share your code with your friends by retweeting your results with hashtag #MandelbrotWL.

We can’t wait to see what you come up with!
