The Wolfram|Alpha applications are universal apps, and utilize Windows’ distinct style while bringing to Windows users some of the features people have come to expect from Wolfram|Alpha: a custom keyboard for easily entering queries, a large selection of examples to explore Wolfram|Alpha’s vast knowledgebase, history to view your recent queries, favorites so you can easily answer your favorite questions, the ability to pin specific queries to the start menu, and more.

We’re also happy to announce the release of several of our Course Assistant Apps on Windows 8.1 devices:

- Algebra: Windows Phone Store or Windows Store
- Calculus: Windows Phone Store or Windows Store
- Multivariable Calculus: Windows Phone Store or Windows Store
- Linear Algebra: Windows Phone Store or Windows Store
- Pre-Algebra: Windows Phone Store or Windows Store
- Precalculus: Windows Phone Store or Windows Store
- Statistics: Windows Phone Store or Windows Store

These apps also feature our custom keyboards for the quick entry of your homework problems. View Step-by-step solutions to learn how to solve complex math queries, plot 2D or 3D functions, explore topics applicable to your high school and college math courses, and much more.

If you need some help getting into the holiday spirit, check out these examples:

To win, tweet your submissions to @WolframTaP by 11:59pm PDT on Thursday, January 1. So that you don’t waste valuable code space, we don’t require a hashtag with your submissions. However, we do encourage you to share your code with your friends by retweeting your results with hashtag #HolidayWL.


*Machine gun with a squirrel on top*

Let’s start smaller than a human, with a gray squirrel from the original story. Put this squirrel on a machine gun, fire it downward at the full automatic setting, and see what happens. I’ll be using Wolfram *SystemModeler* to model the dynamics of this system.

*Model of a machine gun*

The image above shows the model of a machine gun. It contains bullet and gun components that are masses that are influenced by gravity. They are easily constructed by combining built-in mechanical components:

*Mass influenced by the Earth’s gravitational force*

The magazine component is a little more advanced because it ejects the mass of the bullet and the bullet casing as each shot is fired. It does this by taking the initial mass of the full magazine and subtracting the mass of a cartridge multiplied by the number of shots fired, which is given by the shot counter component.
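The magazine’s bookkeeping is simple enough to write down directly; here it is sketched in Python (the example masses are rough figures for a 7.62×39 cartridge and a loaded AK-47 magazine, assumed rather than taken from the post):

```python
def magazine_mass(initial_mass_kg, cartridge_mass_kg, shots_fired):
    """Remaining magazine mass after each shot ejects one bullet and casing."""
    return initial_mass_kg - cartridge_mass_kg * shots_fired

# e.g. a full 30-round magazine of roughly 0.82 kg, ~16 g per cartridge
print(round(magazine_mass(0.82, 0.016, 10), 3))   # 0.66 kg after 10 shots
```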

Combining this together with a simple model of a squirrel, a sensor for the position above ground, and a crash detector that stops the simulation when everything crashes on the ground, I now have a complete model.

To get a good simulation, I need to populate the model with parameters for the different components. I will use a gray squirrel, which typically weighs around 0.5 kg (around 1.1 pounds).

Then I need some data for our machine gun. I’ll use the ubiquitous AK-47 assault rifle. Here is some basic data about this rifle:

The thrust generated by the gun can be calculated from the mass of the bullet, the velocity of the bullet when leaving the muzzle, and how often the gun is fired:

I can then estimate the percentage of each firing interval that is used to actually propel the bullet through the barrel. I am going to make the assumption that the average speed in the barrel is equal to half the final speed:

The force during this short time can then be calculated using the thrust:
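The Wolfram Language computations appear only in the original post, so here is the same arithmetic sketched in Python, using commonly quoted AK-47 figures (bullet ≈ 7.9 g, muzzle velocity ≈ 715 m/s, 600 rounds per minute, barrel ≈ 415 mm; these are my assumptions, not values taken from the post):

```python
bullet_mass = 0.0079      # kg, assumed 7.62x39 bullet mass
muzzle_velocity = 715.0   # m/s, assumed
fire_rate = 600 / 60.0    # shots per second at full automatic
barrel_length = 0.415     # m, assumed

# Average thrust: momentum per bullet times firing frequency
thrust = bullet_mass * muzzle_velocity * fire_rate        # ~56 N

# Time spent in the barrel, assuming the average in-barrel speed
# is half the muzzle speed
time_in_barrel = barrel_length / (muzzle_velocity / 2)    # ~1.2 ms
duty = time_in_barrel * fire_rate    # fraction of each firing interval

# Force during that short propulsion phase
force = thrust / duty                                     # ~4.9 kN
print(round(thrust, 1), "N;", round(force), "N")
```

For comparison, a 0.5 kg squirrel weighs only about 5 N, so on paper the average thrust is ample.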

Now I have all the parameters I need to make our squirrel fly on a machine gun:

Now we simulate the squirrel on the machine gun with a single bullet in the gun:

Seeing the height over time, I conclude that the squirrel reached a height of about 9 centimeters (3.5 inches) and experienced a flight time of only 0.27 seconds.
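As a sanity check on those numbers, a back-of-envelope momentum calculation agrees reasonably well (the rifle mass is my rough assumption; the model’s exact parameters aren’t reproduced here):

```python
g = 9.81                                        # m/s^2
bullet_mass, muzzle_velocity = 0.0079, 715.0    # assumed AK-47 figures
craft_mass = 0.5 + 3.6                          # squirrel + rifle, kg (rough)

recoil_v = bullet_mass * muzzle_velocity / craft_mass   # ~1.4 m/s upward
height = recoil_v ** 2 / (2 * g)                        # peak height, ~0.1 m
air_time = 2 * recoil_v / g                             # up and down, ~0.28 s
print(round(height * 100, 1), "cm;", round(air_time, 2), "s")
```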

To put it another way:

That didn’t get the squirrel very far above the ground. The obvious solution to this? Fire more bullets from the gun. A standard magazine has 30 rounds:

This gives a flight time of almost 5.8 seconds, and the squirrel reached the dizzying height of 17.6 meters (58 feet). Well, it would be dizzying for humans; for squirrels, it’s probably not so scary.

Now we’re getting somewhere:

I have shown that a squirrel can fly on a machine gun. Let’s move on to a human, going directly for the standard magazine size with 30 bullets:

One gun is not enough to lift a human very far. I need more guns. Let’s do a parameter sweep with the number of guns varying from 1 to 80:

This shows some interesting patterns. The effect from 50 guns and above can be easily explained. More guns means more power, which means higher flight. The simulations with 15 and 32 guns are a little more interesting, though. Let’s look a little closer at the 15 guns scenario. The red dots show the firing interval, meaning the guns shoot one bullet each every 0.1 seconds:

You can see that the craft manages to take off slightly, starts to fall down again, gets off another shot, but then falls farther than the height it had gained. You can also look at the velocity over time:

For the first shot, the craft starts at a zero velocity standing still on the ground. It gains velocity sharply, but before getting off the next shot, the velocity falls below zero. This means that during one firing cycle, there is a net loss in velocity, resulting in the eventual falling down, even though there are bullets left in the gun. It could then start over from standstill on the ground, doing tiny jumps up and down.
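The firing-cycle dynamics just described—recoil kick, gravity loss, mass ejection—can be reproduced with a simple discrete-time simulation. Here is a Python sketch standing in for the SystemModeler model (all masses and rates are rough assumptions, so the numbers will only qualitatively match the plots):

```python
g = 9.81
bullet_mass, muzzle_velocity = 0.0079, 715.0   # assumed AK-47 figures
cartridge_mass = 0.016                         # bullet + casing, kg (rough)
gun_mass, human_mass = 3.6, 70.0               # rifle and pilot, kg (rough)
dt = 0.1                                       # firing interval, 600 rounds/min

def max_height(n_guns, rounds):
    """Peak altitude for a pilot strapped to n_guns rifles firing in sync."""
    mass = human_mass + n_guns * (gun_mass + rounds * cartridge_mass)
    v = h = best = 0.0
    for _ in range(rounds):
        v += n_guns * bullet_mass * muzzle_velocity / mass  # recoil kick
        mass -= n_guns * cartridge_mass                     # eject bullet + casing
        h, v = h + v * dt - 0.5 * g * dt**2, v - g * dt     # coast one interval
        if h <= 0:                 # fell back: restart from standstill
            h = v = 0.0
        best = max(best, h)
    # free flight after the last round is spent
    return max(best, h + max(v, 0.0) ** 2 / (2 * g))

print(round(max_height(15, 30), 3), round(max_height(50, 30), 2))
```

With 15 guns the craft only manages tiny hops, while 50 guns climb steadily as the magazines lighten, matching the behavior described above.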

The scenario with 32 guns exhibits yet another behavior. The start looks similar to the behavior with 15 guns, where it gains some altitude, but then falls back down because it loses net velocity during each firing cycle. But then at around 2.5 seconds it starts to gain altitude, until all the ammunition is spent at 3 seconds.

This can be explained if you look at the mass of the magazine over time:

You can see that at each shot, the magazine loses weight because it ejects a bullet and a bullet casing. After a while, this makes the whole craft light enough to gain altitude. This indicates there is some limit to how many bullets you can carry for each machine gun and still be able to fly, which is another interesting parameter you can vary. Let’s try to fly with the following magazine sizes for an AK-47, assuming I create my own custom magazines:

Because more guns means more power, I will use a large number of guns, 1,000:

When using 1,000 guns, it turns out it is not a good idea to bring 165 bullets for each gun:

This is because if you bring too many bullets, the craft becomes too heavy to gain any altitude. Now that I have found a reasonable (if there can be anything reasonable about trying to fly with machine guns) number of bullets to bring along, let’s see the achieved heights when varying the number of guns. I would expect that with more guns, we will gain more height and flight time.

Here is the maximum height achieved with the different number of guns:

It turns out that increasing the number of guns drastically (from 1,500 to 50 million) only gives a marginal increase in the top height achieved. This is because as the number of guns increases, the part of the human carried by each gun decreases, until each gun only carries its own weight plus very little additional mass. This makes the total craft approach the same maximum height as a single gun without any extra weight, and adding more guns will give no more advantage.

In closing, the best machine gun jetpack you can build with AK-47s consists of at least around 5,000 machine guns loaded with 145 bullets each.

*How high you can fly using machine guns*

Download this post, as a Computable Document Format (CDF) file, and its accompanying models.

For many years, Wolfram Research has promoted and supported initiatives that encourage computation, programming, and STEM education, and we are always thrilled when efforts are taken by others to do the same. Code.org, in conjunction with Computer Science Education Week, is sponsoring an event to encourage educators and organizations across the country to dedicate a single hour to coding. This hour gives kids (and adults, too!) a taste of what it means to study computer science—and how it can actually be a creative, fun, and fulfilling process. Millions of students participated in the Hour of Code in past years, and instructors are looking for more engaging activities for their students to try. Enter the Wolfram Language.

Built into the Wolfram Language is the technology from Wolfram|Alpha that enables natural language input—and lets students create code just by writing English.

In addition to natural language understanding, the Wolfram Language also has lots of built-in functions that let you show core computation concepts with tiny amounts of code (often just one or two functions!).

With our newly released cloud products, you can get started for free!

To support the Hour of Code, Wolfram is putting together a workshop for instructors and parents to learn more about programming activities in the Wolfram Language. The workshop takes place December 4, 4–5pm EST. Register for the free event here. During the workshop, we will introduce the basics of the Wolfram Language and walk through several resources to get students coding, as well as demo our upcoming Wolfram Programming Lab offering.

Learning and experimenting with programming in the Wolfram Language doesn’t have to stop with the Hour of Code. Have students create a tweet-length program with Wolfram’s Tweet-a-Program. Compose a tweet-length Wolfram Language program and tweet it to @WolframTaP. Our Twitter bot will run your program in the Wolfram Cloud and tweet back the result.

Learn more about the Wolfram Language with the Wolfram Language Code Gallery. Covering a variety of fields, programming styles, and project sizes, the Wolfram Language Code Gallery shows examples of what can be done with the knowledge-based Wolfram Language—including deployment on the web or elsewhere.

There are other training materials and resources for learning the Wolfram Language. Find numerous free and on-demand courses available on our training site. The Wolfram Demonstrations Project is an open source database of close to 10,000 interactive apps that can be used as learning examples.

As sponsors of organizations like Computer-Based Math™, which is working toward building a completely new math curriculum with computer-based computation at its heart, and the *Mathematica* Summer Camp, where high school students with limited programming experience learn to code using *Mathematica*, we are acutely aware of how important programming is in schools today.

Congrats on getting your student or child involved with the Hour of Code, and we look forward to seeing what they create!


**Second prize in the ZEISS photography competition**

Recently the Department of Engineering at the University of Cambridge announced the winners of the annual photography competition, “The Art of Engineering: Images from the Frontiers of Technology.” The second prize went to Yarin Gal, a PhD student in the Machine Learning group, for his extrapolation of Van Gogh’s painting *Starry Night*, shown above. Readers can view this and similar computer-extended images at Gal’s website Extrapolated Art. An inpainting algorithm called PatchMatch was used to create the machine art, and in this post I will show how one can obtain similar effects using the Wolfram Language.

The term “digital inpainting” was first introduced in the “Image Inpainting” article at the SIGGRAPH 2000 conference. The main goal of inpainting is to restore damaged parts in an image. However, it is also widely used to remove or replace selected objects.

In the Wolfram Language, `Inpaint` is a built-in function. The region to be inpainted (or retouched) can be given as an image, a graphics object, or a matrix.

There are five different algorithms available in `Inpaint` that one can select using the `Method` option: “`Diffusion`,” “`TotalVariation`,” “`FastMarching`,” “`NavierStokes`,” and “`TextureSynthesis`” (default setting). “`TextureSynthesis`,” in contrast to other algorithms, does not operate separately on each color channel and it does not introduce any new pixel values. In other words, each inpainted pixel value is taken from the parts of the input image that correspond to zero elements in the region argument. In the example below, it is clearly visible that these properties of the “`TextureSynthesis`” algorithm make it the method of choice for removing large objects from an image.

The “`TextureSynthesis`” method is based on the algorithm described in “Image Texture Tools,” a PhD thesis by P. Harrison. This algorithm is an enhanced best-fit approach introduced in 1981 by D. Garber in “Computational Models for Texture Analysis and Texture Synthesis.” Parameters for the “`TextureSynthesis`” algorithm can be specified via two suboptions: “`NeighborCount`” (default: 30) and “`MaxSamples`” (default: 300). The first parameter defines the number of nearby pixels used for texture comparison, and the second parameter specifies the maximum number of samples used to find the best-fit texture.
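To give a feel for the simplest of these methods, here is a minimal diffusion-style inpainting sketch in Python: repeated neighbor averaging that pulls known boundary values into the hole. This is only the basic idea behind the “`Diffusion`” method, not the best-fit “`TextureSynthesis`” algorithm described above:

```python
import numpy as np

def diffusion_inpaint(image, mask, iterations=500):
    """Fill masked pixels by repeatedly averaging their four neighbors,
    diffusing the known boundary values into the region."""
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()          # crude initialization
    for _ in range(iterations):
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4
        img[mask] = avg[mask]              # only the masked region changes
    return img

# toy example: a smooth gradient image with a square hole in the middle
y = np.linspace(0.0, 1.0, 32)
image = np.tile(y[:, None], (1, 32))
mask = np.zeros(image.shape, dtype=bool)
mask[12:20, 12:20] = True
restored = diffusion_inpaint(image, mask)
print(float(np.abs(restored - image).max()))  # near zero for a smooth image
```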

Let’s go back to the extrapolation of Van Gogh’s painting. First, I import the painting and remove the border.

Next, I need to extend the image by padding it with white pixels to generate the inpainting region.
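The padding step is straightforward array surgery; sketched in Python for a grayscale image (the pad width of 8 pixels is arbitrary):

```python
import numpy as np

def extend_for_extrapolation(image, pad):
    """Pad a grayscale image with white pixels; the padded band
    becomes the region handed to the inpainting algorithm."""
    extended = np.pad(image, pad, mode="constant", constant_values=1.0)
    region = np.ones(extended.shape, dtype=bool)
    region[pad:-pad, pad:-pad] = False     # only the new border is inpainted
    return extended, region

img = np.random.rand(16, 16)
ext, region = extend_for_extrapolation(img, 8)
print(ext.shape, int(region.sum()))   # (32, 32) with 768 border pixels
```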

Now I can extrapolate the painting using the “`TextureSynthesis`” method.

Not too bad. Different effects can be obtained by changing the values of the “`NeighborCount`” and “`MaxSamples`” suboptions.

Our readers should experiment with other parameter values and artworks.*

This, perhaps, would make an original gift for the upcoming holidays. A personal artwork or a photo would make a great project. Or just imagine surprising your children by showing them an improvisation of their own drawing, like the one above of a craft by a fourteen-year-old girl. It’s up to your imagination. Feel free to post your experiments on Wolfram Community!

Download this post as a Computable Document Format (CDF) file.

*If you don’t already have Wolfram Language software, you can try it for free via Wolfram Programming Cloud or a trial of *Mathematica*.

So of course, Cumberbatch’s promotional video where he impersonates other beloved actors reached us as well, which got me wondering: could *Mathematica*’s machine learning capabilities recognize his voice, or could he fool a computer too?

I personally can’t stop myself from chuckling uncontrollably while watching his impressions; however, I wanted to look beyond the entertainment factor.

So I started wondering: Is he *actually* good at doing these impressions? Or are we all just charmed by his persona?

Is my psyche just being fooled by the meta-language, perhaps? If we take the data of pure voices, does he actually cut the mustard in matching these?

To determine the answer 10 years ago, we would have needed to stroll the streets, play audio snippets from the James Bond movies, *The Shining*, *Batman*, and Cumberbatch’s impressions to 300 people, and then survey whether those people were fooled.

But no need, if you have your *Mathematica* handy!

With *Mathematica*‘s machine learning capabilities, it’s possible to classify sample voice snippets easily, which means we can determine whether Benedict’s impressions would be able to fool a computer. So I set myself the challenge of building a decent enough database of voice samples, plus I took snippets from each of Benedict’s impression attempts, and I let *Mathematica* do its magic.

We built a path to each person’s snippet database, which *Mathematica* exported for analysis:

We imported all of the real voices:

The classifier was trained simply by providing the associated real voices to `Classify`; in the interest of speed, a pre-trained `ClassifierFunction` was loaded from cfActorWDX.wdx:

My audio database needed to include snippets of Benedict’s own voice, snippets of the impersonated actors’ own voices, and the impressions from Cumberbatch. The sources for the training were the following: Alan Rickman, Christopher Walken, Jack Nicholson, John Malkovich, Michael Caine, Owen Wilson, Sean Connery, Tom Hiddleston, and Benedict Cumberbatch. I used a total of 560 snippets, but of course, the more data used, the more reliable the results. The snippets needed to be as “clean” as possible (no laughter, music, chatter, etc. in the background).

These all needed to be exactly the same length (3.00 seconds), and we made sure all snippets were the same length by using this function in the Wolfram Language:
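The Wolfram Language function appears in the accompanying code; as a Python sketch of the same trim-or-pad logic on a raw sample array (44.1 kHz is my assumed sample rate):

```python
import numpy as np

def to_fixed_length(samples, sample_rate, seconds=3.0):
    """Trim or zero-pad a mono signal to exactly `seconds` long."""
    target = int(round(sample_rate * seconds))
    if len(samples) >= target:
        return samples[:target]
    return np.pad(samples, (0, target - len(samples)))

clip = np.random.randn(2 * 44100)            # a 2-second clip at 44.1 kHz
fixed = to_fixed_length(clip, 44100)
print(len(fixed) / 44100)                    # 3.0
```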

Some snippets weren’t single-channel audio files, so during the export stage we needed to exclude this factor as an additional feature in order to optimize our results:

Thanks go to Martin Hadley and Jon McLoone for the code.

Drum-roll… time for the verdict!

I have to break everyone’s heart now, and I’m not sure I want to be the one to do it… so I will “blame” *Mathematica*, because machine learning could indeed mostly tell the difference between the actors’ real voices and the impressions (bar two).

As the results below reveal, *Mathematica* provides 97–100% confidence on the impressions tested:

For most impressions, there is a very small reported probability of any classification other than Benedict Cumberbatch or Alan Rickman.

It might be worth noting that Rickman, Connery, and Wilson all have a slow rhythm to their speech, with many pauses (especially noticeable in the snippets I used), which could have confused the algorithm.

Now it’s time to be grown up about this, and not hold it against Benedict. He is still a beloved charmer, after all.

My admiration for him lives on, and I look forward to seeing him in *The Imitation Game*!

Download the accompanying code for this blog post as a Computable Document Format (CDF) file.

Thanksgiving is just around the corner, and that means you only have five weeks left to knock out your holiday shopping. Never fear, Wolfram is delivering amazing deals to customers across the globe, including North and South America, Australia, and parts of Asia and Africa to inspire a whole new year of computational creativity.

Give the perfect gift to your high school or college student with 20% off of *Mathematica Student Edition*, or treat yourself to the same discount on *Mathematica Home Edition*.

Or, for the engineering hobbyist or recreational system designer on your list, get 20% off of *SystemModeler* *Student* and *Home Editions*.

To make things extra merry, we are offering a Cyber Monday deal. US and Canada shoppers will see these discounts increase from 20% to 25% off, for even greater savings!

This holiday season, you can also get an extra three months of Wolfram|Alpha Pro free with a one-year subscription! View Step-by-step solutions for your math and chemistry queries, upload and analyze your own data and files, get extended computation time, and interact with plots and graphs—as well as receive access to Wolfram Problem Generator, where you’ll have unlimited practice problems with Step-by-step solutions.

Want to give the gift of W|A Pro to a special someone else instead? Send it in the form of an electronic gift card.

These special offers are available until December 31. If after all that you still have more items on your holiday list to check off, visit our online store for exclusive Wolfram merchandise! Happy Holidays!

What is haze? Technically, haze is scattered light: photons bumped around by the molecules in the air and deprived of their original color, which they got by bouncing off the objects you are trying to see. The problem gets worse with distance: the more the light has to travel, the more it gets scattered around, and the more the scene takes on that foggy appearance.

What can we do? What can possibly help our poor photographer? Science, of course.

Wolfram recently attended and sponsored the 2014 IEEE International Conference on Image Processing (ICIP), which ended October 30 in Paris. It was a good occasion to review the previous years’ best papers at the conference, and we noticed an interesting take on the haze problem proposed by Chen Feng, Shaojie Zhuo, Xiaopeng Zhang, Liang Shen, and Sabine Süsstrunk [1]. Let’s give their method a try and implement their “dehazing” algorithm.

The core idea behind the paper is to leverage the different susceptibilities of the light being scattered, which depend on the wavelength of the light. Light with a larger wavelength, such as red light, is more likely to travel around the dust, the smog, and all the other particles present in the air than shorter wavelength colors, like green or blue. Therefore, the red channel in an image carries better information about the non-hazy content of the scene.

But what if we could go even further? What prevents us from using the part of the spectrum slightly beyond the visible light? Nothing really—save for the fact we need an infrared camera.

Provided we are well equipped, we can then use the four channels of data (near infrared, red, green, and blue) to estimate the haze color and distribution and proceed to remove it from our image.

In order to get some sensible assessments, we need a sound model of how an image is formed. In a general haze model, the content of each pixel is composed of two parts:

- The light reflected by the objects in the scene (which will be called **J**)
- The light scattered by the sky (**A**)

It is a good approximation to say that the “color of the air” **A** is constant for a specific place and time, while the “real color” **J** is different for each pixel. Depending on the amount of air the light had to travel through, a fraction (**t**) of the real color is transmitted to the camera, and the remaining portion (1-**t**) is replaced by scattered light.

We can summarize these concepts in a single haze equation:

*I*^{RGB} = *Jt* + *A*(1 – *t*)   (1)

We need to determine **J**, **t**, and **A**. Let’s first estimate the global air-light color **A**. For a moment we will assume that portions of the image are extremely hazed (no transmission, i.e. **t** = 0). Then we can estimate the color **A** simply from the pixel values of those extremely hazed regions.

On the image below, a mouse click on the haziest region yields a value for **A**.

However, our assumption that the transmission is zero in the haziest regions clearly does not hold, as we can always distinguish distant objects through the haze. This means that for images where haze is never intense, it is not possible to pick **A** with a click of the mouse, and we have to resort to some image processing to produce a solid estimate for images with all types of haze.

Let’s say first that it has proven difficult to obtain good dehazing results on our example images when reproducing the ICIP paper’s method for estimating the air-light color. As an alternative, we estimate the air-light color using the concept of the dark channel.

The so-called dark channel prior is based on the observation that among natural images, it is almost always the case that within the vicinity of each pixel, one of the three channels (red, green, or blue) is much darker than the others, mainly because of the presence of shadows, dark surfaces, and colorful objects.

If for every pixel at least one channel must be naturally dark, we can assume that wherever this condition does not hold, it is due to the presence of scattered light—that is, the hazed region we’re looking for. So we look for a good estimate of **A** among the brightest pixels of our image (maximum haze or illumination) within the region defined by a high value in the dark channel (highest haze).

We extract the positions of the brightest pixels in the dark channel images, extract the corresponding pixel values in the hazed image, and finally cluster these pixel values:

The selected pixels marked in red below will be clustered; here they all belong to a single region, but it may not be the case on other images:

We are looking for the cluster with the highest average luminance:

This is our estimate of the air-light color:

Looking once more at equation (1), we’ve made some progress, because we are *only* left with computing the transmission **t** and the haze-free pixel value **J** for each pixel:

Since we choose an optimization approach to solve this problem, we first compute coarse estimates, **t0** and **J0**, that will serve as initial conditions for our optimization system.

On to finding a coarse estimate for the transmission **t0**. Here’s the trick and an assumption: If we assume the transmission does not change too much within a small region of the image (that we are calling Ω), we can think of **t0** as locally constant. Dividing both sides of equation (1) by **A** and applying the local minimum operator *min* both on the color channels and the pixels in each region Ω yields:

But the double minimum applied to **J**/**A** is exactly the definition of the dark channel of the haze-free image **J**, and, since **A**_{k} is a positive number, we infer that this term of the equation is practically zero everywhere, given our prior assumption that natural images have at least one almost-zero channel in the pixels of any region. Using this simplification yields:

This is the **t0** image. The darker the image area, the hazier it is assumed to be:

Now the real transmission map cannot be that “blocky.” We’ll take care of that in a second. In the ICIP 2013 paper, there is another clever process to make sure we keep a small amount of haze so that the dehazed image still looks natural. This step involves information from the near-infrared image; we describe this step in the companion notebook that you can download at the bottom of this post. Here is an updated transmission map estimate after this step:

To further refine this estimate by removing these unwanted block artifacts, we apply a technique named guided filtering. It is beyond the scope of this blog post to describe the details of a guided filter. Let’s just say that here, the guided filtering of the transmission map **t0**, using the original hazed image as a guide, allows us, by jointly processing the filtered image and the guide image, to realign the gradient of **t0** with the gradient of the hazed image—a desired property that was lost due to the blocking artifacts. The function `ImageGuidedFilter` is defined in the companion notebook at the end of this post.

As too much dehazing would not look realistic, and too little dehazing would look too, well, hazed, we adjust the transmission map **t0** by stretching it to run from 0.1 to 0.95:

Thanks to our estimates for the air-light color **A** and the transmission map **t0**, another manipulation of equation (1) gives us the estimate for the dehazed image **J0**:
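For readers who want to experiment outside the Wolfram Language, the coarse estimates can also be sketched in Python, following the dark channel recipe of reference [2] (the patch size and the ω haze-retention factor are my assumptions):

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel minimum over color channels, then over a patch neighborhood."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    return np.array([[padded[i:i + patch, j:j + patch].min()
                      for j in range(w)] for i in range(h)])

def coarse_dehaze(img, A, omega=0.95, t_min=0.1):
    """Coarse transmission t0 from the dark channel prior,
    then invert the haze equation I = J*t + A*(1 - t)."""
    t0 = np.clip(1.0 - omega * dark_channel(img / A), t_min, 1.0)
    J0 = (img - A) / t0[..., None] + A
    return np.clip(J0, 0.0, 1.0), t0

# a uniformly hazy patch: transmission collapses to its floor value
hazy = np.full((10, 10, 3), 0.9)
J0, t0 = coarse_dehaze(hazy, A=0.9)
print(float(t0[5, 5]))   # 0.1
```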

You can compare with the original image just by positioning your mouse on top of the graphic:

It’s a good start, but a flat subtraction may be too harsh for certain areas of the image or introduce some undesirable artifacts. In this last part, we will use some optimization techniques to try to reconcile this gap and ask for the help of the infrared image to keep a higher level of detail even in the most hazed region.

The key is in the always useful Bayes’ rule for inference. The question we are asking ourselves here is which pair of **t** and **J** is the most likely to produce the observed images **I**_{RGB} and **I**_{NIR}.

In the language of probability, we want to calculate the joint distribution of **t** and **J** given the observed images:

Using Bayes’ theorem, we rewrite it as:

And simplify it assuming that the transmission map **t** and the reflectance map **J** are uncorrelated, so their joint probability is simply the product of their individual ones:

In order to write this in a form that can be optimized, we now assume that each probability term is distributed according to:

That is, each term peaks in correspondence with its “best candidate.” This allows us to exploit one of the properties of the exponential function—*e*^{-a}*e*^{-b}*e*^{-c}… = *e*^{-(a+b+c+…)}—to turn the product of probabilities into a sum of terms that we can minimize.

We are now left with the task of finding the “best candidate” for each term, so let’s dig a bit into their individual meaning guided by our knowledge of the problem.

The first term is the probability to have a given RGB image given specific **t** and **J**. As we are working within the framework of equation (1)—the haze model *I*^{RGB} = *Jt* + *A*(1 – *t*)—the natural choice is to pick:

||*I*_{RGB} – *Jt* – *A*(1 – *t*)||

The second term relates the color image to the infrared image. We want to leverage the infrared channel for details about the underlying structure, because it is in the infrared image that the small variations are less susceptible to being hidden by haze. We do this by establishing a relationship between the gradients (the 2D derivatives) of the infrared image and the reconstructed image:

||▽*J* – ▽*I*_{NIR}||

This relation should take into account the distance between the scene element and the camera—being more important for higher distances. Therefore we multiply it by a coefficient inversely related to the transmission:

The last two terms are the transmission and reflection map prior probabilities. This corresponds to what we expect to be the most likely values for each pixel before any observation. Since we don’t have any information in this regard, a safe bet is to assume them equal to a constant, and since we don’t care about which constant, we just say that their derivative is zero everywhere, so the corresponding terms are simply:

||▽*t*||

And:

||▽*J*||

Putting all these terms together brings us to the final minimization problem:

where the regularization coefficients λ_{1,2,3} and the exponents α and β are taken from the ICIP paper.

To solve this problem, we can insert the initial conditions **t0** and **J0**, move around a bit, and see if we are doing better. If that is the case, we can then use the new images (let’s call them **t1** and **J1**) for a second step and calculate **t2** and **J2**. After many iterations, when we feel the new images are not much better than those of the previous step, we stop and extract the final result.

This new image **J** tends to be slightly darker than the original one; in the paper, a technique called tone mapping is applied to correct for this effect, where the channel values are rescaled in a nonlinear fashion to adjust the illumination:

*V’* = *V ^{γ}*

During our experiments, we found instead that we were better off applying the tone mapping first, as it helped during the optimization.
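The tone mapping itself is a one-liner; as a Python sketch with an arbitrary *γ* (a *γ* below 1 brightens the image):

```python
import numpy as np

def tone_map(img, gamma=0.8):
    """Nonlinear channel rescaling V' = V**gamma."""
    return np.clip(img, 0.0, 1.0) ** gamma

dark = np.array([[0.2, 0.5], [0.7, 1.0]])
print(np.round(tone_map(dark), 3))   # every value below 1 moves up
```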

To help us find the correct value for the exponent *γ*, we can look at the difference between the low-haze—that is, high-transmission—parts of the original image **I**_{RGB} and the reflectance map **J0**:

We now implement a simplified version of the steepest descent algorithm to solve the optimization problem of equation (6). The function `IterativeSolver` is defined in the companion notebook at the end of this post.
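Stripped of the imaging specifics, the skeleton of such a steepest descent solver looks like this (a generic Python sketch on a toy quadratic, not the notebook’s `IterativeSolver`):

```python
import numpy as np

def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10000):
    """Repeat x -> x - step * grad(x) until the update is negligible."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = x - step * grad(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# toy problem: minimize ||x - c||^2, whose gradient is 2*(x - c)
c = np.array([1.0, -2.0])
x_star = steepest_descent(lambda x: 2 * (x - c), np.zeros(2))
print(np.round(x_star, 3))   # converges to c
```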

When that optimization is done, our final best guess for the amount of haze in the image is:

And finally, you can see the unhazed result below. To compare it with the original, hazy image, just position your mouse on top of the graphics:

We encourage you to download the companion CDF notebook to engage deeper in dehazing experiments.

Let’s now leave the peaceful mountains and the award-winning dehazing method from ICIP 2013 and move to Paris, where ICIP 2014 just took place. Wolfram colleagues staffing our booth at the conference confirmed that dehazing (and air pollution) is still an active research topic. Attending such conferences has proven to be an excellent opportunity to demonstrate how the Wolfram Language and *Mathematica* 10 can facilitate research in image processing, from investigation and prototyping to deployment. And we love to interact with experts so we can continue to develop the Wolfram Language in the right direction.

Download this post as a Computable Document Format (CDF) file.

References:

[1] C. Feng, S. Zhuo, X. Zhang, L. Shen, and S. Süsstrunk. “Near-Infrared Guided Color Image Dehazing,” IEEE International Conference on Image Processing, Melbourne, Australia, September 2013 (ICIP 2013).

[2] K. He, J. Sun, X. Tang. “Single Image Haze Removal Using Dark Channel Prior,” IEEE Conference on Computer Vision and Pattern Recognition, Miami, Florida, June 2009 (CVPR’09).

Images taken from:

[3] L. Schaul, C. Fredembach, and S. Süsstrunk. “Color Image Dehazing Using the Near-Infrared,” IEEE International Conference on Image Processing, Cairo, Egypt, November 2009 (ICIP’09).

Professor of Materials Science and Engineering Emeritus, University of Illinois

**Mark Kotanchek**

CEO, Evolved Analytics LLC

**John Michopoulos**

Head of Computational Multiphysics Systems Laboratory, Naval Research Laboratory

**Rodrigo Murta**

Retail Intelligence Manager, St Marche Supermercados

Professor of Mathematics, Randolph-Macon College

**Yves Papegay**

Research Scientist, French National Institute for Research in Computer Science and Control

**Chad Slaughter**

System Architect, Enova Financial

Earlier this year, European Innovator Award winners were announced at the European Wolfram Technology Conference in Frankfurt, Germany:

Associate Professor, Department of Analysis, University of Szeged

Professor, Institute of Earth and Environmental Sciences, University of Potsdam

Congratulations to all of our 2014 Wolfram Innovator Award winners! Read more about our deserving recipients and their accomplishments.

In Tweet-a-Program’s first few exciting months, we’ve already seen a number of awesome fractal examples like these:

To win, tweet your submissions to @WolframTaP by the end of the week (11:59pm PDT on Sunday, November 23). So that you don’t waste valuable code space, we don’t require a hashtag with your submissions. However, we do want you to share your code with your friends by retweeting your results with hashtag #MandelbrotWL.

We can’t wait to see what you come up with!
