*Composite Materials in Piping Applications: Design, Analysis and Optimization of Subsea and Onshore Pipelines*

Dimitrios G. Pavlou explains the design, analysis, and performance of composite materials in oil, gas, water, and waste-water piping. The text is accompanied by a CD-ROM containing algorithms for pipe design and analysis using *Mathematica*.

*Topology of Digital Images: Visual Pattern Discovery in Proximity Spaces*

James F. Peters carries forward recent work on visual patterns and structures in digital images and introduces a near set-based topology in this book. To provide a better understanding of digital images, *Mathematica* scripts are used to illustrate the fabric and essential features of images.

*Green’s Functions in the Theory of Ordinary Differential Equations*

In this text written for graduate students and researchers interested in the theoretical underpinnings of boundary value problem solutions, Alberto Cabada provides a complete and exhaustive study of Green’s functions, including two appendices with *Mathematica* content.

*Introduction to Partial Differential Equations for Scientists and Engineers Using Mathematica*

Written by Kuzman Adzievski and Abul Hasan Siddiqi, this textbook gives a mathematical introduction to PDEs at the undergraduate level. The authors use *Mathematica* throughout to illustrate and explore the solution methods.

*Advanced Engineering Mathematics*

This book by Larry Turyn provides accessible and comprehensive mathematical preparation for advanced undergraduate and beginning graduate students taking engineering courses. It includes an online student manual with *Mathematica* solutions.

*Nonlinear Mechanics of Thin-Walled Structures: Asymptotics, Direct Approach and Numerical Analysis*

Yury Vetyukov presents a hybrid approach to the mechanics of thin bodies. Analytical conclusions and closed-form solutions of particular problems are validated against numerical results. The majority of the book’s simulations were performed in the Wolfram *Mathematica* environment, and compact source code is included.

*Celestial Dynamics: Chaoticity and Dynamics of Celestial Systems*

Rudolf Dvorak and Christoph Lhotka convey the sophisticated tools needed to calculate exoplanet motion and interplanetary space flight using *Mathematica*. The book includes supplements for the practical use of its formulas.

*Splines and Compartment Models: An Introduction*

Karl-Ernst Biebler and Michael Wodny present methods of mathematical modeling from two points of view. Splines provide a general approach, while compartment models serve as examples for context related to modeling. The book includes 16 *Mathematica* programs for solving selected problems.

*Differentsial’naya geometriya v srede Mathematica: Informatsionnye i komp’yuternye tekhnologii v matematike*

(*Differential Geometry with Mathematica*)

This new Russian-language book by Tat’yana Kapustina discusses key problems in differential geometry and provides *Mathematica* code for their solutions.

*Fundamentos Para Cálculo usando Wolfram|Alpha e Scilab*

(*Foundations of Calculus—Using Wolfram|Alpha and Scilab*)

Written in Portuguese, this text by José de Oliveira Siqueira addresses the transition in calculus between secondary and higher education, using Wolfram|Alpha examples throughout.

*Fundamentos de Métodos Quantitativos Aplicados em Administração, Economia, Contabilidade e Atuária usando Wolfram|Alpha e Scilab*

(*Foundations of Quantitative Methods in Business Administration, Economics, Accounting and Actuarial Science—Using Wolfram|Alpha and Scilab*)

Also by José de Oliveira Siqueira, *Foundations of Quantitative Methods* presents major mathematical methods applied to various social sciences with Wolfram|Alpha examples.

SpinDynamica is an open-source package that Professor Levitt continues to work on as a hobby in his spare time, but the SpinDynamica community also contributes add-ons to bring additional functionality to researchers.

Professor Levitt graciously agreed to answer a few of our questions about his work, *Mathematica*, and SpinDynamica. He’s hopeful that as word spreads, others will submit add-ons that enhance the core functionality of SpinDynamica.

**What is your history in the field of magnetic resonance?**

I’ve been researching in magnetic resonance since I was an undergraduate project student in Oxford in the late 1970s. I went on to do a PhD in Oxford, researching in nuclear magnetic resonance (NMR) with Ray Freeman. After that, I went off on a long sequence of postdoctoral positions. I worked with Richard Ernst in Zürich, who later won the Nobel Prize for his work on NMR.

I researched at MIT for about five years, and then became a professor in Stockholm, Sweden, before moving back to the UK in 2001. I now lead a magnetic resonance section at the University of Southampton. Most of my research has involved developing the theory and technology of NMR. It’s an amazingly rich field, since NMR is time-dependent quantum mechanics in action, and allows an instant coupling between a theoretical idea, a numerical simulation, and a real experiment.

There are now many thousands of distinct NMR experiments, involving different sequences of radio frequency pulses and switched magnetic fields, providing information on everything from biomolecular structure to cancer diagnosis to quantum computing. It really is a staggeringly versatile field of research, and I feel very lucky to have stumbled into it and to have made my career in it.

**When did you begin working with Mathematica?**

I started using *Mathematica* seriously for magnetic resonance research in the 1990s in Stockholm. During my PhD and in Zürich, I had written a lot of low-level code for controlling an NMR spectrometer, as well as graphical FORTRAN simulations of NMR experiments. Later on, while I was at MIT, I developed a lot of FORTRAN computer code for simulating magnetic resonance experiments, which I tried to make as general as possible. However, I always recognized the limitations and inelegance of the language.

When I first encountered *Mathematica*, I remember a sense of recognition: “Wow, this is exactly the computer language I would have invented myself if I had known how.” However, I do remember that at the time *Mathematica* seemed slow in execution, and there were times of frustration. Nevertheless, I stuck with it. Happily, the progress of hardware and the continued development of *Mathematica* made my commitment worthwhile.

**What can you tell us about SpinDynamica and how you created it?**

I started to use *Mathematica* seriously for NMR research in Stockholm, partly in combination with a book that I was writing (*Spin Dynamics*), for which I wanted to generate informative graphics and check the equations. At that time, I did experiment with creating a set of modules for numerical simulations of NMR experiments, as well as generating analytical results, but I did not develop this very far.

Several other numerical simulation packages for NMR came out. Although they were numerically fast for specific classes of problems, I still felt that they were not as general and as elegant as I would like. Furthermore, our group was getting into experiments that required certain types of numerical simulation that were not catered for. So at some point in the early 2000s I set about seriously developing general packages for both symbolic and numerical calculations of magnetic resonance, within the *Mathematica* environment.

**How do you use SpinDynamica in your research?**

*Mathematica* in general, and SpinDynamica in particular, have become completely central to how I develop and test theoretical ideas. So it’s not as if I develop an idea and then test it with SpinDynamica—I actually use SpinDynamica as a tool to develop the idea in the first place. It’s a bit hard to explain, but it works for me. There’s something about *Mathematica* that seems to match perfectly the way I think and create.

**Is there an interesting example or discovery you’ve come across while working with Mathematica and SpinDynamica?**

A central topic of research in our group concerns something called long-lived spin states. These are certain quantum states of coupled magnetic nuclei that are very weakly coupled to the environment. They may be used for storing quantum information in nuclear spin systems for long times. (We have demonstrated over 30 minutes, which is an incredibly long time for a quantum effect in a room-temperature liquid.)

In the jargon of magnetic resonance, the equilibration of the nuclear quantum system with the environment is called relaxation. So these special nuclear spin states have very slow relaxation. It is a surprising fact, but true, that although the relaxation theory of NMR has been extensively developed with thousands of research papers since the 1960s and several Nobel prizes along the way, the existence of these states had been overlooked.

It was only when the symmetry properties of the relaxation were examined with *Mathematica* (using a precursor of SpinDynamica) that the presence of such states was predicted, and then demonstrated experimentally by our group in 2004. Our group is intensively researching the theory of these states and their exploitation in practical NMR experiments and, hopefully, in clinical MRI as well. Amongst other things, we are working with collaborators to develop agents that use long-lived states to detect cancer.

**What impact do you think SpinDynamica could have on future magnetic resonance research?**

That is hard to predict. There are several simulation packages in the community, many of which require less user intelligence, and which have a much faster execution for specific problems, than SpinDynamica. SpinDynamica is immensely powerful, but it does require that users have a good theoretical understanding in order to use it.

That weakness could be addressed by including additional packages for simulating common experimental situations without major theoretical understanding. The problem is that, at the moment, SpinDynamica remains a hobby project that is developed almost exclusively by me in my spare time. So although it is a superb tool for our particular branch of research, which demands a high theoretical level, there are many aspects that are rather undeveloped, including some important functionality that I have simply never had time to develop.

Nevertheless, I think the core functionality of SpinDynamica is powerful and stable, and I hope that the community will take it and build on it. That is slowly starting to happen. I have taught several graduate-level courses using SpinDynamica to explain the quantum-mechanical concepts of magnetic resonance, so there is take-up by a small but growing group of scientists. I think the impact will become much greater when I find time to write up a proper scientific paper on the architecture and functionality of SpinDynamica. Unfortunately my schedule makes that unlikely to happen soon.

Last month while at SXSW 2014, Wolfram helped provide support for Slashathon, the first-ever music-focused hackathon. The event was hosted by Slash from Guns N’ Roses, and the winning hack will be used to help release his new album. Wolfram provided mentoring for the competition in the form of onsite coding experts and technology access.

Also last month, Wolfram supported both hackBCA and HackPrinceton in New Jersey for high school and college students, respectively. In addition to having Wolfram programming experts available as mentors, Stephen Wolfram attended both of these events, where he spoke about the Wolfram Language and what the Wolfram technology stack is making possible.

At hackBCA, several projects made use of the Wolfram|Alpha API and the emerging Wolfram Cloud platform. We also saw some neat uses of Wolfram technologies at HackPrinceton. The Wolf Cocoa team developed a solution for making OS X apps by creating Wolfram Language bindings to the Objective-C runtime. Another group, Pokebble, used the Wolfram|Alpha API to enable users to play Pokémon on the wearable Pebble smart watch. And the third place overall software project winner, α-TeX, used the Wolfram Cloud to enable users to embed computed results into LaTeX.

This weekend Wolfram is again going where the coders are. Which isn’t far, as HackIllinois—the first-ever student-run hackathon at the University of Illinois at Urbana-Champaign—will be happening right down the road from Wolfram’s headquarters. Over 1,000 college students from across the country will be converging on the UIUC campus to imagine, learn, and launch their latest ideas as mobile apps, web apps, or other software and hardware projects.

As an event sponsor, Wolfram will be on hand to give a tech talk and demo our technologies, and to provide other event support. We’re excited to see what the winning teams can produce in only 36 hours!

Whether for educational purposes or fully commercial applications, we’re glad to see hackathons catching on as a way to develop the next generation of cutting-edge programmers. Maybe we’ll see you or your students at future hackathons. In the meantime, happy coding!

If you missed Stephen live in Austin—and even if you didn’t—the “speaker’s cut” of his featured talk, “Injecting Computation Everywhere,” was posted to his Blog last week. In it, Stephen presents his vision of a future where there is no distinction between code and data, and showcases the Wolfram Language through examples and demos using Wolfram Programming Cloud, Data Science Platform, and other upcoming Wolfram technologies.

Response to Stephen’s talk was overwhelmingly positive, as attendees were inspired and impressed by the possibilities of the Wolfram technology stack. *Business Insider*, *Popular Science*, and *VentureBeat* were just a few of the media outlets on hand to cover the event. In other favorable receptions: Immediately following his featured talk, a book signing of Stephen’s award-winning work, *A New Kind of Science*, was so well attended that the SXSW bookstore ran out of copies!

All this activity paved the way for interesting conversations in the Wolfram booth throughout the event, where attendees of every age and level had the opportunity to see the Wolfram Cloud and Wolfram Language in action, talk to Wolfram experts, and get hands-on experience with our technologies, including the Wolfram Language and *Mathematica* running on the Raspberry Pi.

One of the most popular activities in the Wolfram booth at SXSW was “live-coding” with Stephen and other Wolfram team members. Some neat examples of these spontaneous coding demos—from color-mapping countries of the world by GDP and computing stock values over time, to webcam face detection and pop art creation—can be seen and discussed further in the online Wolfram Community.

And in a unique mashup, *Rolling Stone* captured the moment when computational genius met musical genius at Slashathon, the first-ever music-focused hackathon. The event was hosted by Slash from Guns N’ Roses, and Wolfram provided mentoring for the competition in the form of onsite coding experts and technology access.

If we missed you at SXSW 2014, perhaps we’ll see you in Austin next year. In the meantime, consider joining us for our European Wolfram Technology Conference in Germany in May. You can also look for Wolfram in Boston in May at Bio-IT World, or in Indianapolis in June at the ASEE Annual Conference. Bookmark our events page for updates on future trade shows and conferences where you can connect with Wolfram!

*Two weeks ago I spoke at SXSW Interactive in Austin, TX. Here’s a slightly edited transcript (it’s the “speaker’s cut”, including some demos I had to abandon during the talk):*

Well, I’ve got a lot planned for this hour.

Basically, I want to tell you a story that’s been unfolding for me for about the last 40 years, and that’s just coming to fruition in a really exciting way. And by *just* coming to fruition, I mean pretty much today. Because I’m planning to show you today a whole lot of technology that’s the result of that 40-year story—that I’ve never shown before, and that I think is going to be pretty important.

I always like to do live demos. But today I’m going to be pretty extreme, showing you a lot of stuff that’s very, very fresh. And I hope at least a decent fraction of it is going to work.

OK, here’s the big theme: taking computation seriously. Really understanding the idea of computation. And then building technology that lets one inject it everywhere—and then seeing what that means.

I’ve pretty much been chasing this idea for 40 years. I’ve been kind of alternating between science and technology—and making these bigger and bigger building blocks. Kind of making this taller and taller stack. And every few years I’ve been able to see a bit farther. And I think making some interesting things. But in the last couple of years, something really exciting has happened. Some kind of grand unification—which is leading to a kind of Cambrian explosion of technology. Which is what I’m going to be showing you pieces of for the first time here today.

But just for context, let me tell you a bit of the backstory. Forty years ago, I was a 14-year-old kid who’d just started using a computer—which was then about the size of a desk. I was using it not so much for its own sake, but instead to try to figure out things about physics, which is what I was really interested in. And I actually figured out a few things—which even still get used today. But in retrospect, I think the most important thing I figured out was kind of a meta thing. That the better the tools one uses, the further one can get. Like I was never good at doing math by hand, which in those days was a problem if you wanted to be a physicist. But I realized one could do math by computer. And I started building tools for that. And pretty soon me with my tools were better than almost anyone at doing math for physics.

And back in 1981—somewhat shockingly in those days for a 21-year-old professor type—I turned that into my first product and my first company. And one important thing is that it made me realize that products can really drive intellectual thinking. I needed to figure out how to make a language for doing math by computer, and I ended up figuring out these fundamental things about computation to be able to do that. Well, after that I dived back into basic science again, using my computer tools.

And I ended up deciding that while math was fine, the whole idea of it really needed to be generalized. And I started looking at the whole universe of possible formal systems—in effect the whole computational universe of possible programs. I started doing little experiments. Kind of pointing my computational telescope into this computational universe, and seeing what was out there. And it was pretty amazing. Like here are a few simple programs.

Some of them do simple things. But some of them—well, they’re not simple at all.

This is my all-time favorite, because it’s the first one like this that I saw. It’s called rule 30, and I still have it on the back of my business cards 30 years later.

Trivial program. Trivial start. But it does something crazy. It sort of just makes complexity from nothing. Which is a pretty interesting phenomenon. That I think, by the way, captures a big secret of how things work in nature. And, yes, I’ve spent years studying this, and it’s really interesting.

But when I was first studying it, the big thing I realized was: I need better tools. And basically that’s why I built *Mathematica*. It’s sort of ironic that *Mathematica* has math in its name. Because in a sense I built it to get beyond math. In *Mathematica* my original big idea was to kind of drill down below all the math and so on that one wanted to do—and find the computational bedrock that it could all be built on.

And that’s how I ended up inventing the language that’s in *Mathematica*. And over the years, it’s worked out really well. We’ve been able to build ever more and more on it.

And in fact *Mathematica* celebrated its 25th anniversary last year—and in those 25 years it’s gotten used to invent and discover and learn a zillion things—in pretty much all the universities and big companies and so on around the world. And actually I myself managed to carve out a decade to actually use *Mathematica* to do science myself. And I ended up discovering lots of things—scientific, technological and philosophical—and wrote this big book about them.

Well, OK, back when I was a kid something I was always interested in was systematizing information. And I had this idea that one day one should be able to automate being able to answer questions about basically anything. I figured out a lot about how to answer questions about math computations. But somehow I imagined that to do this in general, one would need some kind of general artificial intelligence—some sort of brain-like AI. And that seemed very hard to make.

And every decade or so I would revisit that. And conclude that, yes, that was still hard to make. But doing the science I did, I realized something. I realized that if one even just runs a tiny program, it can end up doing something of sort of brain-like complexity.

There really isn’t ultimately a distinction between brain-like intelligence, and this. And that’s got lots of implications for things like free will versus determinism, and the search for extraterrestrial intelligence. But for me it also made me realize that you shouldn’t need a brain-like AI to be able to answer all those questions about things. Maybe all you need is just computation. Like the kind we’d spent years building in *Mathematica*.

I wasn’t sure if it was the right decade, or even the right century. But I guess that’s the advantage of having a simple private company and being in charge; I just decided to do the experiment anyway. And, I’m happy to say, it turned out it was possible. And we built Wolfram|Alpha.

You type stuff in, in natural language. And it uses all the curated data and knowledge and methods and algorithms that we’ve put into it, to basically generate a report about what you asked. And, yes, if you’re a Wolfram|Alpha user, you might notice that Wolfram|Alpha on the web just got a new spiffier look yesterday. Wolfram|Alpha knows about all sorts of things. Thousands of domains, covering a really broad area. Trillions of pieces of data.

And indeed, every day many millions of people ask it all sorts of things—directly on the website, or through its apps or things like Siri that use it.

Well, OK, so we have *Mathematica*, which has this kind of bedrock language for describing computations—and for doing all sorts of technical computations. And we also have Wolfram|Alpha—which knows a lot about the world—and which people interact with in this sort of much messier way through natural language. Well, *Mathematica* has been growing for more than 25 years, Wolfram|Alpha for nearly 5. We’ve continually been inventing ways to take the basic ideas of these systems further and further.

But now something really big and amazing has happened. And actually for me it was catalyzed by another piece: the cloud.

Now I didn’t think the cloud was really an intellectual thing. I thought it was just sort of a utility. But I was wrong. Because I finally understood how it’s the missing piece that lets one take kind of the two big approaches to computation in *Mathematica* and in Wolfram|Alpha and make something just dramatically bigger from them.

Now, I’ve got to tell you that what comes out of all of this is pretty intellectually complicated. But it’s also very very directly practical. I always like these situations. Where big ideas let one make actually really useful new products. And that’s what’s happened here. We’ve taken one big idea, and we’re making a bunch of products—that I hope will be really useful. And at some level each product is pretty easy to explain. But the most exciting thing is what they all mean together. And that’s what I’m going to try to talk about here. Though I’ll say up front that even though I think it’s a really important story, it’s not an easy story to tell.

But let’s start. At the core of pretty much everything is what we call the Wolfram Language. Which is something we’re just starting to release now.

The core of the Wolfram Language has been sort of incubating in *Mathematica* for more than 25 years. It’s kind of been proven there. But what just happened is that we got all these new ideas and technology from Wolfram|Alpha, and from the Cloud. And they’ve let us make something that’s really qualitatively different. And that I’m very excited about.

So what’s the idea? It’s really to make a language that’s knowledge based. A language where huge amounts of knowledge about computation and about the world are built right into the language. You see, most computer languages kind of stay close to the basic operations of the machine. They give you lots of good ways to manage code you build. And maybe they have add-on libraries to do specific things.

But our idea with the Wolfram Language is kind of the opposite. It’s to make a language that has as much built in as possible. Where the language itself does as much as possible. To make everything as automated as possible for the programmer.

OK. Well let’s give it a try.

You can use the Wolfram Language completely interactively, using the notebook interface we built for *Mathematica*.

OK, that’s good. Let’s do something a little harder:

Yup, that’s a big number. Kind of looks like a bunch of random digits. Might be like 60,000 data points of sensor data.

How do we analyze it? Well, the Wolfram Language has all that stuff built in.

So like here’s the mean:

And the skewness:

Or hundreds of other statistical tests. Or visualizations.
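The number and its outputs were shown on screen rather than in this transcript. As a sketch of the same steps, here is a hypothetical stand-in dataset, the roughly 60,000 digits of 2^200000, analyzed with the same built-in functions:

```wolfram
(* Hypothetical stand-in for the demo data: the ~60,000 digits of 2^200000 *)
data = N @ First @ RealDigits[2^200000];
Mean[data]       (* average digit value *)
Skewness[data]   (* asymmetry of the digit distribution *)
Histogram[data]  (* one of many built-in visualizations *)
```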

That’s kind of weird actually. But let me not get derailed trying to figure out why it looks like that.

OK. Here’s something completely different. Let’s have the Wolfram Language go to some kind volunteer’s Facebook account and pull out their friend network:

OK. So that’s a network. The Wolfram Language knows how to deal with those. Like let’s compute how that breaks into communities:

Let’s try something different. Let’s get an image from this little camera:

OK. Well now let’s do something to that. We can just take that image and feed it to a function:

So now we’ve gotten the image broken into little pieces. Let’s make that dynamic:

Let’s rotate those around:

Let’s like even sort them. We can make some funky stuff:

OK. That’s kind of cool. Why don’t we tweet it?

OK. So the whole point is that the Wolfram Language just intrinsically knows a lot of stuff. It knows how to analyze networks. It knows how to deal with images—doing all the fanciest image processing. But it also knows about the world. Like we could ask it when the sun rose this morning here:

Or the time from sunrise to sunset today:

Or we could get the current recorded air temperature here:

Or the time series for the past day:

OK. Here’s a big thing. Based on what we’ve done for Wolfram|Alpha, we can understand lots of natural language. And what’s really powerful is that we can use that to refer to things in the real world.

Let’s just type `control-= nyc`:

And that just gives us the entity of New York City. So now we can find the temperature difference between here and New York City:

OK. Let’s do some more:

Let’s find the lengths of those borders:

Let’s put that in a grid:

Or maybe let’s make a word cloud out of that:

Or we could find all the former Soviet countries:

And let’s find their flags:

And let’s like find which is closest to the French flag:

Pretty neat, eh?

Or let’s take the first few former Soviet republics. And generate maps of their capital cities. With 10-mile discs marked:

I think it’s pretty amazing that you can do that kind of thing right from inside a programming language, with just a line of code.
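That line of code can be sketched as follows, with a few republics picked by hand for illustration:

```wolfram
republics = {"Lithuania", "Latvia", "Estonia"};  (* a few republics, chosen by hand *)
GeoGraphics[GeoDisk[CountryData[#, "CapitalCity"], Quantity[10, "Miles"]]] & /@ republics
```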

And, you know, there’s a huge amount of knowledge built into the Wolfram Language. We’ve been building this for more than a quarter of a century.

There’s knowledge about algorithms. And about the world.

There are two big principles here. The first is maximum automation: automate as much as possible. You define what you want the language to do, then it’s up to it to figure out how to do it. There might be hundreds of algorithms for doing different cases of something. But what we want to do is to make a meta-algorithm that selects the best way to do it. So kind of all the human has to do is to define their goal, then it’s up to the system to do things in the way that’s fastest, most accurate, best looking.

Like here’s an example. There’s a function `Classify` that tries to classify things. You just type `Classify`.
Like here’s a very small training set of handwritten digits:

And this makes a classifier.

Which we can then apply to something we draw:

OK, well here’s another big thing about the Wolfram Language: coherence. Unification. We want to make everything in the language fit together. Even though it’s a huge system, if you’re doing something over here with geographic data, we want to make sure it fits perfectly with what you’re doing over there with networks.

I’ve spent a decent fraction of the last 25 years of my life implementing the kind of design discipline that’s needed. It’s been fascinating, but it’s been hard work. Spending all that time to make things obvious. To make it so it’s easy for people to learn and remember and guess. But you know, having all these building blocks fit together: that’s also where the most powerful new algorithms come from. And we’ve had a great time inventing tons and tons of new algorithms that are really only possible in our language—where we have all these different areas integrated.

And there’s actually a really fundamental reason that we can do this kind of integration. It’s because the Wolfram Language has this very fundamental feature of being symbolic. If you just type `x` into the language, it doesn’t give some error about *x* being undefined. `x` is just a thing—symbolic `x`—that the language can deal with. Of course that’s very nice for math.

But as far as I am concerned, one of the big discoveries is that this idea of a symbolic language is incredibly powerful for zillions of other things too. Everything in our language is symbolic. Math expressions.

Or entities, like Austin, TX:

Or like a piece of graphics. Here’s a sphere:

Here are a bunch of cylinders:

And because everything is just a symbolic expression, we could pick this up, and, like, do image processing on it:

You know, everything is just a symbolic expression. Like another example is interfaces. Here’s a symbolic slider:

Here’s a whole array of sliders:

You know, once everything is symbolic, there’s just a whole lot you can do. Here’s nesting some purely symbolic function *f*:

Here’s nesting, like, a function that makes a frame:

And here’s symbolically nesting, like, an interface element:

My gosh, it’s a fractal interface!
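The nesting examples can be sketched like this; the specific frame and interface element here are stand-ins for the ones shown in the talk:

```wolfram
NestList[f, x, 3]                (* {x, f[x], f[f[x]], f[f[f[x]]]} *)
Nest[Framed, "computation", 5]   (* a frame around a frame around... *)
Nest[Panel[Column[{Slider[], #}]] &, Slider[], 3]   (* nesting an interface element *)
```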

You know, once things are symbolic, it’s really easy to hook everything up. Like here’s a plot:

And now it’s trivial to make it interactive:

You can do that with anything:
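For example (a sketch, not the exact plot from the talk):

```wolfram
Plot[Sin[2 x], {x, 0, 10}]                          (* a static plot *)
Manipulate[Plot[Sin[n x], {x, 0, 10}], {n, 1, 5}]   (* one wrapper makes it interactive *)
```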

OK. Here’s another thing that can be made symbolic: documents.

The document I’m typing into here is just another symbolic expression. And you can create whatever you want in it symbolically.

Like here’s some text. We could twirl it around if we want to:

All just symbolic expressions.

OK. So here’s yet another thing that’s a symbolic expression: code. Every piece of code in the Wolfram Language is just a symbolic expression, that can be picked up and manipulated, and passed around, and run, wherever you want. That’s incredibly important for programming. Because it means you can build things in a really modular way. Every piece can stand on its own.

It’s also important for another reason: it’s a great way to deal with the cloud, sort of treating it as a giant active repository for symbolic lumps of computation. And in fact we’ve built this whole infrastructure for that, that I’m going to demo for the first time here today.

Well, let’s say we have a symbolic expression:

Now we can just deploy it to the Cloud like this:

And we’ve got a symbolic `CloudObject`, with a URL we can go to from anywhere. And there’s our material.

Now let’s make this not static content, but an actual program. And on the web, a good way to do that is to have an API. But with our whole notion of everything being symbolic, we can represent that as just another symbolic expression:

And now we can deploy that to the Cloud:

And we’ve got an Instant API. Now we can just fill in an API parameter `?size=150` and we can run this from anywhere on the web:

And every time what’ll happen is that you’ll be calling that piece of Wolfram Language code in the Wolfram Cloud, and getting the result back. OK.
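A sketch of an `APIFunction` deployment of this general shape (the `size` parameter and the disk graphic are illustrative, not the demo's exact code):

```mathematica
(* Illustrative API with one numeric parameter; calling the
   deployed URL with ?size=150 runs this in the Wolfram Cloud *)
CloudDeploy[
 APIFunction[{"size" -> "Number"},
  Rasterize[Graphics[Disk[], ImageSize -> #size]] &]]
```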

Here’s another thing to do: make a form. Just change the `APIFunction` to a `FormFunction`:
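Sketching the same illustrative example as a form, only the head changes:

```mathematica
(* Same signature as the API sketch, deployed as a web form *)
CloudDeploy[
 FormFunction[{"size" -> "Number"},
  Rasterize[Graphics[Disk[], ImageSize -> #size]] &]]
```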

Now what we’ve got is a form:

Let’s add a feature:

Now let’s fill some values into the form:

And when we press Submit, here’s the result:

OK. Let’s try a different case. Here’s a form that takes two cities, and draws a map of the path between them:

Let’s deploy it in the Cloud:

Now let’s fill in the form:

And when we press Submit, here’s what we get:

One line of code and an actual little web app! It’s got quite a bit of technology inside it. Like you see these fields. They’re what we call smart fields, which leverage our natural language understanding stack:

If you don’t give a city, here’s what happens:

When you do give a city, the system is automatically interpreting the inputs as city entities. Let me show you what happens inside. Let’s define a form that just returns a list of its inputs:

Now if we enter cities, we just get Wolfram Language symbolic entity objects. Which of course we can then compute with:
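A sketch of what that looks like, assuming two `"City"` smart fields (the field names are hypothetical):

```mathematica
(* Two "City" smart fields; the form returns its interpreted
   inputs, which arrive as Entity[...] objects *)
CloudDeploy[
 FormFunction[{"city1" -> "City", "city2" -> "City"},
  {#city1, #city2} &]]

(* Entities then compute like anything else, e.g. *)
GeoDistance[Entity["City", {"Austin", "Texas", "UnitedStates"}],
 Entity["City", {"Houston", "Texas", "UnitedStates"}]]
```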

All right, let’s try something else.

Let’s do a sort of modern programming example. Let’s make a silly app that shows us pictures through the eyes of a cat or a dog. OK, let’s build the framework:

Now let’s pull in an actual algorithm for dog vision. Color channels, and acuity.

OK. Let’s deploy with that:

Now we can send that over as an app. But first let’s build an icon for it:

And now let’s deploy it as a public app:

Now let’s go to the Wolfram Cloud app on an iPad:

And there’s the app we just published:

Now we click that icon—and there we have it: a mobile app running against the Wolfram Language in the Cloud:

And we can just use the iPad camera to input a picture, and then run the app on it:

Pretty neat, eh?

OK, but there’s more. Actually, let me tell you about the first product that’s coming out of our Wolfram Language technology stack. It should be available very soon. We call it the Wolfram Programming Cloud.

It’s all the stuff I’m showing you, but all happening in the Cloud. Including the programming. And, yes, there’s a desktop version too.

OK, so here’s the Programming Cloud:

Deploy from the Cloud. Define a function and just use `CloudDeploy[]`:

Or use the GUI:

Oh, another thing is to take CDF and deploy it to run in the Cloud.

Let’s take some code from the Wolfram Demonstrations Project. Actually, as it happens, this was the very first Demonstration I wrote when we were originally building that site:

Now here’s the deployed Cloud CDF:

It just needs a web browser. And gives arbitrary interactivity by running against the Wolfram Engine in the Cloud.

OK, well, using this technology, another product we’re building is our Data Science Platform.

And the idea is that data comes in, from all sorts of sources. And then we have all these automatic ways to analyze it. Using sort of a giant meta-algorithm. As well as using all the knowledge of the actual world that we have.

Well, then you can program whatever you want with the Wolfram Language. And in the end you can make reports. On demand, like from an API or an app. Or just on a schedule. And we can use our whole CDF symbolic documents to set up these reports.

Like here’s a template for a report on the state of my email inbox. It’s just defined as a symbolic document. That I go ahead and edit.

And then programmatically generate reports from:

You know, there are some really spectacular things we can do with data using our whole symbolic language technology stack. And actually just recently we realized that we can use it to make a very clean unification and generalization of SQL and NoSQL databases. And we’re implementing that in sort of four transparent levels. In memory. In files. In databases. And distributed.

But OK. Another thing is that we’ve got a really good way to represent individual pieces of data. We call it WDF—the Wolfram Data Framework.

And basically what it is, is taking the kind of algorithmic ontology that we built for Wolfram|Alpha—and that we know works—and exposing that. And using our natural language understanding to be able to take unstructured data, and automatically convert it to something that’s structured and computable. And that for example our Data Science Platform can do really good things with.

Well, OK. Here’s another thing. A rapidly increasing source of data out there in the world is connected devices. And we’ve been pretty deeply involved with those. And actually one thing I wanted to do recently was just to find out what devices there are out there. So we started our Connected Devices Project, to just curate the devices out there—just like we curate all sorts of other things in Wolfram|Alpha.

We have about 2500 devices in here now, growing every day. And, yes, we’re using WDF to organize this, and, yes, all this data is available from Wolfram|Alpha.

Well, OK. So there are all these devices. And they measure things and do things. And at some point they typically make web contact. And one thing we’re doing—with our Data Science Platform and everything—is to create a really smooth infrastructure for handling things from there on. For visualizing and analyzing and computing everything that comes from that Internet of Things.

You know, even for devices that haven’t yet made web contact, it can be a bit messier, but we’ve got a framework for handling those too. Like here’s an accelerometer connected to an Arduino:

Let’s see if we can get that data into the Wolfram Language. It’s not too hard:

And now we can immediately plot this:

So that’s connecting a device to the Wolfram Language. But there’s something else coming too. And that’s actually putting the Wolfram Language onto devices. And this is where 25 years of tight software engineering pays back. Because as soon as devices run things like Linux, we can run the Wolfram Language on them. And actually there’s now a preliminary version of the Wolfram Language bundled with the standard operating system for every Raspberry Pi.

It’s pretty neat being able to have little $25 devices that persistently run the Wolfram Language. And connect to sensors and actuators and things. And every little computer out there just gets represented as yet another symbolic object in the Wolfram Language. And, like, it’s trivial to use the built-in parallel computation capabilities of the Wolfram Language to pull data from lots of such machines.
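One plausible shape for that, assuming hypothetical Raspberry Pi hostnames and SSH access to each machine:

```mathematica
(* Hypothetical hostnames: launch a parallel kernel on each Pi
   over SSH, then evaluate on all of them at once *)
Needs["SubKernels`RemoteKernels`"]
LaunchKernels[RemoteMachine /@ {"pi1.local", "pi2.local"}]
ParallelEvaluate[$MachineName]
```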

And going forward, you can expect to see the Wolfram Language running on lots of embedded processors. There’s another kind of embedding we’re interested in too. And that’s software embedding. We want to have a Universal Deployment System for the Wolfram Language.

Given a Wolfram Language program, there are lots of ways to deploy it.

Here’s one: being able to call Wolfram Language code from other languages.

And we have a really easy way to do that. There’s a GUI, but in the Wolfram Language, you can just take an API function, and say: create embed code for this for Python. Or Java. Or whatever.
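A sketch using `EmbedCode` (the squaring API is illustrative, not from the talk):

```mathematica
(* Illustrative API: square a number, deploy it, then generate
   Python code that calls it in the Wolfram Cloud *)
api = CloudDeploy[APIFunction[{"x" -> "Number"}, #x^2 &]];
EmbedCode[api, "Python"]
```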

And you can then just insert that code in your external program, and it’ll call the Wolfram Cloud to get a computation done. Actually, there are going to be ways to do this from inside IDEs, like Wolfram *Workbench*.

This is really easy to set up, and as I said, it just calls the Wolfram Cloud to run Wolfram Language code. But there’s even another concept. There’s an Embedded Wolfram Engine that you can run locally too. And essentially the same code will then work. But now you’re running on your local machine, not in the Cloud. And things get pretty interesting, being able to put Embedded Wolfram Engines inside all kinds of software, to immediately add all that knowledge-based capability, and all those algorithms, and natural language and so on. Here’s what the Embedded Wolfram Engine looks like inside the Unity Game Engine IDE:

Well, talking of embedding, let me mention yet another part of our technology stack. The Wolfram Language is supposed to describe the world. And so what about describing devices and machines and so on.

Well, conveniently enough we have a product related to our *Mathematica* business called *SystemModeler*, which does large-scale system modeling and simulation:

And now that’s all getting integrated into the Wolfram Language too.

So here’s a representation of a rectifier circuit:

And this is all it takes to simulate this device:

And to plot parameters from the simulation:

And here’s yet another thing. We’re taking the natural language understanding capabilities that we created for Wolfram|Alpha, and we’re setting them up to be customizable. Now of course that’s big when one’s querying databases, or controlling devices. It’s also really interesting when one’s interacting with simulations. Looking at some machine out in the field, and being able to figure out things about it by talking to one’s mobile device, and then getting a simulation done in the Cloud.

There are lots of possibilities. But OK, so how can people actually use these things? Well, in the next couple of weeks there’ll be an open sandbox on the web for people to use the Wolfram Language. We’ve got a gallery of examples that gives good places to start.

Oh, as well as 100,000 live examples in the Wolfram Language documentation.

And, OK, the Wolfram Programming Cloud is also coming very soon. And it’ll be completely free to start developing with it, and even to do small-scale deployments.

So what does this mean?

Well, I think it’s pretty exciting. Because I think we just really changed the economics of going from algorithmic ideas to deployed products. If you come by our booth at the South By trade show, we’ll be doing a bunch of live coding there. And perhaps we’ll even be able to create little products for people right there. But I think our Programming Cloud is going to open up a surge of algorithmic startups. And I’ll be really interested to see what comes out.

OK. Here’s another thing that’s going to change I think: programming education. I think the Wolfram Language is sort of uniquely good for education. Because it’s a language where you get to do real things incredibly easily. You get to see computation at work in an incredibly powerful way. And, by the way, rather effortlessly see a bunch of modern computer science ideas… and immediately connect to the real world.

And the natural language aspect makes it really easy to get started. For serious programmers, I think having snippets of natural language programming, particularly in places where one’s connecting to the real world, is very powerful. But for people getting started, it’s really nice to be able to create things just with natural language.

Like here we can just say:

And have the code generated automatically.

We’re really interested in all the educational possibilities here. Certainly there’s the raw material for a zillion great hackathon projects.

You know, every summer for the past dozen years we’ve done a very successful summer school about the new kind of science I’ve worked on:

Where we’re effectively doing real-time science. We’ve also for a few years had a summer camp for high-school students:

And we’re using our experience here to build out a bunch of ways to use the Wolfram Language for programming education. You know, we’ve been involved in education for a long time—more than 25 years. *Mathematica* is incredibly widely used there. Wolfram|Alpha I’m happy to say has become sort of a universal tool for students.

There’s more and more coming.

Like here’s a version of Wolfram|Alpha in Chinese that’s coming soon:

Here’s a Problem Generator created with the Wolfram Language and available through Wolfram|Alpha Pro:

And we’re going to be doing all sorts of elaborate educational analytics and things through our Cloud system. You know, there are just so many possibilities. Like we have our CDF—Computable Document Format—that people have used for quite a few years to make interactive Demonstrations.

In fact here’s our site with nearly 10,000 of them:

And now with our Cloud system we can just run all of these directly in a web browser, using Cloud CDF, so they become easy to integrate into web learning environments. Like here’s an example that just got done by Versal:

Well, OK, at kind of the other end of things from education, there’s a lot going on in the corporate area. We’ve been doing large-scale custom deployments of Wolfram|Alpha for several years. But now with our Data Science Platform coming, we’ve got a kind of infinitely customizable version of that. And of course everything is integrated between cloud and desktop. And we’re going to have private clouds too.

But all this is just the beginning. Because what we’ve got with the whole Wolfram Language stack is a kind of universal platform for creating products. And we’ve got a whole sequence of products in the pipeline. It’s an exciting feeling having all this stuff that we’ve been doing for more than a quarter of a century come together like this.

Of course, it’s a big challenge dealing with all the possibilities. I mean, we’re just a little private company with about 700—admittedly very talented—people.

We’ve started spinning off companies. Like Touch Press which makes iPad ebooks.

And we’ll be doing more of that, though we need more entrepreneurs. And we might even take investors.

But, OK, what about the broader future?

I think about that a fair amount. I don’t have time to say much here. But let me say just a few things. In what we’ve done with computation and knowledge, we’re trying to take the knowledge of our civilization, and put it in computable form. So we can essentially inject it everywhere. In something like Wolfram|Alpha, we’re essentially doing on-demand computation. You ask for something, and Wolfram|Alpha will do it.

Increasingly, we’re going to have preemptive computation. We’re building towards that a lot with the Wolfram Language. Being able to model the world, and make predictions about what’s going to happen. Being able to tell you what you might want to do next. In fact, whenever you use the Wolfram Language interactively, you’ll see this little Suggestions Bar that’s using some fairly fancy computation to suggest what to do next.

But the real way to have that work is to use knowledge about you. I’ve been an enthusiast of personal analytics for a long time. Like here’s a 25-year history of my diurnal email rhythm:

And as we have more sensors and outsource more of our memory, our machines will be better and better at telling us what to do. And at some level the machines take over just because the humans tend to follow the auto-suggests they make.

But OK. Here’s something I realized recently. I’m interested in history, and I was visiting the archives of Gottfried Leibniz, who lived about 300 years ago, and had a lot of rather modern ideas about computing. But in his time he had only one—very primitive—proto-computer that he built:

Today we have billions of computers. So I was thinking about the extrapolation. And I realized that one day there won’t just be lots more computers—everything will actually be made of computers.

Biology has already figured out this idea a little bit. But one day it won’t be worth making anything out of dumb materials; instead everything will be made out of stuff that’s completely programmable.

So what does that mean? Well, of course it really blurs the distinction between hardware and software. And it means that these languages we create sort of become what everything is made of. You know, I’ve been interested for a long time in the fundamental theory of physics. And in fact with a bunch of science I’ve done, I think there’s a real possibility that we’ve finally got a new way to find such a theory. In effect a way to find our physical universe out in the computational universe of all possible universes.

But here’s the funny thing: once everything is made of computers, even though it’ll be really cool to find the fundamental theory of physics—and I still want to do it—it’s not going to matter so much. Because in effect that theory is just the machine code for the universe. But everything we deal with is on top of a layer that we can program however we want.

Well, OK, what does that mean for us humans? No doubt we’ll get to deploy in that sort of much-more-than-biology-programmable world. Where in effect you can just build any universe for yourself. I sort of imagine this moment where there’s a box of a trillion souls. Running in whatever pieces of the computational universe they want.

And what happens? Well, there’s lots of computation going on. But from the science I’ve done—and particularly the Principle of Computational Equivalence—I think it’s sort of a very Copernican situation. I don’t think there’s anything fundamentally different about that computation, from what goes on all over the universe, and even in rather simple programs.

And at some level the only thing that’s special about that particular box of a trillion souls is that it’s based on our particular history. Now, you know, I deal with all this tech stuff. But I happen to like people; I guess that’s why I’ve liked building a company, and mentoring lots of people. And in a sense seeing how much is possible, and how much can sort of be generalized and virtualized with technology, actually makes me think people are more important rather than less. Because when everything is possible, what matters is just what one wants or chooses to do.

It’s sort of a big version of what we’re doing with the Wolfram Language. Humans define the goals, then technology automatically tries to achieve them. And the more we can inject computation into everything, the more this becomes possible. And, you know, I happen to think that the injection of computation into everything will be a defining feature—perhaps the defining feature—of this time in history.

And I have to say I’m personally pleased to have lived at the right time to make some contribution to this. It’s a great privilege. And I’m very pleased to have been able to tell you a little bit about it here today.

Thank you very much.

To comment, please visit the original post at the Stephen Wolfram Blog »

Brocato teaches his doctoral students the importance of understanding formal and fundamental viewpoints, and his goal is to prepare them to collaborate across disciplines with others in the field of engineering.

Watch Brocato discuss how he and his students use *Mathematica* as a collaborative tool.

*This video is in French, so be sure to click the CC button in the lower right-hand corner for captions.*

As Brocato points out, *Mathematica* isn’t just for doctoral students. In addition to bridging the communication gap between architects and engineers, it allows first- and second-year students to develop a better understanding of the foundation of architecture.

View other *Mathematica* success stories on our Customer Stories pages.

Here’s a short video demo I just made. It’s amazing to me how much of this is based on things I hadn’t even thought of just a few months ago. Knowledge-based programming is going to be much bigger than I imagined…


In my previous blog post I described how to write MapReduce algorithms in *Mathematica* using the *HadoopLink* package. Now let’s go a little deeper and write a more serious MapReduce algorithm.

I’ve blogged in the past about some of the cool genomics features in Wolfram|Alpha. You can even search the human genome for DNA sequences you’re interested in. Biologists often need to search for the locations of DNA fragments they find in the lab, in order to know what animal the fragment belongs to, or what chromosome it’s from. Let’s use *HadoopLink* to build a genome search engine!

As before, we load the `HadoopLink` package:

And establish a link to the Hadoop master node:

We’ll grab the small human mitochondrial genome from `GenomeData` to illustrate the idea:

First we split the genome up into individual bases (A, T, C, or G):

These are going to be our key-value pairs (k1, v1). The values are the start position of each base in the genome:

{k1, v1} = {base, position}

Basically we just created an index for the genome.
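A sketch of the indexing step, with a short stand-in string instead of the full genome:

```mathematica
(* Stand-in string instead of the full mitochondrial sequence *)
genome = "GATCACAGGT";

(* Key-value pairs {base, position}: split into characters and
   pair each base with its 1-based position *)
index = MapIndexed[{#1, First[#2]} &, Characters[genome]]
(* -> {{"G", 1}, {"A", 2}, {"T", 3}, {"C", 4}, ...} *)
```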

Export the index to the Hadoop Distributed File System (HDFS):

For our query, we’ll use a sequence of 11 bases that we know is in the genome:

(We’re using a sequence with a lot of repetition in it in order to give our search algorithm a bit of an extra challenge.)

Our genome searcher should return the position 515 for this query:

Now we need a mapper and a reducer.

Recall from part 1 that the mapper takes a key-value pair (step 1) and emits another pair (step 2):

Our genome search mapper will receive bases from the genome index as input and will output key-value pairs for every location in the query where the index base occurs (you’ll see why in a minute):

(1) Input: {k1, v1} = {index base, genome position}

The output key is the genome position, and the output value is the query position:

(2) Output: {k2, v2} = {genome position, query position}

What’s the difference between the genome position and the query position? The query position is the position of the base in the query, whereas the genome position is a position in the whole genome.

For example, say the mapper gets a key-value pair with base A at position 517:

The query positions for base A in query sequence GCACACACACA are 3, 5, 7, 9, and 11:

Here’s the sequence with those positions highlighted:

The mapper only has a single key-value pair with one index base, in addition to the query sequence. It doesn’t have the rest of the genome to compare those to, so it has to find all of the potential ways the query could line up with base A at position 517:

Here the colors match up each of the As in the query (horizontal) with their resulting genome position (vertical). Take for example the A at base 3 in the query (in green). When you line it up with the A at index position 517, the query would start at genome position 515 (517 – 3 + 1 = 515) (also in green).

Similarly, the red base at query position 5 makes the query line up at genome position 513 (also in red). And the same goes for query position 7 with genome position 511 (purple), query position 9 with genome position 509 (orange), and query position 11 with genome position 507 (brown).

Only one of these alignments is correct. In this case, it’s query position 3 (in green) that makes the query line up with the genome. But the mapper doesn’t know that; it just emits all of the potential matches.
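The mapper's arithmetic can be sketched directly (variable names are illustrative):

```mathematica
query = "GCACACACACA";

(* The mapper sees one index base at one genome position... *)
indexBase = "A"; genomePos = 517;

(* ...finds every query position holding that base... *)
queryPositions = First /@ StringPosition[query, indexBase]
(* -> {3, 5, 7, 9, 11} *)

(* ...and emits one candidate alignment per occurrence:
   {genome position where the query would start, query position} *)
candidates = {genomePos - # + 1, #} & /@ queryPositions
(* -> {{515, 3}, {513, 5}, {511, 7}, {509, 9}, {507, 11}} *)
```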

Now, since the reducer collects on keys, it will collect all the bases that match at the same genome position:

Input: {k2, {v2 …}} = {genome position, {query positions …}}

Now for a given genome position, the reducer finds a match whenever the values form a complete sequence of query positions:

If the reducer finds a complete match, it emits the genome position:

Output: {k3, v3} = {query sequence, genome position}
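A sketch of that test (the function name `completeMatchQ` is made up for illustration):

```mathematica
(* A genome position is a hit when the collected query positions
   cover every position from 1 through the query length *)
completeMatchQ[queryLen_, queryPositions_List] :=
 Union[queryPositions] === Range[queryLen]

completeMatchQ[11, {3, 5, 7, 9, 11}]  (* -> False *)
completeMatchQ[11, Range[11]]         (* -> True *)
```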

Let’s run our genome searcher now, with the query GCACACACACA:

(See part 1 for an introduction to the `HadoopMapReduceJob` function.)

And import the genome matches from HDFS:

The matching genome position is 515, which is correct! Our genome search engine is working!

Now let’s run another search on a query that should match at two different positions on the genome:

This query should match at positions 10 and 2277:

Yep, it found both matches!

Now let’s scale this up to the whole human genome. The first step is to create the index, this time for the whole genome, not just the mitochondrion. To do that, I downloaded the whole human genome as text files from a government server and imported the text files into HDFS:

There’s one text file per chromosome containing the raw sequence for the chromosome:

I then ran a simple MapReduce job to create key-value pairs for the index on HDFS, which look like this:

[hs_ref_GRCh37.p13_alts.fa, 121] G

[hs_ref_GRCh37.p13_alts.fa, 122] A

[hs_ref_GRCh37.p13_alts.fa, 123] A

[hs_ref_GRCh37.p13_alts.fa, 124] T

[hs_ref_GRCh37.p13_alts.fa, 125] T

[hs_ref_GRCh37.p13_alts.fa, 126] C

[hs_ref_GRCh37.p13_alts.fa, 127] A

[hs_ref_GRCh37.p13_alts.fa, 128] G

[hs_ref_GRCh37.p13_alts.fa, 129] C

[hs_ref_GRCh37.p13_alts.fa, 130] T

One slight difference from above is that the key is now {chromosome, genome position} and the value is now the base at that position. I did that so I could put the chromosome in the key. So I’ll make a small change to the mapper to account for the new key:

The reducer is exactly the same as before.

Let’s run a search using the same sequence again:

This time we get matches for the whole genome:

And we can keep pushing the algorithm further. How about searching for approximate matches instead of exact matches? It’s a simple change to the reducer, where we specify the fraction of the query that needs to match:
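A sketch of that change (names are illustrative, not the post's actual reducer):

```mathematica
(* Accept a genome position when at least minFraction of the
   query's positions were matched, instead of all of them *)
approximateMatchQ[queryLen_, queryPositions_List, minFraction_] :=
 Length[Union[queryPositions]]/queryLen >= minFraction

approximateMatchQ[11, Range[9], 0.8]  (* -> True: 9/11 matched *)
```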

This isn’t the most efficient way to search a genome, but it shows how easy it is to prototype and run MapReduce algorithms in *Mathematica*. If you want to know more, check out my recent talk. And pull requests are always welcome on the *HadoopLink* GitHub Repo!

Download this post as a Computable Document Format (CDF) file.

So what shall we make? I think the best gift is a DIY one—especially if it says a lot without even making a sound. Below you see a 3D-printed silver earring in the shape of a sound wave recorded while a person is saying “I love you.”

This inspired me to do a little exploration into sound forms and 3D printing. We start from a sound recording. And because the word “valentine” also means “gift,” there is more than one meaning if you 3D-print the word and then say, while giving it to your loved one, “Here is my valentine for you.” I thought that in my case it would be symbolic to make *Mathematica* say “Valentine” to all our readers:

I recorded it with another instance of *Mathematica* with a nifty built-in Sound Recorder:

Which can be run as:
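A sketch of that call, per the `SystemDialogInput` usage described just below:

```mathematica
(* Opens the built-in sound-recording dialog; the result is a
   Sound object, assigned to the variable used in this post *)
raw = SystemDialogInput["RecordSound"]
```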

Once `SystemDialogInput` gets the data, it automatically produces an interface, which you can see above, that can be assigned to a variable I called “raw.” This “raw” data needs to be brushed up a bit. There are usually long, silent moments at the beginning and the end of an audio recording due to time spent pressing recording interface buttons. Deletion of these moments can be automated. The underlying structure of the sound player interface is a list of real values in the range {-1, 1}:

Where the last number 32,000 is the sampling rate. Here are a few ways to automatically crop silent background moments. One approach is based on `Split`, a function that can take a condition to isolate low-magnitude values in data.
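A sketch of the `Split`-based approach (the threshold value and function name are illustrative):

```mathematica
(* Group the samples into runs of quiet vs. non-quiet values,
   then drop the leading and trailing quiet runs *)
cropSilence[samples_List, thr_ : 0.01] :=
 Module[{runs = Split[samples, (Abs[#1] < thr) === (Abs[#2] < thr) &]},
  If[Max[Abs[First[runs]]] < thr, runs = Rest[runs]];
  If[Max[Abs[Last[runs]]] < thr, runs = Most[runs]];
  Flatten[runs]]

cropSilence[{0., 0., 0.5, -0.3, 0.2, 0., 0.}]
(* -> {0.5, -0.3, 0.2} *)
```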

The other method is based on `ImageCrop`, which is a built-in image processing algorithm to crop off extra background from images. The trick is turning the sound data temporarily into a two-pixel-wide image, cropping, and getting the data back. Results are similar.

Imagine we would like to 3D-print a shape reminiscent of the sound representation shown in the player interface. If we look carefully, this shape is not entirely symmetric with respect to the horizontal axis:

If we choose to neglect this asymmetry, one simple way is to pick the part that is above or below the horizontal axis and work only with it. We could approximate the outer shape by a sort of envelope curve and rotate that curve around the horizontal axis to get a symmetric 3D shape. The envelope is easily built by partitioning data in non-overlapping intervals and finding a maximum point in each such interval. How detailed the envelope is depends on the size of the partitioning bins.
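A sketch of the envelope construction (the function name is made up for illustration):

```mathematica
(* Bin the samples into non-overlapping intervals and take the
   max of |value| in each bin as an envelope point; Partition
   drops any incomplete final bin *)
envelope[samples_List, binSize_Integer] :=
 Max[Abs[#]] & /@ Partition[samples, binSize]

envelope[{0.1, -0.4, 0.2, 0.3, -0.1, 0.05}, 3]
(* -> {0.4, 0.3} *)
```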

There are a few ways to rotate the envelope around the horizontal axis; for example, we could use `Interpolation` and `RevolutionPlot3D`. But for better control of the smoothness of the shape, I decided to go with `BSplineFunction`, which can produce surfaces and high-dimensional manifolds. To better understand it, let’s consider a simple curve case. Here is a smooth spline through our envelope points.

I dropped a few end points to make the pendant’s ends look nicer. We also re-scaled the dataset in space. Our shape was spreading horizontally by thousands of units and vertically by two units. With the Wolfram Language, it’s easy to visualize by just changing the `AspectRatio` option for graphics, but in a 3D printer we will get a thin, overstretched object. Above we re-scaled the envelope to be about 30 units long and 10 wide—a size appropriate for a small piece of jewelry, perhaps a pendant if the unit is one millimeter.

Now we will place data points in 3D space and rotate the whole dataset six times with a Pi/6 increment around the horizontal axis to complete half of a full rotation. Then we will use the `BSplineFunction` for a smooth surface.

I explicitly specified the option `SplineClosed` so you would know how to produce closed surfaces in a specific direction, if need be. I decided to leave my surface fully open and make half a circular pendant in order for you to see how the option `PlotStyle` -> `Thickness`[x] influences the appearance of the surface. First of all, the option `Thickness` must be applied to any open surface for a 3D printer to process the data properly. The physical wall thickness is also an important characteristic that 3D printers and 3D-printing websites like Shapeways test objects against: generally, too-thin walls cannot be printed due to 3D printer and/or printing material limitations. `Thickness` expands surfaces in both directions—inward and outward. The image shows this clearly, because the red control points are now sunk inside the thick surface. In the case of a thin surface, the points are usually located outside the surface.

To deliver our object to the printing devices or services, we need to export it to STL format. The Wolfram Language supports this and quite a few other 3D formats. We need to be careful that the resulting shape is smooth enough, but is not overpopulated with unnecessary polygons, because that will make its byte size large and generally difficult to process for printing. For this we will explicitly specify the option `PlotPoints` to control the fineness of the mesh separately along and around the pendant. For better control and smoothness, we set `MaxRecursion -> 0` to prevent automated recursive subdivisions. Note how `RegionFunction` is used to make a hole in the pendant for a ring, link, thread, chain, or similar connector.

The object is ready to be exported to STL. We could import it back into *Mathematica* to see if it’s OK, but it’s also good to view it in some other modeling/printing software to ensure compatibility and check for faults. I will use the popular and free tool MeshLab:

This is a magnified version of a real object that is supposed to be three centimeters long. Now perhaps you can see another reason I decided to go with half of a full rotation. Viewed in the cross section, the cut wall outlines the vividly original sound wave shape. I also tested out the STL file by uploading it to the Shapeways site, which tests 3D objects for their capacity to be 3D-printed in various materials. For example, this specific 30-millimeter-long piece passed the test on wall thickness to be printable in raw bronze or silver.

It’s also possible to preview an STL object as if it were made of a specific material using the `Texture` functionality. One way to go about it is to find an image with roughly the color palette you’d like to apply to the object. Let’s import from Wikipedia this historic bronze medal from the 1980 Olympic Games in Moscow…

…and select a small, more or less uniform region that still has a rich enough color palette to represent realistic bronze. We will texture-map it onto every polygon of our STL object. Note that the `VertexTextureCoordinates` are randomly rotated to improve the realism of the texture by reducing its obvious repetitiveness:
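One way this mapping could look in code (a sketch under assumptions: `medal` is the imported medal photo—the URL is elided—and `pendant.stl` is the exported file; crop coordinates and the `"PolygonObjects"` import element are my choices, not necessarily the post's):

```mathematica
(* Crop a small, roughly uniform bronze patch from the medal photo *)
bronze = ImageTake[medal, {200, 260}, {420, 480}];

(* Import the STL triangles as a list of Polygon primitives, then attach
   randomly rotated texture coordinates to each one so the repeated patch
   is less obviously tiled *)
polys = Import["pendant.stl", "PolygonObjects"];
Graphics3D[{Texture[bronze], EdgeForm[],
   polys /. Polygon[pts_] :>
     Polygon[pts, VertexTextureCoordinates ->
       RotateLeft[{{0, 0}, {1, 0}, {1, 1}}, RandomInteger[2]]]},
  Lighting -> "Neutral", Boxed -> False]
```

Because the replacement rule is delayed (`:>`), `RandomInteger[2]` is re-evaluated for every polygon, so each triangle gets one of three rotations of the texture.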

If you are interested in exploring more about 3D printing with the Wolfram Language, check out the work of other Wolfram users, for example Henry Segerman or George W. Hart. For an excellent source of 3D-printing tips and tricks (such as surface normals, water-tightness, verification tools, splines, NURBS, etc.), I refer you to our free online video: Scan, Convert, and Print: Playing with 3D Objects in *Mathematica*. See the attached notebook below for the complete code, and feel free to experiment with voices, messages, and jewelry types. I’m sure in no time you’ll come up with your own artistic ideas. No matter how much programming you know, if you got to this point, you have already “spoken” a little bit of the Wolfram Language. And I hope it helps you get creative, productive, or even romantic, and spend your holidays with a few original ideas. Happy Valentine’s Day!

Download this post as a Computable Document Format (CDF) file.

Our students continue to impress us with their abilities and the work that they produce in just two short weeks. Last year students created projects on many different topics. Here are just a few:

* “Knight Paths,” Taylor McCreary

* “Blended Fonts,” Mary Giambrone

* “Playing Blackjack,” Seokin Yeh

Make sure to check out all of the cool Demonstrations created at the *Mathematica* Summer Camp.

What do you want to create this summer? Make your visions a reality by joining the *Mathematica* Summer Camp 2014. We look forward to seeing you in July!