The Practical Business of Ontology: A Tale from the Front Lines
Stephen Wolfram | July 19, 2017

The Philosophy of Chemicals

“We’ve just got to decide: is a chemical like a city or like a number?” I spent my day yesterday—as I have for much of the past 30 years—designing new features of the Wolfram Language. And yesterday afternoon one of my meetings was a fast-paced discussion about how to extend the chemistry capabilities of the language.

At some level the problem we were discussing was quintessentially practical. But as so often turns out to be the case for things we do, it ultimately involves some deep intellectual issues. And to actually get the right answer—and to successfully design language features that will stand the test of time—we needed to plumb those depths, and talk about things that usually wouldn’t be considered outside of some kind of philosophy seminar.

Thinker

Part of the issue, of course, is that we’re dealing with things that haven’t really ever come up before. Traditional computer languages don’t try to talk directly about things like chemicals; they just deal with abstract data. But in the Wolfram Language we’re trying to build in as much knowledge about everything as possible, and that means we have to deal with actual things in the world, like chemicals.

We’ve built a whole system in the Wolfram Language for handling what we call entities. An entity could be a city (like New York City), or a movie, or a planet—or a zillion other things. An entity has some kind of name (“New York City”). And it has definite properties (like population, land area, founding date, …).
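
To make that concrete, here's roughly what working with an entity looks like (a minimal sketch; the particular property names are illustrative, and Interpreter is used so one doesn't have to know the canonical entity key):

city = Interpreter["City"]["New York City"]   (* an entity representing New York City *)

EntityValue[city, {"Population", "Area"}]     (* ask for some of its properties *)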

We’ve long had a notion of chemical entities—like water, or ethanol, or tungsten carbide. Each of these chemical entities has properties, like molecular mass, or structure graph, or boiling point.

And we’ve got many hundreds of thousands of chemicals where we know lots of properties. But all of these are in a sense concrete chemicals: specific compounds that we could put in a test tube and do things with.
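
For example (a sketch; these are the kinds of properties the chemical entity domain provides, though the exact names may differ):

water = Entity["Chemical", "Water"];

EntityValue[water, "MolarMass"]       (* essentially computable from the structure *)
EntityValue[water, "BoilingPoint"]    (* a measured, bulk property *)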

But what we were trying to figure out yesterday is how to handle abstract chemicals—chemicals that we just abstractly construct, say by giving an abstract graph representing their chemical structures. Should these be represented by entities, like water or New York City? Or should they be considered more abstract, like lists of numbers, or, for that matter, mathematical graphs?

Well, of course, among the abstract chemicals we can construct are chemicals that we already represent by entities, like sucrose or aspirin or whatever. But here there’s an immediate distinction to make. Are we talking about individual molecules of sucrose or aspirin? Or about these things as bulk materials?

At some level it’s a confusing distinction. Because, we might think, once we know the molecular structure, we know everything—it’s just a matter of calculating it out. And some properties—like molar mass—are basically trivial to calculate from the molecular structure. But others—like melting point—are very far from trivial.

OK, but is this just a temporary problem that one shouldn’t base a long-term language design on? Or is it something more fundamental that will never change? Well, conveniently enough, I happen to have done a bunch of basic science that essentially answers this: and, yes, it’s something fundamental. It’s connected to what I call computational irreducibility. And for example, the precise value of, say, the melting point for an infinite amount of some material may actually be fundamentally uncomputable. (It’s related to the undecidability of the tiling problem; fitting in tiles is like seeing how molecules will arrange to make a solid.)

So by knowing this piece of (rather leading-edge) basic science, we know that we can meaningfully make a distinction between bulk versions of chemicals and individual molecules. Clearly there’s a close relation between, say, water molecules, and bulk water. But there’s still something fundamentally and irreducibly different about them, and about the properties we can compute for them.

At Least the Atoms Should Be OK

Alright, so let’s talk about individual molecules. Obviously they’re made of atoms. And it seems like at least when we talk about atoms, we’re on fairly solid ground. It might be reasonable to say that any given molecule always has some definite collection of atoms in it—though maybe we’ll want to consider “parametrized molecules” when we talk about polymers and the like.

But at least it seems safe to consider types of atoms as entities. After all, each type of atom corresponds to a chemical element, and there are only a limited number of those on the periodic table. Now of course in principle one can imagine additional “chemical elements”; one could even think of a neutron star as being like a giant atomic nucleus. But again, there’s a reasonable distinction to be made: almost certainly there are only a limited number of fundamentally stable types of atoms—and most of the others have ridiculously short lifetimes.

There’s an immediate footnote, however. A “chemical element” isn’t quite as definite a thing as one might imagine. Because it’s always a mixture of different isotopes. And, say, from one tungsten mine to another, that mixture might change, giving a different effective atomic mass.

And actually this is a good reason to represent types of atoms by entities. Because then one just has to have a single entity representing tungsten that one can use in talking about molecules. And only if one wants to get properties of that type of atom that depend on qualifiers like which mine it’s from does one have to deal with such things.

In a few cases (think heavy water, for example), one will need to explicitly talk about isotopes in what is essentially a chemical context. But most of the time, it’s going to be enough just to specify a chemical element.

To specify a chemical element you just have to give its atomic number Z. And then textbooks will tell you that to specify a particular isotope you just have to say how many neutrons it contains. But that ignores the unexpected case of tantalum. Because, you see, one of the naturally occurring forms of tantalum (180mTa) is actually an excited state of the tantalum nucleus, which happens to be very stable. And to properly specify this, you have to give its excitation level as well as its neutron count.

In a sense, though, quantum mechanics saves one here. Because while there are an infinite number of possible excited states of a nucleus, quantum mechanics says that all of them can be characterized just by two discrete values: spin and parity.

Every isotope—and every excited state—is different, and has its own particular properties. But the world of possible isotopes is much more orderly than, say, the world of possible animals. Because quantum mechanics says that everything in the world of isotopes can be characterized just by a limited set of discrete quantum numbers.
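
In Wolfram Language terms, this is roughly the difference between ElementData and IsotopeData; a sketch (property names like "Spin" and "Parity" are my assumptions about the isotope domain):

ElementData["Tungsten", "AtomicNumber"]   (* the element: just Z = 74 *)
ElementData["Tungsten", "AtomicMass"]     (* an effective mass, averaged over the isotope mixture *)

IsotopeData["Tungsten184", "AtomicMass"]  (* one specific isotope... *)
IsotopeData["Tungsten184", "Spin"]        (* ...with its own discrete quantum numbers *)
IsotopeData["Tungsten184", "Parity"]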

We’ve gone from molecules to atoms to nuclei, so why not talk about particles too? Well, it’s a bigger can of worms. Yes, there are the well-known particles like electrons and protons that are pretty easy to talk about—and are readily represented by entities in the Wolfram Language. But then there’s a zoo of other particles. Some of them—just like nuclei—are pretty easy to characterize. You can basically say things like: “it’s a particular excited state of a charm-quark-anti-charm-quark system” or some such. But in particle physics one’s dealing with quantum field theory, not just quantum mechanics. And one can’t just “count elementary particles”; one also has to deal with the possibility of virtual particles and so on. And in the end the question of what kinds of particles can exist is a very complicated one—rife with computational irreducibility. (For example, what stable states there can be of the gluon field is a much more elaborate version of something like the tiling problem I mentioned in connection with melting points.)

Maybe one day we’ll have a complete theory of fundamental physics. And maybe it’ll even be simple. But exciting as that will be, it’s not going to help much here. Because computational irreducibility means that there’s essentially an irreducible distance between what’s underneath, and what phenomena emerge.

And in creating a language to describe the world, we need to talk in terms of things that can actually be observed and computed about. We need to pay attention to the basic physics—not least so we can avoid setups that will lead to confusion later. But we also need to pay attention to the actual history of science, and actual things that have been measured. Yes, there are, for example, an infinite number of possible isotopes. But for an awful lot of purposes it’s perfectly useful just to set up entities for ones that are known.

The Space of Possible Chemicals

But is it the same in chemistry? In nuclear physics, we think we know all the reasonably stable isotopes that exist—so any additional and exotic ones will be very short-lived, and therefore probably not important in practical nuclear processes. But it’s a different story in chemistry. There are tens of millions of chemicals that people have studied (and, for example, put into papers or patents). And there’s really no limit on the molecules that one might want to consider, and that might be useful.

But, OK, so how can we refer to all these potential molecules? Well, in a first approximation we can specify their chemical structures, by giving graphs in which every node is an atom, and every edge is a bond.

What really is a “bond”? While it’s incredibly useful in practical chemistry, it’s at some level a mushy concept—some kind of semiclassical approximation to a full quantum mechanical story. There are some standard extra bits: double bonds, ionization states, etc. But in practice chemistry is very successfully done just by characterizing molecular structures by appropriately labeled graphs of atoms and bonds.
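
As a minimal sketch, a water molecule written as such a graph might look like this (a real representation would also need bond orders, charges and so on):

(* atoms as labeled vertices, bonds as edges *)
waterGraph = Graph[{1 <-> 2, 1 <-> 3}, VertexLabels -> {1 -> "O", 2 -> "H", 3 -> "H"}]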

OK, but should chemicals be represented by entities, or by abstract graphs? Well, if it’s a chemical one’s already heard of, like carbon dioxide, an entity seems convenient. But what if it’s a new chemical that’s never been discussed before? Well, one could think about inventing a new entity to represent it.

Any self-respecting entity, though, better have a name. So what would the name be? Well, in the Wolfram Language, it could just be the graph that represents the structure. But maybe one wants something that seems more like an ordinary textual name—a string. Well, there’s always the IUPAC way of naming chemicals with names like 1,1′-{[3-(dimethylamino)propyl]imino}bis-2-propanol. Or there’s the more computer-friendly SMILES version: CC(CN(CCCN(C)C)CC(C)O)O. And whatever underlying graph one has, one can always generate one of these strings to represent it.
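
Both kinds of strings are already obtainable for chemicals we know about; something like this (assuming these property names, which may differ slightly):

ChemicalData["Caffeine", "IUPACName"]   (* the systematic textual name *)
ChemicalData["Caffeine", "SMILES"]      (* the computer-friendly string form *)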

There’s an immediate problem, though: the string isn’t unique. In fact, however one chooses to write down the graph, it can’t always be unique. A particular chemical structure corresponds to a particular graph. But there can be many ways to draw the graph—and many different representations for it. And in fact even the (“graph isomorphism”) problem of determining whether two representations correspond to the same graph can be difficult to solve.
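
The Wolfram Language does have machinery for dealing with this; a sketch:

(* two different write-downs of the same connectivity *)
g1 = Graph[{1 <-> 2, 2 <-> 3, 3 <-> 1, 3 <-> 4}];
g2 = Graph[{"a" <-> "b", "b" <-> "c", "c" <-> "a", "a" <-> "d"}];

IsomorphicGraphQ[g1, g2]                     (* True: same underlying graph *)
CanonicalGraph[g1] === CanonicalGraph[g2]    (* True: a canonical form gives a unique representative *)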

What Is a Chemical in the End?

OK, so let’s imagine we represent a chemical structure by a graph. At first, it’s an abstract thing. There are atoms as nodes in the graph, but we don’t know how they’d be arranged in an actual molecule (and e.g. how many angstroms apart they’d be). Of course, the answer isn’t completely well defined. Are we talking about the lowest-energy configuration of the molecule? (What if there are multiple configurations of the same energy?) Is the molecule supposed to be on its own, or in water, or whatever? How was the molecule supposed to have been made? (Maybe it’s a protein that folded a particular way when it came off the ribosome.)

Well, if we just had an entity representing, say, “naturally occurring hemoglobin”, maybe we’d be better off. Because in a sense that entity could encapsulate all these details.

But if we want to talk about chemicals that have never actually been synthesized it’s a bit of a different story. And it feels as if we’d be better off just with an abstract representation of any possible chemical.

But let’s talk about some other cases, and analogies. Maybe we should just treat everything as an entity. Like every integer could be an entity. Yes, there are an infinite number of them. But at least it’s clear what names they should be given. With real numbers, things are already messier. For example, there’s no longer the same kind of uniqueness as with integers: 0.99999… is really the same as 1.00000…, but it’s written differently.

What about sequences of integers, or, for that matter, mathematical formulas? Well, every possible sequence or every possible formula could conceivably be a different entity. But this wouldn’t be particularly useful, because much of what one wants to do with sequences or formulas is to go inside them, and transform their structure. But what’s convenient about entities is that they’re each just “single things” that one doesn’t have to “go inside”.

So what’s the story with “abstract chemicals”? It’s going to be a mixture. But certainly one’s going to want to “go inside” and transform the structure. Which argues for representing the chemical by a graph.

But then there’s potentially a nasty discontinuity. We’ve got the entity of carbon dioxide, which we already know lots of properties about. And then we’ve got this graph that abstractly represents the carbon dioxide molecule.

We might worry that this would be confusing both to humans and programs. But the first thing to realize is that we can distinguish what these two things are representing. The entity represents the bulk naturally occurring version of the chemical—whose properties have potentially been measured. The graph represents an abstract theoretical chemical, whose properties would have to be computed.

But obviously there’s got to be a bridge. Given a concrete chemical entity, one of the properties will be the graph that represents the structure of the molecule. And given a graph, one will need some kind of ChemicalIdentify function, that—a bit like GeoIdentify or maybe ImageIdentify—tries to identify from the graph what chemical entity (if any) has a molecular structure that corresponds to that graph.
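
ChemicalIdentify doesn't exist yet, but just to make the idea concrete, here's a toy sketch (not the eventual design; the "EdgeRules" property name is an assumption, and a real version would also have to match atom types and bond orders, not just connectivity):

(* structure graph of a known chemical entity, by name *)
structureGraph[chem_String] := Graph[ChemicalData[chem, "EdgeRules"]]

(* toy "ChemicalIdentify": find the first candidate whose structure matches a given graph *)
chemicalIdentify[g_Graph, candidates_List] :=
  SelectFirst[candidates, IsomorphicGraphQ[structureGraph[#], g] &, Missing["NotIdentified"]]

chemicalIdentify[Graph[{1 <-> 2, 1 <-> 3, 1 <-> 4, 1 <-> 5}], {"Water", "Ammonia", "Methane"}]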

Philosophy Meets Chemistry Meets Math Meets Physics…

As I write out some of the issues, I realize how complicated all this may seem. And, yes, it is complicated. But in our meeting yesterday, it all went very quickly. Of course it helps that everyone there had seen similar issues before: this is the kind of thing that’s all over the foundations of what we do. But each case is different.

And somehow this case got a bit deeper and more philosophical than usual. “Let’s talk about naming stars”, someone said. Obviously there are nearby stars that we have explicit names for. And some other stars may have been identified in large-scale sky surveys, and given identifiers of some kind. But there are lots of stars in distant galaxies that will never have been named. So how should we represent them?

That led to talking about cities. Yes, there are definite, chartered cities that have officially been assigned names–and we probably have essentially all of these right now in the Wolfram Language, updated regularly. But what about some village that’s created for a single season by some nomadic people? How should we represent it? Well, it has a certain location, at least for a while. But is it even a definite single thing, or might it, say, devolve into two villages, or not a village at all?

One can argue almost endlessly about identity—and even existence—for many of these things. But ultimately it’s not the philosophy of such things that we’re interested in: we’re trying to build software that people will find useful. And so what matters in the end is what’s going to be useful.

Now of course that’s not a precise thing to know. But it’s like for language design in general: think of everything people might want to do, then see how to set up primitives that will let people do those things. Does one want some chemicals represented by entities? Yes, that’s useful. Does one want a way to represent arbitrary chemical structures by graphs? Yes, that’s useful.

But to see what to actually do, one has to understand quite deeply what’s really being represented in each case, and how everything is related. And that’s where the philosophy has to meet the chemistry, and the math, and the physics, and so on.

I’m happy to say that by the end of our hour-long meeting yesterday (informed by about 40 years of relevant experience I’ve had, and collectively 100+ years from people in the meeting), I think we’d come up with the essence of a really nice way to handle chemicals and chemical structures. It’s going to be a while before it’s all fully worked out and implemented in the Wolfram Language. But the ideas are going to help inform the way we compute and reason about chemistry for many years to come. And for me, figuring out things like this is an extremely satisfying way to spend my time. And I’m just glad that in my long-running effort to advance the Wolfram Language I get to do so much of it.


Our Readers’ Favorite Stories from 2016
John Moore | January 3, 2017

Story image collage

It’s been a busy year here at the Wolfram Blog. We’ve written about ways to avoid the UK’s most unhygienic foods, exciting new developments in mathematics and even how you can become a better Pokémon GO player. Here are some of our most popular stories from the year.

Today We Launch Version 11!

Geo projections in the Wolfram Language

In August, we launched Version 11 of Mathematica and the Wolfram Language. The result of two years of development, Version 11 includes exciting new functionality like the expanded map generation enabled by satellite images. Here’s what Wolfram CEO Stephen Wolfram had to say about the new release in his blog post:

OK, so what’s the big new thing in Version 11? Well, it’s not one big thing; it’s many big things. To give a sense of scale, there are 555 completely new functions that we’re adding in Version 11—representing a huge amount of new functionality (by comparison, Version 1 had a total of 551 functions altogether). And actually that function count is even an underrepresentation—because it doesn’t include the vast deepening of many existing functions.

Finding the Most Unhygienic Food in the UK

Map of Oxford

Using the Wolfram Language, Jon McLoone analyzes government data about food safety inspections to create visualizations of the most unhygienic food in the UK. The post is a treasure trove of maps and charts of food establishments that should be avoided at all costs, and includes McLoone’s greatest tip for food safety: “If you really care about food hygiene, then the best advice is probably just to never be rude to the waiter until after you have gotten your food!”

Finding Pokémon GO’s Shortest Tour to Compute ’em All!

Poké-Spikey

Bernat Espigulé-Pons creates visualizations of Pokémon across multiple generations of the game and then uses WikipediaData, GeoDistance and FindShortestTour to create a map to local Pokémon GO gyms. If you’re a 90s kid or an avid gamer, Espigulé-Pons’s Pokémon genealogy is perfect gamer geek joy. If you’re not, this post might just help to explain what all those crowds were doing in your neighborhood park earlier this year.
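
The computational core of that post is compact; a rough sketch, with made-up coordinates standing in for gym locations:

(* hypothetical gym locations *)
gyms = GeoPosition /@ {{40.7829, -73.9654}, {40.7484, -73.9857}, {40.7527, -73.9772}, {40.7061, -74.0086}};

{distance, order} = FindShortestTour[gyms];   (* order of visits minimizing total distance *)
GeoListPlot[gyms[[order]], Joined -> True]    (* plot the tour on a map *)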

Behind Wolfram|Alpha’s Mathematical Induction-Based Proof Generator

Induction-based proof generator

Connor Flood writes about creating “the world’s first online syntax-free proof generator using induction,” which he designed using Wolfram|Alpha. With a detailed explanation of the origin of the concept and its creation from development to prototyping, this post provides a glimpse into the ways that computational thinking applications are created.

An Exact Value for the Planck Constant: Why Reaching It Took 100 Years

EntityValue[{Pierre-Simon Laplace, Adrien-Marie Legendre, Joseph-Louis Lagrange, Antoine-Laurent de Lavoisier, Marquis de Condorcet}, {"Entity","Image"}]//Transpose//Grid

Wolfram|Alpha Chief Scientist Michael Trott returns with a post about the history of the discovery of the exact value of the Planck constant, covering everything from the base elements of superheroes to the redefinition of the kilogram.

Launching the Wolfram Open Cloud: Open Access to the Wolfram Language

Wolfram Open Cloud, Programming Lab and Development Platform

In January of 2016, we launched the Wolfram Open Cloud to—as Stephen Wolfram says in his blog post about the launch—“let anyone in the world use the Wolfram Language—and do sophisticated knowledge-based programming—free on the web.” You can read more about this integrated cloud-based computing platform in his January post.

On the Detection of Gravitational Waves by LIGO

Gravitational waves GIF

In February, the Laser Interferometer Gravitational-Wave Observatory (LIGO) announced that it had confirmed the first detection of a gravitational wave. Wolfram software engineer Jason Grigsby explains what gravitational waves are and why the detection of them by LIGO is such an exciting landmark in experimental physics.

Computational Stippling: Can Machines Do as Well as Humans?

Pointilism image of a beach

Silvia Hao uses Mathematica to recreate the Renaissance engraving technique of stippling: a drawing style that uses only points to mimic lines, edges and grayscale. Her post is filled with intriguing illustrations and is a wonderful example of the intersection of math and illustration/drawing.

Newest Wolfram Technologies Books Cover Range of STEM Topics

Wolfram tech books

In April, we reported on new books that use Wolfram technology to explore a variety of STEM topics, from data analysis to engineering. With resources for teachers, researchers and industry professionals and books written in English, Japanese and Spanish, there’s a lot of Wolfram reading to catch up on!

Announcing Wolfram Programming Lab

Wolfram Programming Lab startup screen

The year 2016 also saw the launch of Wolfram Programming Lab, an interactive online platform for learning to program in the Wolfram Language. Programming Lab includes a digital version of Stephen Wolfram’s 2016 book, An Elementary Introduction to the Wolfram Language, as well as Explorations for programmers already familiar with other languages and numerous examples for those who learn best by experimentation.

My Wolfram Tech Conference 2016 Highlights
Zach Littrell | November 4, 2016

Here are just a handful of things I heard while attending my first Wolfram Technology Conference:

  • “We had a nearly 4-billion-time speedup on this code example.”
  • “We’ve worked together for over 9 years, and now we’re finally meeting!”
  • “Coding in the Wolfram Language is like collaborating with 200 or 300 experts.”
  • “You can turn financial data into rap music. Instead, how about we turn rap music into financial data?”

As a first-timer from the Wolfram Blog Team attending the Technology Conference, I wanted to share with you some of the highlights for me—making new friends, watching Wolfram Language experts code and seeing what the Wolfram family has been up to around the world this past year.

Images from the 2016 Wolfram Tech Conference

Over one hundred talks

I was only able to attend one talk at a time, and with over a hundred talks going on over three days, there was no way I could see everything—but what I saw, I loved. Tuesday evening, Stephen Wolfram kicked off the event with his fantastic keynote presentation, giving an overview of the present and future of Wolfram Research, demoing live the new features of the Wolfram Language and setting the stage for the rest of the conference.

Stephen Wolfram keynote

Ask the Experts Panel Q&A

The nice thing about the Technology Conference is that if you’ve had a burning question about how something in the Wolfram Language works, you won’t get a better opportunity to ask the developers face to face. When someone in the audience asked about storing chemical data, the panel asked, “Is Michael Trott in the room?” And sure enough, Michael Trott was sitting a few seats down from me, and he stood up and addressed the question. Now that’s convenient.

Ask the Experts panel

The Channel Framework: Essentials and Applications

Probably my favorite speaker was Igor Bakshee, a senior research associate here at Wolfram. He described our new publish-subscribe service, the Channel Framework, which allows asynchronous communication between Wolfram systems without dealing with the details of specific senders and receivers. I especially appreciated Igor’s humor and patience as messages came in from someone in the audience: he raised his hands and insisted it was indeed someone else sending them.
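
The basic pattern is only a few lines; roughly this (a sketch, assuming a cloud-connected session, and assuming I have the handler details right, with each incoming message delivered as an association):

chan = CreateChannel[];                                  (* a channel in the cloud *)
listener = ChannelListen[chan, Print[#Message] &];      (* subscribe: print whatever arrives *)
ChannelSend[chan, "Hello from somewhere else"];         (* publish, from anywhere *)

RemoveChannelListener[listener]                          (* stop listening when done *)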

Channel Framework talk

Computational Approaches to Authorship Attribution in a Corpus of 12th-Century Latin Texts

This talk was the one I was most looking forward to, and it was exactly what I wanted. Jakub Kabala talked about how he used Mathematica to compare 12th-century Latin texts in his search to determine if the monk of Lido and Gallus Anonymus were actually the same author. Jakub’s talk will also be in our upcoming virtual conference, so be sure to check that out!

Jakub Kabala

Keeping the Vision: Computation Jockey

It would be downright silly of me to not mention the extremely memorable duo Thomas Carpenter and Daniel “Scantron” Reynolds. The team used Wolfram Language code and J/Link to infuse traditional disc jockey and video jockey art with abstract mathematics and visualizations. The experience was made complete when Daniel passed out special glasses to the audience.

Computation Jockey
glasses

Code competitions and awards

We had the best Wolfram Language programmers all in one place, so of course there had to be competitions! This included both our annual One-Liner Competition and our first after-hours live coding competition on Wednesday night. Phil Maymin won both competitions. Incidentally, in between winning competitions, Phil also gave an energetic presentation, “Sports and eSports Analytics with the Wolfram Language.” Thanks to everyone who participated. Be sure to check out our upcoming blog post on the One-Liner Competition.

Live coding competition

Thursday night at Stephen’s Keynote Dinner, six Wolfram Innovator Awards were given out. The Wolfram Innovator Award is our opportunity to recognize people and organizations that have helped bring Wolfram technologies into use around the world. Congratulations again to this year’s recipients, Bryan Minor, Richard Scott, Brian Kanze, Samer Adeeb, Maik Meusel and Ruth Dover!

Innovation Award winners

Wolfram social

Like many Wolfram employees around the world, I usually work remotely, so a big reason I was eager to go to the Wolfram Technology Conference was to meet people! I got to meet coworkers that I normally only email or talk on the phone with, and I got to speak with people who actually use our technologies and hear what they’ve been up to. After almost every talk, I’d see people shaking hands, trading business cards and exchanging ideas. It was easy to be social at the Technology Conference—everyone there shared an interest in and passion for Wolfram technologies, and the fun was figuring out what that passion was. And Wolfram gave everyone plenty of opportunities for networking and socializing, with lunches, dinners and meet-ups throughout the conference.

Social

See you next year!

Attending the Wolfram Technology Conference has been the highlight of my year. The speakers were great across the board, and a special thanks goes to the technical support team that dealt with network and display issues in stride. I strongly encourage everyone interested in Wolfram technologies to register for next year’s conference, and if you bump into me, please feel free to say hi!

Collage

New Wolfram Language Books
Zach Littrell | August 26, 2016

We are constantly surprised by what fascinating applications and topics Wolfram Language experts are writing about, and we’re happy to again share with you some of these amazing authors’ works. With topics ranging from learning to use the Wolfram Language on a Raspberry Pi to a groundbreaking book with a novel approach to calculations, you are bound to find a publication perfect for your interests.

Getting Started with Wolfram Language and Mathematica for Raspberry Pi, Essentials of Programming in Mathematica, Geospatial Algebraic Computations, Theory and Applications

Getting Started with Wolfram Language and Mathematica for Raspberry Pi, Kindle Edition

If you’re interested in the Raspberry Pi and how the Wolfram Language can empower the device, then you ought to check out this ebook by Agus Kurniawan. The author takes you through the essentials of coding with the Wolfram Language in the Raspberry Pi environment. Pretty soon you’ll be ready to try out computational mathematics, GPIO programming and serial communication with Kurniawan’s step-by-step approach.

Essentials of Programming in Mathematica

Whether you are already familiar with programming or completely new to it, Essentials of Programming in Mathematica provides an excellent example-driven introduction for both self-study and a course in programming. Paul Wellin, an established authority on Mathematica and the Wolfram Language, covers the language from first principles to applications in natural language processing, bioinformatics, graphs and networks, signal analysis, geometry, computer science and much more. With tips and insight from a Wolfram Language veteran and more than 350 exercises, this volume is invaluable for both the novice and advanced Wolfram Language user.

Geospatial Algebraic Computations, Theory and Applications, Third Edition

Advances in geospatial instrumentation and technology such as laser scanning have resulted in tons of data—and this huge amount of data requires robust mathematical solutions. Joseph Awange and Béla Paláncz have written this enhanced third edition to respond to these new advancements by including robust parameter estimation, multi-objective optimization, symbolic regression and nonlinear homotopy. The authors cover these disciplines with both theoretical explorations and numerous applications. The included electronic supplement contains these theoretical and practical topics with corresponding Mathematica code to support the computations.

Boundary Integral Equation Methods and Numerical Solutions: Thin Plates on an Elastic Foundation, Micromechanics with Mathematica, Tendências Tecnológicas em Computação e Informática (Portuguese), The End of Error: Unum Computing

Boundary Integral Equation Methods and Numerical Solutions: Thin Plates on an Elastic Foundation

For graduate students and researchers, authors Christian Constanda, Dale Doty and William Hamill present a general, efficient and elegant method for solving the Dirichlet, Neumann and Robin boundary value problems for the extensional deformation of a thin plate on an elastic foundation. Utilizing Mathematica’s computational and graphics capabilities, the authors discuss both analytical and highly accurate numerical solutions for these sorts of problems, describing the methodology and deriving its properties with full mathematical rigor.

Micromechanics with Mathematica

Seiichi Nomura demonstrates the simplicity and effectiveness of Mathematica as the solution to practical problems in composite materials, requiring no prior programming background. Using Mathematica’s computer algebra system to facilitate mathematical analysis, Nomura makes it practical to learn micromechanical approaches to the behavior of bodies with voids, inclusions and defects. With lots of exercises and their solutions on the companion website, students will be taken from the essentials, such as kinematics and stress, to applications involving Eshelby’s method, infinite and finite matrix media, thermal stresses and much more.

Tendências Tecnológicas em Computação e Informática (Portuguese)

For Portuguese students and researchers interested in technological trends in computation and informatics, this book is a real treat. The authors—Leandro Augusto Da Silva, Valéria Farinazzo Martins and João Soares De Oliviera Neto—gathered studies from both research and the commercial sector to examine the topics that mark current technological development. Read about how challenges in contemporary society encourage new theories and their applications in software like Mathematica. Topics include the semantic web, biometry, neural networks, satellite networks in logistics, parallel computing, geoprocessing and computation in forensics.

The End of Error: Unum Computing

Written with Mathematica by John L. Gustafson, one of the foremost experts in high-performance computing and the inventor of Gustafson’s law, The End of Error: Unum Computing explains a new approach to computer arithmetic: the universal number (unum). The book discusses this new number type, which encompasses all IEEE floating-point formats, obtains more accurate answers, uses fewer bits and solves problems that have vexed engineers and scientists for decades. With rich illustrations and friendly explanations, it takes no more than high-school math to learn about Gustafson’s novel and groundbreaking unum.

Want to find even more Wolfram technologies books? Visit Wolfram Books to discover books ranging across both topics and languages.

Scientific Bug Hunting in the Cloud: An Unexpected CEO Adventure
Stephen Wolfram | April 16, 2015

The Wolfram Cloud Needs to Be Perfect

The Wolfram Cloud is coming out of beta soon (yay!), and right now I’m spending much of my time working to make it as good as possible (and, by the way, it’s getting to be really great!). Mostly I concentrate on defining high-level function and strategy. But I like to understand things at every level, and as a CEO, one’s ultimately responsible for everything. And at the beginning of March I found myself diving deep into something I never expected…

Here’s the story. As a serious production system that lots of people will use to do things like run businesses, the Wolfram Cloud should be as fast as possible. Our metrics were saying that typical speeds were good, but subjectively when I used it something felt wrong. Sometimes it was plenty fast, but sometimes it seemed way too slow.

We’ve got excellent software engineers, but months were going by, and things didn’t seem to be changing. Meanwhile, we’d just released the Wolfram Data Drop. So I thought, why don’t I just run some tests myself, maybe collecting data in our nice new Wolfram Data Drop?

A great thing about the Wolfram Language is how friendly it is for busy people: even if you only have time to dash off a few lines of code, you can get real things done. And in this case, I only had to run three lines of code to find a problem.

First, I deployed a web API for a trivial Wolfram Language program to the Wolfram Cloud:

In[1]:= CloudDeploy[APIFunction[{}, 1 &]]

Then I called the API 50 times, measuring how long each call took (% here stands for the previous result):

In[2]:= Table[First[AbsoluteTiming[URLExecute[%]]], {50}]

Then I plotted the sequence of times for the calls:

In[3]:= ListLinePlot[%]

And immediately there seemed to be something crazy going on. Sometimes the time for each call was 220 ms or so, but often it was 900 ms, or even twice that long. And the craziest thing was that the times seemed to be quantized!

I made a histogram:

In[4]:= Histogram[%%, 40]

And sure enough, there were a few fast calls on the left, then a second peak of slow calls, and a third “outcropping” of very slow calls. It was weird!

I wondered whether the times were always like this. So I set up a periodic scheduled task to do a burst of API calls every few minutes, and put their times in the Wolfram Data Drop. I left this running overnight… and when I came back the next morning, this is what I saw:

Graph of API calls, showing strange, large-scale structure

Even weirder! Why the large-scale structure? I could imagine that, for example, a particular node in the cluster might gradually slow down (not that it should), but why would it then slowly recover?
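
For reference, the overnight setup was only a few lines of the kind sketched here (the bin, the deployed API and the ten-minute cadence are placeholders):

bin = CreateDatabin[];                       (* a fresh Wolfram Data Drop bin *)
api = CloudDeploy[APIFunction[{}, 1 &]];     (* the same trivial API as above *)

(* every 600 seconds, record a burst of 20 timed calls into the bin *)
RunScheduledTask[
  DatabinAdd[bin, <|"times" -> Table[First[AbsoluteTiming[URLExecute[api]]], {20}]|>], 600]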

My first thought was that perhaps I was seeing network issues, given that I was calling the API on a test cloud server more than 1000 miles away. So I looked at ping times. But apart from a couple of weird spikes (hey, it’s the internet!), the times were very stable.

Ping times

 

Something’s Wrong inside the Servers

OK, so it must be something on the servers themselves. There’s a lot of new technology in the Wolfram Cloud, but most of it is pure Wolfram Language code, which is easy to test. But there’s also generic modern server infrastructure below the Wolfram Language layer. Much of this is fundamentally the same as what Wolfram|Alpha has successfully used for half a dozen years to serve billions of results, and what webMathematica started using even nearly a decade earlier. But being a more demanding computational system, the Wolfram Cloud is set up slightly differently.

And my first suspicion was that this different setup might be causing something to go wrong inside the webserver layer. Eventually I hope we’ll have pure Wolfram Language infrastructure all the way down, but for now we’re using a webserver system called Tomcat that’s based on Java. And at first I thought that perhaps the slowdowns might be Java garbage collection. Profiling showed that there were indeed some “stop the world” garbage-collection events triggered by Tomcat, but they were rare, and were taking only milliseconds, not hundreds of milliseconds. So they weren’t the explanation.

By now, though, I was hooked on finding out what the problem was. I hadn’t been this deep in the trenches of system debugging for a very long time. It felt a lot like doing experimental science. And as in experimental science, it’s always important to simplify what one’s studying. So I cut out most of the network by operating “cloud to cloud”: calling the API from within the same cluster. Then I cut out the load balancer, that dispatches requests to particular nodes in a cluster, by locking my requests to a single node (which, by the way, external users can’t do unless they have a Private Cloud). But the slowdowns stayed.

So then I started collecting more-detailed data. My first step was to make the API return the absolute times when it started and finished executing Wolfram Language code, and compare those to absolute times in the wrapper code that called the API. Here’s what I saw:

The blue line shows the API-call times from before the Wolfram Language code was run; the gold line, after.

The blue line shows times before the Wolfram Language code is run; the gold line after. I collected this data in a period when the system as a whole was behaving pretty badly. And what I saw was lots of dramatic slowdowns in the “before” times—and just a few quantized slowdowns in the “after” times.

Once again, this was pretty weird. It didn’t seem like the slowdowns were specifically associated with either “before” or “after”. Instead, it looked more as if something was randomly hitting the system from the outside.
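
Here's a minimal sketch of that kind of instrumented API, returning its own start and end times so they can be compared with timestamps taken in the calling wrapper (the "JSON" result format and the trivial body are placeholders):

instrumented = CloudDeploy[APIFunction[{},
   Module[{start = AbsoluteTime[]},
     <|"start" -> start, "result" -> 1, "end" -> AbsoluteTime[]|>] &, "JSON"]];

(* wrapper-side timestamps bracketing the call *)
{before, response, after} = {AbsoluteTime[], URLExecute[instrumented], AbsoluteTime[]};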

One confusing feature was that each node of the cluster contained (in this case) 8 cores, with each core running a different instance of the Wolfram Engine. The Wolfram Engine is nice and stable, so each of these instances was running for hours to days between restarts. But I wondered if perhaps some instances might be developing problems along the way. So I instrumented the API to look at process IDs and process times, and then for example plotted total process time against components of the API call time:

Total process time plotted against components of the API call time

And indeed there seemed to be some tendency for “younger” processes to run API calls faster, but (particularly noting the suppressed zero on the x axis) the effect wasn’t dramatic.

 

What’s Eating the CPU?

I started to wonder about other Wolfram Cloud services running on the same machine. It didn’t seem to make sense that these would lead to the kind of quantized slowdowns we were seeing, but in the interest of simplifying the system I wanted to get rid of them. At first we isolated a node on the production cluster. And then I got my very own Wolfram Private Cloud set up. Still the slowdowns were there. Though, confusingly, at different times and on different machines, their characteristics seemed to be somewhat different.

On the Private Cloud I could just log in to the raw Linux system and start looking around. The first thing I did was to read the results from the “top” and “ps axl” Unix utilities into the Wolfram Language so I could analyze them. And one thing that was immediately obvious was that lots of “system” time was being used: the Linux kernel was keeping very busy with something. And in fact, it seemed like the slowdowns might not be coming from user code at all; they might be coming from something happening in the kernel of the operating system.
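
Reading those utilities into the Wolfram Language takes only a couple of lines; a sketch:

(* capture "ps axl" output and split it into rows of whitespace-separated fields *)
raw = RunProcess[{"ps", "axl"}, "StandardOutput"];
rows = DeleteCases[StringSplit[StringSplit[raw, "\n"], Whitespace], {}];

TableForm[Take[rows, UpTo[10]]]   (* eyeball the first few processes *)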

So that made me want to trace system calls. I hadn’t done anything like this for nearly 25 years, and my experience in the past had been that one could get lots of data, but it was hard to interpret. Now, though, I had the Wolfram Language.

Running the Linux “strace” utility while doing a few seconds of API calls gave 28,221,878 lines of output. But it took just a couple of lines of Wolfram Language code to knit together start and end times of particular system calls, and to start generating histograms of system-call durations. Doing this for just a few system calls gave me this:

System-call durations--note the clustering...

Interestingly, this showed evidence of discrete peaks. And when I looked at the system calls in these peaks they all seemed to be “futex” calls—part of the Linux thread synchronization system. So then I picked out only futex calls, and, sure enough, saw sharp timing peaks—at 250 ms, 500 ms and 1s:

System-call durations for just the futex calls--showing sharp timing peaks

But were these really a problem? Futex calls are essentially just “sleeps”; they don’t burn processor time. And actually it’s pretty normal to see calls like this that are waiting for I/O to complete and so on. So to me the most interesting observation was actually that there weren’t other system calls that were taking hundreds of milliseconds.
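
For reference, the post-processing was along the lines of this sketch (it assumes strace was run with the -T flag, so each line ends with the call's duration in angle brackets, which is a simplification of the actual start/end-time matching; the file name is a placeholder):

lines = ReadList["strace-output.txt", String];

(* extract {syscall name, duration in seconds} from each line *)
calls = Flatten[StringCases[lines,
    name : (WordCharacter ..) ~~ "(" ~~ ___ ~~ "<" ~~ t : NumberString ~~ ">" :>
      {name, ToExpression[t]}], 1];

Histogram[Cases[calls, {"futex", t_} :> t], 40]   (* durations of just the futex calls *)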

 

The OS Is Freezing!

So… what was going on? I started looking at what was happening on different cores of each node. Now, Tomcat and other parts of our infrastructure stack are all nicely multithreaded. Yet it seemed that whatever was causing the slowdown was freezing all the cores, even though they were running different threads. And the only thing that could do that is the operating system kernel.

But what would make a Linux kernel freeze like that? I wondered about the scheduler. I couldn’t really see why our situation would lead to craziness in a scheduler. But we looked at the scheduler anyway, and tried changing a bunch of settings. No effect.

Then I had a more bizarre thought. The instances of the Wolfram Cloud I was using were running in virtual machines. What if the slowdown came from “outside The Matrix”? I asked for a version of the Wolfram Cloud running on bare metal, with no VM. But before that was configured, I found a utility to measure the “steal time” taken by the VM itself—and it was negligible.

By this point, I’d been spending an hour or two each day for several days on all of this. And it was time for me to leave for an intense trip to SXSW. Still, people in our cloud-software engineering team were revved up, and I left the problem in their capable hands.

By the time my flight arrived there was already another interesting piece of data. We’d divided each API call into 15 substeps. Then one of our physics-PhD engineers had compared the probability for a slowdown in a particular substep (on the left) to the median time spent in that substep (on the right):

Bars on the left show the probability for a slowdown in particular substeps; bars on the right show the median time spent in each of those substeps

With one exception (which had a known cause), there was a good correlation. It really looked as if the Linux kernel (and everything running under it) was being hit by something at completely random times, causing a “slowdown event” if it happened to coincide with the running of some part of an API call.

So then the hunt was on for what could be doing this. The next suspicious thing noticed was a large amount of I/O activity. In the configuration we were testing, the Wolfram Cloud was using the NFS network file system to access files. We tried tuning NFS, changing parameters, going to asynchronous mode, using UDP instead of TCP, changing the NFS server I/O scheduler, etc. Nothing made a difference. We tried using a completely different distributed file system called Ceph. Same problem. Then we tried using local disk storage. Finally this seemed to have an effect—removing most, but not all, of the slowdown.

We took this as a clue, and started investigating more about I/O. One experiment involved editing a huge notebook on a node, while running lots of API calls to the same node:

Graph of system time, user time, and API time spent editing a huge notebook--with quite a jump while the notebook was being edited and continually saved
The result was interesting. During the period when the notebook was being edited (and continually saved), the API times suddenly jumped from around 100 ms to 500 ms. But why would simple file operations have such an effect on all 8 cores of the node?

 

The Culprit Is Found

We started investigating more, and soon discovered that what seemed like “simple file operations” weren’t—and we quickly figured out why. You see, perhaps five years before, early in the development of the Wolfram Cloud, we wanted to experiment with file versioning. And as a proof of concept, someone had inserted a simple versioning system named RCS.

Plenty of software systems out there in the world still use RCS, even though it hasn’t been substantially updated in nearly 30 years and by now there are much better approaches (like the ones we use for infinite undo in notebooks). But somehow the RCS “proof of concept” had never been replaced in our Wolfram Cloud codebase—and it was still running on every file!

One feature of RCS is that when a file is modified even a tiny bit, lots of data (even several times the size of the file itself) ends up getting written to disk. We hadn’t been sure how much I/O activity to expect in general. But it was clear that RCS was making it needlessly more intense.

Could I/O activity really hang up the whole Linux kernel? Maybe there’s some mysterious global lock. Maybe the disk subsystem freezes because it doesn’t flush filled buffers quickly enough. Maybe the kernel is busy remapping pages to try to make bigger chunks of memory available. But whatever might be going on, the obvious thing was just to try taking out RCS, and seeing what happened.

And so we did that, and lo and behold, the horrible slowdowns immediately went away!

So, after a week of intense debugging, we had a solution to our problem. And repeating my original experiment, everything now ran cleanly, with API times completely dominated by network transmission to the test cluster:

Clean run times! Compare this to the In[3] image above.

 

The Wolfram Language and the Cloud

What did I learn from all this? First, it reinforced my impression that the cloud is the most difficult—even hostile—development and debugging environment that I’ve seen in all my years in software. But second, it made me realize how valuable the Wolfram Language is as a kind of metasystem, for analyzing, visualizing and organizing what’s going on inside complex infrastructure like the cloud.

When it comes to debugging, I myself have been rather spoiled for years—because I do essentially all my programming in the Wolfram Language, where debugging is particularly easy, and it’s rare for a bug to take me more than a few minutes to find. Why is debugging so easy in the Wolfram Language? I think, first and foremost, it’s because the code tends to be short and readable. One also typically writes it in notebooks, where one can test out, and document, each piece of a program as one builds it up. Also critical is that the Wolfram Language is symbolic, so one can always pull out any piece of a program, and it will run on its own.

Debugging at lower levels of the software stack is a very different experience. It’s much more like medical diagnosis, where one’s also dealing with a complex multicomponent system, and trying to figure out what’s going on from a few measurements or experiments. (I guess our versioning problem might be the analog of some horrible defect in DNA replication.)

My whole adventure in the cloud also very much emphasizes the value we’re adding with the Wolfram Cloud. Because part of what the Wolfram Cloud is all about is insulating people from the messy issues of cloud infrastructure, and letting them instead implement and deploy whatever they want directly in the Wolfram Language.

Of course, to make that possible, we ourselves have needed to build all the automated infrastructure. And now, thanks to this little adventure in “scientific debugging”, we’re one step closer to finishing that. And indeed, as of today, the Wolfram Cloud has its APIs consistently running without any mysterious quantized slowdowns—and is rapidly approaching the point when it can move out of beta and into full production.


New Wolfram Technologies Books
Jenna Giuffrida | February 9, 2015

We are once again thrilled by the wide variety of topics covered by authors around the world using Wolfram technologies to write their books and explore their disciplines. These latest additions range from covering the basics for students to working within specialties like continuum mechanics.

Books

Computational Explorations in Magnetron Sputtering
Magnetron sputtering is a widely used industrial process for depositing thin films. E. J. McInerney walks you through the physics of magnetron sputtering in a step-by-step fashion using Mathematica syntax and functions. The reader is encouraged to explore this fascinating topic by actively following along with a series of simple computer models. By working through these models, readers can build their intuition of physical vapor deposition (PVD) processes and develop a deeper appreciation of the underlying physics.

Continuum Mechanics using Mathematica: Fundamentals, Methods, and Applications, second edition
In this second edition of Continuum Mechanics using Mathematica, Antonio Romano and Addolorata Marasco take a methodological approach that familiarizes readers with the mathematical tools required to correctly define and solve problems in continuum mechanics. It provides a solid basis for a deeper study of more challenging and specialized problems related to nonlinear elasticity, polar continua, mixtures, piezoelectricity, ferroelectricity, magneto-fluid mechanics, and state changes.

Mathematica Data Visualization: Create and Prototype Interactive Data Visualizations Using Mathematica
Nazmus Saquib begins by introducing you to the Mathematica environment and the basics of dataset loading and cleaning. You will then learn about different kinds of widely used datasets, time series, and scientific, statistical, information, and map visualizations. Each topic is demonstrated by walking you through an example project. Along the way, the dynamic interactivity and graphics packages are also introduced. This book teaches you how to build visualizations from scratch, quickly and efficiently.

Books

Interactive Computational Geometry, A Taxonomic Approach
This ebook by Jim Arlow is an interactive introduction to some of the fundamental algorithms of computational geometry. The code base, which is in the Wolfram Language, is integrated into the text and is fully executable. This book is delivered as an interactive CDF (Computable Document Format) file that is viewable in the free CDF Player available from Wolfram, or can be opened in Mathematica. Readers are encouraged to have a copy of Mathematica in order to get the most out of this text.

学生が学ぶMathematica入門 (Introduction to Mathematica for Students)
Souji Otabe provides a quick introduction to Mathematica for students. Engineers and scientists need to know about symbolic computation, but they also need to know numeric computation to analyze experimental data. In this ebook, Mathematica is treated as a general problem-solving tool for science and engineering students. Readers will study basic operations of Mathematica first, and then they will learn how Mathematica can be applied to engineering and science.

Mathematik für Informatiker, Band 2: Analysis und Statistik, 3. Auflage (Mathematics for Computer Scientists, Volume 2: Analysis and Statistics, Third Edition)
The second edition of Gerald and Susanne Teschl’s textbook introduces mathematical foundations that are concise, yet vivid and easy to follow. They are illustrated using numerous worked-out examples mixed with applications to computer science, historic background, and connections to related areas. The end of every chapter offers a self-test and warmup problems (with detailed solutions), as well as more advanced problems. Complementary sections introduce Mathematica to visualize the subjects taught, thereby fostering comprehension of the material.

Books

Neoclassical Physics
In this introductory text by Mark A. Cunningham, physics concepts are introduced as a means of understanding experimental observations, not as a sequential list of facts to be memorized. Numerous exercises are provided that utilize Mathematica software to help students explore how the language of mathematics is used to describe physical phenomena. Students will obtain much more detailed information about covered topics and will also gain proficiency with Mathematica, a powerful tool with many potential uses in subsequent courses.

Pathways Through Applied and Computational Physics
This book is the collaborative effort of Nicolo Barbero, Matteo Delfino, Carlo Palmisano, and Gianfranco Zosi to illustrate the role that different branches of physics and mathematics play in the execution of actual experiments. The unique feature of the book is that all the subjects addressed are strictly interconnected within the context of the execution of a single experiment with very high accuracy, namely the redetermination of the Avogadro constant N_A, one of the fundamental physical constants. Another essential feature is the focus on the role of Mathematica, an invaluable, fully integrated software environment for handling diverse scientific and technical computations.

Probability: An Introduction with Statistical Applications, second edition
Thoroughly updated, John J. Kinney’s second edition of this title features a comprehensive exploration of statistical data analysis as an application of probability. The new edition provides an introduction to statistics with accessible coverage of reliability, acceptance sampling, confidence intervals, hypothesis testing, and simple linear regression. Encouraging readers to develop a deeper intuitive understanding of probability, the author presents illustrative geometrical presentations and arguments without the need for rigorous mathematical proofs. This book features an appendix dedicated to the use of Mathematica and a companion website containing the referenced data sets.

Summer Internships http://blog.wolfram.com/2014/10/16/summer-internships/ http://blog.wolfram.com/2014/10/16/summer-internships/#comments Thu, 16 Oct 2014 18:13:02 +0000 Jenna Giuffrida http://blog.internal.wolfram.com/?p=20927 Summer has drawn to a close, and so too have our annual internships. Each year Wolfram welcomes a new group of interns to work on an exciting array of projects ranging all the way from Bell polynomials to food science. It was a season for learning, growth, and making strides across disciplinary and academic divides. The Wolfram interns are an invaluable part of our team, and they couldn’t wait to tell us all about their time here. Here are just a few examples of the work that was done.

2014 summer interns

Paco Jain
Wolfram|Alpha Scientific Content,
Wolfram|Alpha
“This summer, I worked on adding scientific content to the physical systems domain in Wolfram|Alpha. While there is a lot to learn, everyone I worked with seemed enthusiastic to help me get up to speed, and I was able to form several valuable mentoring relationships. I also felt that I was given the resources and responsibility I needed to allow me to make meaningful contributions to the Wolfram|Alpha product. The experience has me already thinking about pursuing a full-time position at Wolfram!” As of October 2014, Paco is employed at Wolfram Research full time.

Daniel McDonald
Wolfram|Alpha Scientific Content,
Bell Polynomials and Recursive Algorithms
“This summer at Wolfram|Alpha I worked as the Special Functions Intern. My primary project was reading mathematical literature in order to extract and verify formulas that could be useful for The Wolfram Functions Site as well as for possible Mathematica implementation. The most interesting part of my work involved creating a compendium of information about Mathematica’s BellY function that computes various types of Bell polynomials, which are used in Faà di Bruno’s formula for computing arbitrary derivatives of the composition f(g) (as well as in generalizations of this formula for computing arbitrary derivatives of compositions of arbitrary depth). I devised an original functional recurrence that suggested a quick recursive algorithm for computing generalized Bell polynomials; as this algorithm ran much faster than Mathematica’s at the time, it was implemented into Mathematica 10.0.1. This recurrence and thus the algorithm (with different base cases) can be applied in a more general environment, and I am currently drafting a paper to submit to an algorithms journal.”
Mark Peterson
Scientific Information Group,
Wolfram Demonstrations Project

“During my internship in the Scientific Information Group at Wolfram Research, my work has primarily been centered on the Wolfram Demonstrations Project. Essentially, Demonstrations are self-contained programs written in the Wolfram Language that are designed to appeal to the user in a highly intuitive and interactive way. Whether working on the Project directly or on alternate applications for its material, my time has been spent developing this sort of content.”
Visualizing the Thomson Problem
Jake Wood
Mathematica Algorithms R&D,
Mathematica GeoGraphics
“Joining the Wolfram team earlier this summer was an exciting professional milestone for me. I am a big fan of not only the software that has come from Wolfram, but also the mission and ambition to proliferate and nurture big ideas. My patient mentor explained that I was to figure out how to make the generated maps in GeoGraphics (new in Mathematica 10) move around and update from mouse clicking and dragging. Additionally, the maps needed to be zoomable, similar to maps online used for navigation. Right now my prototypes deal with the maps themselves instead of the verbose layers of graphics data that Mathematica is capable of imbuing. In the future, though, who knows. Getting the panning and zooming to work proved a difficult task; however, the brunt of the summer was spent on improving the performance speed. No one wants to use an interactive map that is insufferably unresponsive. The utility of this application is pretty clear, as it is similar to programs that people already use daily.”
Jessica Zhang
User Experience,
WolframTones

“People would think as a User Experience Designer I would only be designing detailed features within a product or workflow. However, at Wolfram, I not only got to do those things, I also got to take part in the bigger decision-making design processes, even as an intern. I was given the opportunity to learn a variety of skills that are important and also at the cutting edge of the field. Technical skills include wireframing, wireflowing, diagramming, and interface design. Oh, and also using the espresso machine!”
Andrew Blanchard
Wolfram|Alpha Scientific Content,
Named Physical Effects

“For my internship with Wolfram Alpha, I assembled a list of named physical effects. A typical effect provides a link between measurable physical quantities, which are already incorporated into Wolfram|Alpha. Thus, making information about known physical effects computable enables the exploration of relationships between measurable quantities. In addition, the searchable data provides a window into the relationship between the discovery of new effects and advances in the field of physics. By making scientific information searchable, Wolfram|Alpha is providing a wonderful service for researchers, students, and anyone curious about exploring science.”
Surojit Ganguli
Wolfram|Alpha Socioeconomic Content,
Computational Capabilities
“I was part of the team that was involved in increasing the computational capabilities of Wolfram|Alpha in the domain of vehicle dynamics. As a Computational Science and Engineering Minor at UIUC, the opportunity to explore the various ways in which computations are being performed at Wolfram was in itself a rewarding experience. As an additional bonus, I definitely improved in the area of functional programming by using Mathematica.”
Ying Qin
Wolfram|Alpha Scientific Content,
Food Data

“I’ve been working on expanding food-related information in the Wolfram Knowledgebase. Among other things, this included the characterization and classification of food; I did research involving USDA data and other data sources. I was also working on expanding the food glossary, which gives a more detailed description of the available content. Furthermore, using my knowledge as a Food Science student, I was able to do things like classify fatty acids into groups. My advice to prospective interns is that you shouldn’t hesitate to apply even though your major is not computer science or engineering. As a Food Science major, I was happy to get involved here, and felt like it was a truly valuable experience.”

It’s been an amazing summer all around, and we couldn’t be happier with the contributions our interns have made. While we are sad to see some of them go, we are excited by the new talent that has been added to our team and can’t wait to see what next year will bring!

Q&A with SpinDynamica Creator Malcolm Levitt http://blog.wolfram.com/2014/04/16/qa-with-spindynamica-creator-malcolm-levitt/ http://blog.wolfram.com/2014/04/16/qa-with-spindynamica-creator-malcolm-levitt/#comments Wed, 16 Apr 2014 15:15:54 +0000 Wolfram Blog Team http://blog.internal.wolfram.com/?p=18923 Professor Malcolm Levitt is Head of Magnetic Resonance at the University of Southampton and a leader in the field of magnetic resonance research. In the early 2000s, he began programming SpinDynamica—a set of Mathematica packages that run spin dynamical calculations—to explore magnetic resonance concepts and develop experiments.

Composite pulse animation

SpinDynamica is an open-source package that Professor Levitt continues to work on as a hobby in his spare time, but the SpinDynamica community also contributes add-ons to bring additional functionality to researchers.

Professor Levitt graciously agreed to answer a few of our questions about his work, Mathematica, and SpinDynamica. He’s hopeful that as word spreads, others will submit add-ons that enhance the core functionality of SpinDynamica.

What is your history in the field of magnetic resonance?

I’ve been researching in magnetic resonance since I was an undergraduate project student in Oxford in the late 1970s. I went on to do a PhD in Oxford, researching in nuclear magnetic resonance (NMR) with Ray Freeman. After that, I went off on a long sequence of postdoctoral positions. I worked with Richard Ernst in Zürich, who later won the Nobel Prize for his work on NMR.

I researched at MIT for about five years, and then became a professor in Stockholm, Sweden, before moving back to the UK in 2001. I now lead a magnetic resonance section at the University of Southampton. Most of my research has involved developing the theory and technology of NMR. It’s an amazingly rich field, since NMR is time-dependent quantum mechanics in action, and allows an instant coupling between a theoretical idea, a numerical simulation, and a real experiment.

There are now many thousands of distinct NMR experiments, involving different sequences of radio frequency pulses and switched magnetic fields, providing information on everything from biomolecular structure to cancer diagnosis to quantum computing. It really is a staggeringly versatile field of research, and I feel very lucky to have stumbled into it and to have made my career in it.

When did you begin working with Mathematica?

I started using Mathematica seriously for magnetic resonance research in the 1990s in Stockholm. During my PhD and in Zürich, I had written a lot of low-level code for controlling an NMR spectrometer, as well as graphical FORTRAN simulations of NMR experiments. Later on, while I was at MIT, I developed a lot of FORTRAN computer code for simulating magnetic resonance experiments, which I tried to make as general as possible. However, I always recognized the limitations and inelegance of the language.

When I first encountered Mathematica I remember a sense of recognition like, “Wow, this is exactly the computer language I would have invented myself if I had known how.” However, I do remember that at that time Mathematica seemed slow in execution, and there would be times of frustration. Nevertheless, I stuck with it. Happily, the progress of hardware and continued development of Mathematica made my commitment worthwhile.

What can you tell us about SpinDynamica and how you created it?

I started to use Mathematica seriously for NMR research in Stockholm, partly in combination with a book that I was writing (Spin Dynamics), for which I wanted to generate informative graphics and check the equations. At that time, I did experiment with creating a set of modules for numerical simulations of NMR experiments, as well as generating analytical results, but I did not develop this very far.

Several other numerical simulation packages for NMR came out. Although they were numerically fast for specific classes of problems, I still felt that they were not as general and as elegant as I would like. Furthermore, our group was getting into experiments that required certain types of numerical simulation that were not catered for. So at some point in the early 2000s I set about seriously developing general packages for both symbolic and numerical calculations of magnetic resonance, within the Mathematica environment.

3d trajectories plot

How do you use SpinDynamica in your research?

Mathematica in general, and SpinDynamica in particular, have become completely central to how I develop and test theoretical ideas. So it’s not as if I develop an idea and then test it with SpinDynamica—I actually use SpinDynamica as a tool to develop the idea in the first place. It’s a bit hard to explain, but it works for me. There’s something about Mathematica that seems to match perfectly the way I think and create.

Is there an interesting example or discovery you’ve come across while working with Mathematica and SpinDynamica?

A central topic of research in our group concerns something called long-lived spin states. These are certain quantum states of coupled magnetic nuclei that are very weakly coupled to the environment. They may be used for storing quantum information in nuclear spin systems for long times. (We have demonstrated over 30 minutes, which is an incredibly long time for a quantum effect in a room-temperature liquid.)

In the jargon of magnetic resonance, the equilibration of the nuclear quantum system with the environment is called relaxation. So these special nuclear spin states have very slow relaxation. It is a surprising fact, but true, that although the relaxation theory of NMR has been extensively developed with thousands of research papers since the 1960s and several Nobel prizes along the way, the existence of these states had been overlooked.

It was only when the symmetry properties of the relaxation were examined with Mathematica (using a precursor of SpinDynamica) that the presence of such states was predicted, and then demonstrated experimentally by our group in 2004. Our group is intensively researching the theory of these states and their exploitation in practical NMR experiments and, hopefully, in clinical MRI as well. Amongst other things, we are working with collaborators to develop agents that use long-lived states to detect cancer.

What impact do you think SpinDynamica could have on future magnetic resonance research?

That is hard to predict. There are several simulation packages in the community, many of which require less user intelligence, and which have a much faster execution for specific problems, than SpinDynamica. SpinDynamica is immensely powerful, but it does require that users have a good theoretical understanding in order to use it.

That weakness could be addressed by including additional packages for simulating common experimental situations without major theoretical understanding. The problem is that, at the moment, SpinDynamica remains a hobby project that is developed almost exclusively by me in my spare time. So although it is a superb tool for our particular branch of research, which demands a high theoretical level, there are many aspects that are rather undeveloped, including some important functionality that I have simply never had time to develop.

Nevertheless, I think the core functionality of SpinDynamica is powerful and stable, and I hope that the community will take it and build on it. That is slowly starting to happen. I have taught several graduate-level courses using SpinDynamica to explain the quantum-mechanical concepts of magnetic resonance, so there is take-up by a small but growing group of scientists. I think the impact will become much greater when I find time to write up a proper scientific paper on the architecture and functionality of SpinDynamica. Unfortunately my schedule makes that unlikely to happen soon.

Get Hacking with Wolfram Technologies http://blog.wolfram.com/2014/04/10/get-hacking-with-wolfram-technologies/ http://blog.wolfram.com/2014/04/10/get-hacking-with-wolfram-technologies/#comments Thu, 10 Apr 2014 18:09:59 +0000 Wolfram Blog Team http://blog.internal.wolfram.com/?p=18940 It probably comes as no surprise that Wolfram has been asked to participate in a number of hackathons recently, including the upcoming HackIllinois. There’s a natural fit between our pioneering, agile approach to technology development and the growing hackathon phenomenon, in which coders come together for a short but intensive time—either individually or in teams—to create new and unique software or hardware applications.

HackIllinois

Last month while at SXSW 2014, Wolfram helped provide support for Slashathon, the first-ever music-focused hackathon. The event was hosted by Slash from Guns N’ Roses, and the winning hack will be used to help release his new album. Wolfram provided mentoring for the competition in the form of onsite coding experts and technology access.

Also last month, Wolfram supported both hackBCA and HackPrinceton in New Jersey for high school and college students, respectively. In addition to having Wolfram programming experts available as mentors, Stephen Wolfram attended both of these events, where he spoke about the Wolfram Language and what the Wolfram technology stack is making possible.

Stephen Wolfram gives talk about Wolfram Language

At hackBCA, several projects made use of the Wolfram|Alpha API and the emerging Wolfram Cloud platform. We also saw some neat uses of Wolfram technologies at HackPrinceton. The Wolf Cocoa team developed a solution for making OS X apps by creating Wolfram Language bindings to the Objective-C runtime. Another group, Pokebble, used the Wolfram|Alpha API to enable users to play Pokémon on the wearable Pebble smart watch. And the third-place overall software project winner, α-TeX, used the Wolfram Cloud to enable users to embed computed results into LaTeX.

Graphic command

This weekend Wolfram is again going where the coders are. Which isn’t far, as HackIllinois—the first-ever student-run hackathon at the University of Illinois at Urbana-Champaign—will be happening right down the road from Wolfram’s headquarters. Over 1,000 college students from across the country will be converging on the UIUC campus to imagine, learn, and launch their latest ideas as mobile apps, web apps, or other software and hardware projects.

As an event sponsor, Wolfram will be on hand to give a tech talk and demo our technologies, and to provide other event support. We’re excited to see what the winning teams can produce in only 36 hours!

Whether for educational purposes or fully commercial applications, we’re glad to see hackathons catching on as a way to develop the next generation of cutting-edge programmers. Maybe we’ll see you or your students at future hackathons. In the meantime, happy coding!

Code Length Measured in 14 Languages http://blog.wolfram.com/2012/11/14/code-length-measured-in-14-languages/ http://blog.wolfram.com/2012/11/14/code-length-measured-in-14-languages/#comments Wed, 14 Nov 2012 15:29:46 +0000 Jon McLoone http://blog.internal.wolfram.com/?p=12446 Update: See our latest post on How the Wolfram Language Measures Up.

I stumbled upon a nice project called Rosetta Code. Their stated aim is “to present solutions to the same task in as many different languages as possible, to demonstrate how languages are similar and different, and to aid a person with a grounding in one approach to a problem in learning another.”

After amusing myself by contributing a few solutions (Flood filling, Mean angle, and Sum digits of an integer being some of mine), I realized that the data hidden in the site provided an opportunity to quantify a claim that I have often made over the years—that Mathematica code tends to be shorter than equivalent code in other languages. This is due to both its high-level nature and built-in computational knowledge.

Here is what I found.

Large tasks - Line count ratio

Mathematica code is typically less than a third of the length of the equivalent code written in other languages, and often far shorter than that.

Before the comments section fills up with objections, I should state that there are many sources of bias in this approach, not least of which are bias in the creation of tasks, bias in the kinds of persons who provide solutions, and selectivity in which tasks have been solved. But if we worry about such problems too much, we never do anything!

It should also be said that short code is not the same thing as good code. But short good code is better than long good code, and short bad code is a lot better than long bad code!

Naturally, I used Mathematica to gather the data needed to analyze the code length. This is mostly an exercise in web scraping, so the first step is to read the “Terms of Use” of the website, which seem to allow this data aggregation. Second, I want to be responsible in the server load I create (and for this reason, I am not providing a download of the code—if you want it, you will have to contact me, or copy it by hand from this blog post). So I started by creating a version of Import that will store data for future use rather than request it from the server again. I could do this by copying the web pages to local storage, but their whole website is small enough to hold in memory. Preventing repeat web accesses is also very important for performance of this code.

Creating a version of Import that will store data for future use
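
A minimal sketch of such a memoizing importer might look like the following (the name importOnce comes from the DumpSave call further down; its exact definition here is an assumption, not the code shown above):

(* Cache each Import result as a down value, so a repeated request never hits the server again *)
importOnce[url_String, elements___] := importOnce[url, elements] = Import[url, elements]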

Now, I start importing some key web pages. The first lists all the languages supported by the project. I use the special “Hyperlinks” option to HTML Import, and then string match away links of the wrong type.

Importing some key web pages
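
In outline, that step might look something like this (a sketch only; the category URL and the string pattern used to filter the links are assumptions):

(* Collect every hyperlink on the languages category page, then keep only language category links *)
languageLinks = importOnce["http://rosettacode.org/wiki/Category:Programming_Languages", "Hyperlinks"];
languages = Select[languageLinks, StringMatchQ[#, "*/wiki/Category:*"] &];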

There is a special page for each language that lists completed tasks, so I do something similar to that…

Continuing to import web pages
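
A plausible sketch of that per-language lookup (the URL construction and the link filtering are illustrative assumptions):

(* Tasks completed for one language: links on its category page that are ordinary wiki pages *)
getCompletedPageList[language_String] := Select[
  importOnce["http://rosettacode.org/wiki/Category:" <> language, "Hyperlinks"],
  StringMatchQ[#, "*/wiki/*"] && StringFreeQ[#, "Category:" | "Special:" | "Talk:"] &]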

…and extend the command to take a list of languages and return tasks that have been completed for all of them.

Extending the command to take a list of languages and return tasks that have been completed
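
That extension is presumably just an intersection of the per-language lists; a plausible one-liner:

(* Tasks that have been completed in every one of a list of languages *)
getCompletedPageList[languages_List] := Intersection @@ (getCompletedPageList /@ languages)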

The next step isn’t necessary, but you have to process all that slow internet access at some point, and I prefer to get it out of the way at the start by systematically calling every import that I will need to do. I will also dump the data to disk in a compact binary .mx file, so that I can come back to it without having to re-scrape the website. This is a good point to break for some lunch while it works!

Systematically calling every import needed
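
A pre-fetch along these lines (the list of languages here is purely illustrative) would then be followed by the DumpSave below:

(* Touch every task page once for each language of interest, so later analysis runs from the cache *)
Scan[
  Function[lang, Scan[importOnce, getCompletedPageList[lang]]],
  {"Mathematica", "C", "C++", "Python", "Java", "MATLAB"}];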

DumpSave["RosettaCodeData.mx", importOnce];

Now that all the data gathering is done, we can start analyzing it. First, how many tasks have been completed by the Mathematica community?

Length[getCompletedPageList["Mathematica"]]

446

That’s a good number; the most complete on the site is Tcl with 694 tasks. More importantly, there are plenty of tasks that have been completed in both Mathematica and other key languages. This is vital for the like-for-like comparison that I want to do. For example, there are 440 tasks that have a solution in both Mathematica and C.

Length[getCompletedPageList[{"Mathematica", "C"}]]

440

The thorny part of this problem is extracting the right information from crowdsourced, handwritten wiki pages. Correctly written pages wrap the code in a <lang> tag, with a rather inconsistent argument for the language type. But some of them are not correctly tagged, and for those I have to look at the position of code blocks relative to the appearance of the language names in section headings. All that results in this ugly bit of XML pattern matching. I’m sure I could do it better, but it seems to work.

XML pattern matching
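
The original pattern-matching code is shown only as an image; as a much simpler, hypothetical stand-in, one could work on the raw wiki markup instead. Here the ?action=raw URL form and the string patterns are assumptions, and langTagName is a small helper for the tag-naming quirks discussed in the next paragraph:

(* Simplified stand-in for extractCode: fetch raw wiki markup and pull out the <lang> block(s)
   tagged for the requested language; the original works on the page XML and also handles
   untagged code blocks by their position relative to section headings *)
extractCode[url_String, language_String] := Module[{raw, blocks},
  raw = importOnce[url <> "?action=raw", "Text"];
  blocks = StringCases[raw,
    "<lang " ~~ langTagName[language] ~~ ">" ~~ Shortest[code___] ~~ "</lang>" :> code,
    IgnoreCase -> True];
  StringJoin[Riffle[blocks, "\n"]]]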

The <lang> tag, when it has been used, is usually the language name in lowercase, without spaces. But not always! So I have to map some of the special cases.

Mapping some of the special cases
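
Illustratively (these particular mappings are guesses at common tag names, not the original table), the special cases might be handled by the langTagName helper used above:

(* Special-case tag names first, then fall back to lowercasing and removing spaces *)
langTagName["C++"] = "cpp";
langTagName["C#"] = "csharp";
langTagName["F#"] = "fsharp";
langTagName[lang_String] := ToLowerCase[StringReplace[lang, " " -> ""]]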

For completely un-marked-up code, or where the solution is descriptive or is an image rather than code, this will return an empty string, and we will treat these as if no solution was provided. With the exception of LabVIEW (where all solutions are images), I suspect that this is fairly unbiased by language, but probably biased toward excluding very small problems.

Here is the code in action, extracting my solution for “flood filling”:

example = extractCode["http://rosettacode.org/wiki/Bitmap/Flood_fill", "Mathematica"]

Solution for "flood filling"

The next thing we need are some metrics for code length. The industry norm is “lines of code”:

Finding the metric for "lines of code"
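
A straightforward reading of that metric (a sketch):

(* Count non-blank lines *)
lineCount[code_String] := Length[Select[StringSplit[code, "\n"], StringTrim[#] =!= "" &]]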

But that is as much a measure of code layout as length (at least for languages like Mathematica that can put more than one statement on a line), so non-whitespace character counts might be better.

Finding non-white space characters counts
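
One way to implement that:

(* Count the characters left after stripping all whitespace *)
characterCount[code_String] := StringLength[StringReplace[code, WhitespaceCharacter -> ""]]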

That disadvantages Mathematica a bit, with its long, descriptive command names (a good thing), so I will also implement a “token” count metric—where a token is a word separated by any non-letter characters.

Implementing a "token" count metric
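
Taking that definition literally (a sketch):

(* A token is a maximal run of letters; everything else is treated as a separator *)
tokenCount[code_String] := Length[StringCases[code, LetterCharacter ..]]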

Here is that piece of code measured by each of the metrics.

Through[{characterCount, lineCount, tokenCount}[example]]

{330, 9, 45}

The line count doesn’t match what you see above because it is counting lines in the original website, and the narrow page design of the Wolfram Blog is causing additional line wrapping.

Now to generate comparison data for two languages, we just extract the code for each and measure it and repeat this for every task the two languages have in common.

Extracting the code for each language and measuring it
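
A plausible shape for that comparison function (a sketch; here tasks where either extraction comes back empty are dropped, matching the "no solution provided" convention above):

(* For every task the two languages share, measure both solutions with the given metric *)
compareLanguages[{langA_String, langB_String}, metric_] := Module[{pairs},
  pairs = Map[
    Function[task, {extractCode[task, langA], extractCode[task, langB]}],
    getCompletedPageList[{langA, langB}]];
  Map[metric, Select[pairs, FreeQ[#, ""] &], {2}]]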

If we look at the first three tasks that Mathematica and C have in common, we see that the Mathematica solution has fewer characters in each case.

Take[compareLanguages[{"Mathematica", "C"}, characterCount], 3]

{{588, 811}, {572, 3749}, {563, 2187}}

Here is all the Mathematica versus C data.

Mathematica versus C data

There is a lot of noise, but one thing is clear—nearly every Mathematica solution is shorter than the C solution. Some of the outliers are caused by multiple solutions being given for the same language, which my code will just add together.

The best way to deal with such outliers is to do all our smoothing and averaging using Median.

This shows an interesting trend. As the tasks get longer in C, they get longer in Mathematica, but not in a linear way. It looks like the formula for estimating Mathematica code length is 5.5√c, where c is the number of characters in the C solution.
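
That kind of power-law estimate can be checked directly with a fit; an illustrative check (not the original analysis) which, given the statement above, should come out near a -> 5.5:

(* Fit the Mathematica length as a multiple of the square root of the C length *)
mmaVsC = compareLanguages[{"Mathematica", "C"}, characterCount];
FindFit[Reverse /@ mmaVsC, a Sqrt[c], {a}, c]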

Comparing Mathematica to C

Mathematica versus C

You see similar behavior compared to other languages.

Comparing Mathematica to C++, Python, Java, and MATLAB

Mathematica versus C++, Python, Java, and MATLAB

This is perhaps not surprising, since some tasks are extremely simple. There is little difference between one language and another for assigning a variable, or accessing an array. But there is more opportunity to benefit from Mathematica’s high-level abstractions in larger tasks like “Implement the game Minesweeper.” This trend is unlikely to continue indefinitely, though; for very large projects, code lengths should start to scale more linearly, at the ratio reached for the typical size of the individual code modules within the project.

There are 474 languages listed on the website. That is too many for this kind of analysis, and quite a lot of them have too few solutions to analyze. I am going to look at a list of popular languages, plus some computation-oriented languages. My somewhat arbitrary choices are:

Choosing which languages to analyze

To make a nice table, I need to reduce the data down to a single number. I have two approaches. One is to reduce all comparisons to a ratio (length of code in language A) / (length of code in language B) and find the median of these values over all tasks. The other approach is to argue that code length only matters for longer problems, and to do the same, but only for the top 50% of tasks by average code length.

Reducing the data down to a single number
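
In sketch form, assuming each data point is a {language A, language B} pair of lengths as produced above:

(* Median of the A/B length ratios over all common tasks *)
medianRatio[data_] := Median[N[Divide @@@ data]]

(* The same, restricted to the larger half of the tasks, ranked by the pair's average length *)
medianRatioLargeTasks[data_] :=
  medianRatio[Take[SortBy[data, Mean], -Ceiling[Length[data]/2]]]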

And finally, here are the tables looking at all the permutations of code-length metric and averaging method.

In all cases, the number represents how many times longer the code in the language at the top of the chart is compared to the language on the left of the chart. That is, big numbers mean the language on the left is better!

All tasks - Character count ratio

All tasks - Line count ratio

All tasks - Token count ratio

Large tasks - Character count ratio

Large tasks - Line count ratio

Large tasks - Token count ratio

Despite the many possible issues with the data, it is an independent source (apart from the handful of solutions that I provided) with code that was not contrived to be short above all other considerations (as happens in code golf comparisons). It is perhaps as close to a fair comparison as we are likely to get. If you want to contribute a program to Rosetta Code, take a look at the unsolved tasks in Mathematica, or improve one of the existing ones.

While the “Large tasks – Line count ratio” gives the most impressive result for Mathematica, I think that the “Large tasks – Character count ratio” is really the fairest comparison. But however you slice it, Mathematica is producing shorter code, on average, than these other languages: five to ten times shorter than the equivalent in C or C++, and that should mean shorter development time, lower code complexity, and easier maintenance.
