Back in 2012, Jon McLoone wrote a program that analyzed coding examples from over 500 programming languages collected on the wiki site Rosetta Code. He compared the programming language of *Mathematica* (now officially named the Wolfram Language) to 14 of the most popular and relevant languages, and found that most programs can be written in the Wolfram Language with 1/2 to 1/10 as much code—even as tasks become larger and more complex.

We were curious to see how the Wolfram Language continues to stack up, since a lot has happened in the last two years. So we updated and re-ran Jon’s code, and, much to our excitement (though we really weren’t all that surprised), the Wolfram Language remains largely superior by all accounts!

Keep in mind that the programming tasks at Rosetta Code are the typical kinds of exercises that you *can* write in conventional programming languages: editing text, implementing quicksort, or solving the Towers of Hanoi. You wouldn’t even *think* of dashing off a program in C to do handwriting recognition, yet that’s a one-liner in the Wolfram Language. And since the Wolfram Language’s ultra-high-level constructs are designed to match the way people think about solving problems, writing programs in it is usually easier than in other languages. Even though the Rosetta Code tasks are relatively low-level applications, the Wolfram Language still wins handily on code length compared to every other language.

Here’s the same graph as in Jon’s 2012 post comparing the Wolfram Language to C. Each point gives the character counts of the same task programmed in the Wolfram Language and C. Notice the Wolfram Language still remains shorter for almost every task, staying mostly underneath the dashed one-to-one line:

The same holds true for Python:

Coding languages are typically compared by character count or line count, but these measures are not reliable for the Wolfram Language. Lines are fluid and arbitrary in the Wolfram Language, and it has long, descriptive function names. On the plus side, this makes the language very straightforward and easy to understand—but it can also skew the data when trying to quantify coding efficiency by character or line count. Instead, we can compare “tokens”: maximal runs of letters and digits uninterrupted by whitespace or punctuation. This lets us measure length in “units of syntax,” which, while not perfect, gives a clearer picture of the number of distinct elements required to build a function or program.
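To make the token-counting idea concrete, here is a rough sketch in Python (the actual analysis code is Mathematica and lives in the downloadable notebook; this approximation, including the sample snippets, is purely illustrative):

```python
import re

def count_tokens(source):
    """Count 'tokens': maximal runs of letters and digits,
    broken by whitespace, punctuation, or operators."""
    return len(re.findall(r"[A-Za-z0-9]+", source))

# Two snippets of similar character length can differ in token count,
# because long descriptive names still count as a single token each:
print(count_tokens("Total[Range[10]^2]"))           # 4 tokens
print(count_tokens("sum(i**2 for i in range(10))"))  # 8 tokens
```

Under this measure, `Total` costs one token despite being five characters, which is exactly why tokens are fairer to a language with verbose function names.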

And so, using tokens as our metric to compare the Wolfram Language and Python, we see a slightly different spread, but the points still fall mostly below the one-to-one line, implying that Wolfram Language programs remain comparatively shorter.

Using a `MovingMedian` can help clean up some of the ambient noise around these results. Below, the Wolfram Language appears, on average, to grow in token count at a slower rate than Python. Using `FindFit`, we can estimate that a typical Python program requiring *x* tokens can be written in the Wolfram Language with roughly *x*/3.48 tokens, meaning a Python program that requires 1,000 tokens would need only about 287 tokens in the Wolfram Language.
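The smoothing-and-fitting step can be approximated outside Mathematica. Here is a minimal Python sketch of a moving median followed by a least-squares fit of the proportionality coefficient; the data points are invented for illustration and are not the Rosetta Code measurements:

```python
from statistics import median

def moving_median(data, window=5):
    """Median over a sliding window, to damp per-task noise
    (a rough stand-in for Mathematica's MovingMedian)."""
    return [median(data[i:i + window])
            for i in range(len(data) - window + 1)]

def fit_ratio(x, y):
    """Least-squares coefficient c for the model y = x/c.
    For a line y = m*x through the origin, m = sum(x*y)/sum(x*x),
    so c = 1/m."""
    m = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    return 1 / m

# Invented token counts, chosen so WL is roughly Python / 3.5:
python_tokens = [100, 200, 400, 800]
wl_tokens = [30, 55, 115, 230]
print(round(fit_ratio(python_tokens, wl_tokens), 2))  # 3.48
```

A power-law model would also be a reasonable choice here; the single-coefficient linear model is just the simplest one that matches a fixed compression ratio.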

Similarly, in the four comparisons below, the number of tokens naturally increases for both languages as the tasks become larger—but the Wolfram Language grows at a slower pace. (Their respective coefficients: C++: 2.85, C: 2.36, Java: 3.53, MATLAB: 4.16.)

We can also look at the data in a table of ratios, comparing the languages across the top to the languages down the left side. Numbers greater than 1 mean the language on top requires more code.
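A table like that can be built directly from per-language median code sizes. A hypothetical Python sketch (the language names and median counts below are invented for illustration, not taken from the actual data):

```python
def ratio_table(counts):
    """For each (row, column) pair of languages, the ratio of the
    column language's median code size to the row language's.
    Values > 1 mean the column language needs more code."""
    langs = list(counts)
    return {row: {col: round(counts[col] / counts[row], 2)
                  for col in langs}
            for row in langs}

# Invented median token counts, for illustration only
medians = {"Wolfram": 100, "Python": 348, "C": 236}
table = ratio_table(medians)
print(table["Wolfram"]["Python"])  # 3.48
print(table["Python"]["Wolfram"])  # 0.29
```

The diagonal of such a table is always 1, and each below-diagonal entry is the reciprocal of its mirror above the diagonal.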

The Wolfram Language does even better compared to every other language when looking specifically at large tasks.

To see more data, or to experiment with the code yourself, download the notebook at the end of this post. And to get a more in-depth look at the process we used to perform this analysis, give Jon’s blog post a read!

Download this post as a Computable Document Format (CDF) file.

## 18 Comments

seems you swapped the first two plots.

MMA has many elements that take the legwork out of defining and iterating functions or objects. MMA also doesn’t get enough credit for its documentation and the ability to program symbolically which is where one can produce impossibly small and logical code.

The flip side is that shorter code isn’t necessarily more readable, maintainable, or fast. I often find that I can generate prototypes or working functions for smaller datasets, but trying to scale up MMA code to real-world problems either runs into speed issues or is orders of magnitude slower to debug. Reasons for this include: opaque internal compiling logic; forced unpacking by many common functions that are advertised as Wolfram Language strengths (http://mathematica.stackexchange.com/questions/5258/are-there-guidelines-for-avoiding-the-unpacking-of-a-packed-array); many time-saving functions that are not compilable; many functions that are not optimized for parallel processing; no access to pointer-like functions for indexing, which, to coin the word of the month, makes for a “hell stew” of going forwards and backwards with higher-level functions to pull out elements of things like lookup tables; and quite efficient functions for generic sorting and nearest elements that become beyond slow when they try to take advantage of MMA’s amazing pure-function capabilities for tailored search or sorting.

On the IDE side, Workbench is ancient and despairingly out of date compared with modern IDEs. MMA also has no access to modern profiling, and the Notebook interface, while hugely efficient for testing new code elements, is very cluttered for larger tasks and doesn’t handle large datasets that well.

As I pointed out in the original blog item, short code is certainly not the same thing as good code, but short good code is better than long good code and short bad code is much better than long bad code.

Some of your other points mix together the language with the current implementation of the language. Every version of Mathematica includes invisible improvements which don’t change the definition of the language so your code runs better without you having to change it. Certainly the next release will include a number of such improvements. I don’t know whether array unpacking or Compile coverage are among them, but rendering of large expressions in the front-end is.

But we are also continuing to find ways to improve the language. At least a couple of your comments are addressed directly in Mathematica 10: there will be a new construct allowing indexing in a much more readable and maintainable way. And there will be a collection of “*By” functions to complement SortBy (which is usually much faster than using Sort with a custom comparator, though Sort is more flexible for comparators that are asymmetrical or cannot be mapped to a naturally ordered set).

I suppose it comes down to the fact that statistical comparison of code isn’t necessarily as revealing as specific examples. Lots of code is reused, battle tested, and has public bug lists. If you have the time, you can bolt it on and use it without necessarily increasing the complexity of your codebase. Many common code examples aren’t reinvented but reused. MMA has many time-saving functions built in, readily accessible, and very well documented, but the codebase is still there; the library is just hidden. In either case you still need to test the outcomes, as even MMA has bugs in longstanding functions. Getting something up and running is usually the exciting and quick part; refining and testing the code is the long and tortuous part.

As you stated, Wolfram Research does a great job of demonstrating the applicability of functions that one might not normally ever consider, but that are practical, extensible, and usable thanks to the symbolic nature of the environment. I find that new MMA releases don’t generally introduce order-of-magnitude speed improvements (compared with, say, C) to existing code, but rather open up areas of analysis I didn’t previously know were feasible or manageable within a project’s timeframe. I would hope that future MMA releases focus heavily on optimization and on removing the hidden idiosyncrasies that have built up over the decades; every language has them.

Your blog post seems to imply that larger projects might be easier in the Wolfram Language; my comments were merely highlighting some of the problems and difficulties in implementing MMA solutions with the current toolset.

Love the product and have been using it for 22 years. Here’s hoping that MMA 10 is out soon.

As you say, I have actually been looking at the examples on Rosetta Code, and I have to say that in almost every entry I have looked at, the Python example seems to me of much higher quality. They give multiple implementations, using a variety of different approaches. Any procedure that concludes Python code is longer than Java code for the examples I have looked at is simply a poor algorithm for scraping this data.

Okay, but how do these compare in computational time? While it may be easier on the fingers, I would like to see the same analysis taking into account the time it takes to compute, especially for large tasks.

There is no link to “the notebook at the end of this post”.

Hello, thank you for your comment! This issue has been fixed, and you can view the notebook here:

http://blog.wolfram.com/data/uploads/2014/06/wl-measures-up.cdf

Isn’t this ultimately just a comparison of how high-level the language is?

Any such measure would have to include the size of the libraries involved, and maybe even that of runtime interpreter (if any), in order to have a fair comparison. Mathematica expressions are beautifully compact, but they rely on huge bodies of code to work so well.

I agree. I’m really disappointed with this re-branding of Mathematica code as ‘Wolfram Language’ – it makes it seem as though the latter can exist without the former (or some web-based variant thereof), where clearly it can’t. Mathematica is, what, over a gig in size, so big whoop when it can implement some routines in a single line – with all that back-end beef, you’d expect it to!

The hype around the Wolfram Language is a bit of a disgrace, too: to say that the language lets you do all these amazing things is obviously wrong. It’s not as though typing D[...] (or whatever) performs the mathematics; the back end (i.e., Mathematica) does. The language is merely an interface, not a fully defined language with competing implementations like C, etc.

Plot legends would have been useful here… What are the units? Why are the ratios of the X and Y axes so different?

I couldn’t find the notebook you mentioned in the article.

Hello, thanks for your comment! This issue has been fixed, and you can view the notebook here:

http://blog.wolfram.com/data/uploads/2014/06/wl-measures-up.cdf

This approach, which doesn’t parse out comments, usage examples, or multiple implementations, seems to just give better results for languages that are high level and less used by the general programming community, since their Rosetta Code entries are often short and cryptic. For example, almost all the Python examples I looked up by hand had very short implementations, which I actually found more readable than the MMA ones, even though I am better at MMA. Look up quicksort, for example: the MMA implementation is needlessly cryptic and could use a lot more exposition and unpacking of the logic, while the Python entry gives a range of interesting implementations and many examples of use. But the Python entries often included a lot of examples, other implementations, and meaningful comments. Why engage in this kind of silly data mining? It just feels really dishonest to me, and I love Mathematica.

I really wish Wolfram Research would give up on trying to rename Mathematica “The Wolfram Language”.

No one calls it that. No one (outside the company) has any reason to call it that. Insisting that the language has been “renamed” is silly. I say this as a long time fan of Mathematica.

Hi Bill, I also had some doubts, but after a while I thought it should be okay. When I’m asked what programming language I really like, I say Mathematica. The typical first reaction is something like, “Uh, you do something with math?” And that really doesn’t cover what the whole of Mathematica is. It can do much more. Renaming it is a way to re-explain what it’s all about, and when needed you can also say it’s great with math. I think this is what Wolfram is aiming at. The name Wolfram Language is perhaps debatable, but that’s another discussion. Perhaps just a W would have sufficed? Just my 2c.

I call it the Wolfram Language (WL), and that’s the name I use throughout my tutorial on WL (located in the Wolfram Library Archives). In my view, the mistake was to mis-identify the language initially with Mathematica. However, I believe SW did that in keeping with his stated objective of “using Mathematica as a trojan horse in which to smuggle in a new programming language.” I think this mistake will eventually be corrected as more non-scientists and non-engineers use WL for their own purposes.