
Why Wolfram Tech Isn’t Open Source—A Dozen Reasons

Over the years, I have been asked many times about my opinions on free and open-source software. Sometimes the questions are driven by comparison to some promising or newly fashionable open-source project, sometimes by comparison to a stagnating open-source project and sometimes by the belief that Wolfram technology would be better if it were open source.

At the risk of provoking the fundamentalist end of the open-source community, I thought I would share some of my views in this blog. There are counterexamples to some of what I have to say, not every point applies to every project, and I am glossing over the different kinds of “free” and “open,” but I hope I have crystallized some key points.

A supplemental podcast is also available on SoundCloud.

Much of this blog could be summed up with two answers: (1) free, open-source software can be very good, but it isn’t good at doing what we are trying to do; and (2) a large part of the reason is that open source distributes design over small, self-assembling groups who individually tackle parts of an overall task, while large-scale, unified design needs centralized control and sustained effort.

I came up with 12 reasons why I think that it would not have been possible to create the Wolfram technology stack using a free and open-source model. I would be interested to hear your views in the comments section below the blog.


1. A coherent vision requires centralized design

FOSS (free and open-source software) development can work well when design problems can be distributed to independent teams who self-organize around separate aspects of a bigger challenge. If computation were just about building a big collection of algorithms, then this might be a successful approach.

But Wolfram’s vision for computation is much more profound—to unify and automate computation across computational fields, application areas, user types, interfaces and deployments. Achieving this requires centralized design of all aspects of the technology—how computations fit together, as well as how they work. It requires knowing how computations can leverage other computations and, perhaps most importantly, having a long-term vision for the future capabilities that they will make possible in subsequent releases.

You can get a glimpse of how much is involved by sampling the 300+ hours of livestreamed Wolfram design review meetings.

Practical benefits of this include:

  • The very concept of unified computation has been largely led by Wolfram.
  • High backward and forward compatibility as computation extends to new domains.
  • Consistency across different kinds of computation (one syntax, consistent documentation, common data types that work across many functions, etc.).

2. High-level languages need more design than low-level languages

The core team for open-source language design is usually very small and therefore tends to focus on a minimal set of low-level language constructs to support the language’s key concepts. Higher-level concepts are then delegated to the competing developers of libraries, who design independently of each other or the core language team.

Wolfram’s vision of a computational language is the opposite of this approach. We believe in a language that focuses on delivering the full set of standardized high-level constructs that allow you to express ideas to the computer more quickly, with less code, in a literate, human-readable way. Only centralized design and centralized control can achieve this in a coherent and consistent way.

Practical benefits of this include:

  • One language to learn for all coding domains (computation, data science, interface building, system integration, reporting, process control, etc.)—enabling the integrated workflows toward which these domains are converging.
  • Code that is on average seven times shorter than Python, six times shorter than Java and three times shorter than R (see the sketch after this list).
  • Code that is readable by both humans and machines.
  • Minimal dependencies (no collections of competing libraries from different sources with independent and shifting compatibility).
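
As a rough illustration of that brevity (a minimal sketch using built-in example data, not the benchmark behind the averages above), finding the most common words in a novel is a single line of Wolfram Language:

  (* the ten most common words in a sample text *)
  TakeLargest[WordCounts[ExampleData[{"Text", "AliceInWonderland"}]], 10]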

3. You need multidisciplinary teams to unify disparate fields

Self-assembling development teams tend to rally around a single topic and so tend to come from the same community. As a result, one sees many open-source tools tackle only a single computational domain. You see statistics packages, machine learning libraries, image processing libraries—and the only open-source attempts to unify domains are limited to pulling together collections of these single-domain libraries and adding a veneer of connectivity. Unifying different fields takes more than this.

Because Wolfram is large and diverse enough to bring together people from many different fields, it can take on the centralized design challenge of finding the common tasks, workflows and computations of those different fields. Centralized decision making can target new domains and professionally recruit the necessary domain experts, rather than relying on them to identify the opportunity for themselves and volunteer their time to a project that has not yet touched their field.

Practical benefits of this include:

  • A common language across domains including statistics, optimization, graph theory, machine learning, time series, geometry, modeling and many more.
  • A common language for engineers, data scientists, physicists, financial engineers and many more.
  • Tasks that cross different data and computational domains are no harder than domain-specific tasks (see the sketch after this list).
  • Engagement with emergent fields such as blockchain.
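
As a minimal sketch of such a cross-domain task, the few lines below combine built-in text processing, machine learning (the built-in "Sentiment" classifier) and visualization:

  (* classify the sentiment of the first 50 sentences of a novel and chart the results *)
  sentences = TextSentences[ExampleData[{"Text", "AliceInWonderland"}]];
  BarChart[Counts[Classify["Sentiment", Take[sentences, 50]]], ChartLabels -> Automatic]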

4. Hard cases and boring stuff need to get done too

Much of the perceived success of open-source development comes from its access to “volunteer developers.” But volunteers tend to be drawn to the fun parts of projects—building new features that they personally want or that they perceive others need. While this often starts off well and can quickly generate proof-of-concept tools, good software has a long tail of less glamorous work that also needs to be done. This includes testing, debugging, writing documentation (both developer and user), relentlessly refining user interfaces and workflows, porting to a multiplicity of platforms and optimizing across them. Even when the work is done, there is a long-term liability in fixing and optimizing code that breaks as dependencies such as the operating system change over many years.

While it would not be impossible for a FOSS project to do these things well, the commercially funded approach of having paid employees directed to deliver good end-user experience does, over the long term, a consistently better job on this “final mile” of usability than relying on goodwill.

Some practical benefits of this include:

  • Tens of thousands of pages of consistent, highly organized documentation with over 100,000 examples.
  • The most unified notebook interface in the world, bringing exploration, code development, presentation and deployment workflows together in a consistent way.
  • Write-once deployment over many platforms both locally and in the cloud.

5. Crowd-sourced decisions can be bad for you

While bad leadership is always bad, good leadership is typically better than compromises made in committees.

Your choice of computational tool is a serious investment. You will spend a lot of time learning the tool, and much of your future work will be built on top of it, quite apart from any license fees you pay. In practice, it is likely to be a long-term decision, so it is important that you have confidence in the technology’s future.

Because open-source projects are directed by their contributors, there is a risk of hijacking by interest groups whose view of the future is not aligned with yours. The theoretical safety net of access to source code can compound the problem by producing multiple forks of projects, so that it becomes harder to share your work as communities are divided between competing versions.

While the commercial model does not guarantee protection from this issue, it does guarantee a single authoritative version of the technology, and it motivates management to make decisions that benefit the majority of its users rather than the needs of specialist interests.

In practice, if you look at Wolfram Research’s history, you will see:

  • Ongoing development effort across all aspects of the Wolfram technology stack.
  • Consistency of design and compatibility of code and documents over 30 years.
  • Consistency of prices and commercial policy over 30 years.

6. Our developers work for you, not just themselves

Many open-source tools are available as a side effect of their developers’ needs or interests. Tools are often created to solve a developer’s problem and are then made available to others, or researchers apply for grants to explore their own area of research and code is made available as part of academic publication. Figuring out how other people want to use tools and creating workflows that are broadly useful is one of those long-tail development problems that open source typically leaves to the user to solve.

Commercial funding models reverse this motivation. Unless we consider the widest range of workflows, spend time supporting them and ensure that algorithms solve the widest range of inputs, not just the original motivating ones, people like you will not pay for the software. Only by listening to both the developers’ expert input and the commercial teams’ understanding of their customers’ needs and feedback is it possible to design and implement tools that are useful to the widest range of users and create a product that is most likely to sell well. We don’t always get it right, but we are always trying to make the tool that we think will benefit the most people, and is therefore the most likely to help you.


7. Unified computation requires unified design

Complete integration of computation over a broad set of algorithms requires significantly more design work than simply implementing a collection of independent algorithms.

Design coherence is important for enabling different computations to work together without making the end user responsible for converting data types, mapping functional interfaces or rethinking concepts by writing potentially complex bridging code. Only design that transcends any specific computational field, and the details of computational mechanics, makes the power of those computations accessible to new applications.

Typical unmanaged, single-domain open-source contributors will not easily achieve this kind of unification, however knowledgeable they are within their own domains.

Practical benefits of this include:

  • Avoids costs of switching between systems and specifications (having to write excessive glue code to join different libraries with different designs).
  • Immediate access to unanticipated functions without stopping to hunt for libraries.
  • Wolfram developers can get the same benefits of unification as they create more sophisticated implementations of new functionality by building on existing capabilities.
  • The Wolfram Language’s task-oriented design allows your code to benefit from new algorithms without having to rewrite it (see the sketch after this list).
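
A minimal sketch of what that task orientation means in practice: the same NMinimize call handles unconstrained, constrained and mixed-integer problems, with algorithm selection happening behind the scenes (and improving between releases without any change to this code):

  NMinimize[(x - 2)^2 + (y + 1)^2, {x, y}]                         (* unconstrained *)
  NMinimize[{(x - 2)^2 + (y + 1)^2, x^2 + y^2 <= 1}, {x, y}]       (* constrained *)
  NMinimize[{(x - 2)^2 + (y + 1)^2, Element[x, Integers]}, {x, y}] (* mixed integer *)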

8. Unified representation requires unified design

Computation isn’t the only thing that Wolfram is trying to unify. To create productive tools, it is necessary to unify the representation of disparate elements involved in a computational workflow: many types of rich data, documents, interactivity, visualizations, programs, deployments and more. A truly unified computational representation enables abstraction above each of these individual elements, enabling new levels of conceptualization of solutions as well as implementing more traditional approaches.

The open-source model of bringing separately conceived, independently implemented projects together is the antithesis of this approach—either developers design representations around a specific application, so they are not rich enough to be applied elsewhere, or, if they are widely applicable, they tackle only a narrow slice of the workflow.

Often the consequence is that data interchange is done in the lowest common format, such as numerical or textual arrays—often the native types of the underlying language. Associated knowledge is discarded; for example, that the data represents a graph, or that the values are in specific units, or that text labels represent geographic locations, etc. The management of that discarded knowledge, the coercion between types and the preparation for computation must be repeatedly managed by the user each time they apply a different kind of computation or bring a new open-source tool into their toolset.

Practical examples of this include:

  • The Wolfram Language can use the same operations to create or transform many types of data, documents, interfaces and even itself.
  • Wolfram machine learning tools automatically accept text, sounds, images and numeric and categorical data.
  • As well as supporting geometry calculations, geometric representations in the Wolfram Language can be used to constrain optimizations, define regions of integration, control the envelope of visualizations, set the boundary values for PDE solvers, create Unity game objects and generate 3D prints (see the sketch after this list).
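
For instance (a minimal sketch), a single geometric region can be passed unchanged to measurement, integration and optimization functions:

  region = Disk[{0, 0}, 1];
  RegionMeasure[region]                                 (* area of the disk *)
  NIntegrate[Exp[-x^2 - y^2], Element[{x, y}, region]]  (* integrate over it *)
  NMinimize[{x + 2 y, Element[{x, y}, region]}, {x, y}] (* optimize within it *)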

9. Open source doesn’t bring major tech innovation to market

FOSS development tends to react to immediate user needs—specific functionality, existing workflows or emulation of existing closed-source software. Major innovations require anticipating needs that users do not know they have and addressing them with solutions that are not constrained by an individual’s experience.

As well as having a vision beyond incremental improvements and narrowly focused goals, innovation requires persistence to repeatedly invent, refine and fail until successful new ideas emerge and are developed to mass usefulness. Open source does not generally support this persistence over enough different contributors to achieve big, market-ready innovation. This is why most large open-source projects are commercial projects, started as commercial projects or follow and sometimes replicate successful commercial projects.

While the commercial model certainly does not guarantee innovation, steady revenue streams are required to fund the long-term effort needed to bring innovation all the way to product worthiness. Wolfram has produced key innovations over 30 years, not least having led the concept of computation as a single unified field.

Open source often does create ecosystems that encourage many small-scale innovations, but while bolder innovations do widely exist at the early experimental stages, they often fail to be refined to the point of usefulness in large-scale adoption. And open-source projects have been very innovative at finding new business models to replace the traditional, paid-product model.

Other examples of Wolfram innovation include:

  • Wolfram invented the computational notebook, which has been partially mirrored by Jupyter and others.
  • Wolfram invented the concept of automated creation of interactive components in notebooks with its Manipulate function, also now emulated by others (see the sketch after this list).
  • Wolfram develops automatic algorithm selection for all task-oriented superfunctions (Predict, Classify, NDSolve, Integrate, NMinimize, etc.).
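
A minimal sketch of the Manipulate idea: one wrapper turns a static computation into an interactive interface, with the slider generated automatically from the parameter specification:

  Manipulate[Plot[Sin[n x], {x, 0, 2 Pi}], {n, 1, 10}]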

10. Paid software offers an open quid pro quo

Free software isn’t without cost. It may not cost you cash upfront, but there are other ways it monetizes you, or ways it may cost you more later. The alternative business models that accompany open source, and their deferred and hidden costs, may be suitable for you, but it is important to understand them and their effects. If you don’t think about the costs, or believe there is no cost, you will likely be caught out later.

While you may not ideally want to pay in cash, I believe that for computation software, it is the most transparent quid pro quo.

“Open source” is often simply a business model that broadly falls into four groups:

Freemium: The freemium model of free core technology with additional paid features (extra libraries and toolboxes, CPU time, deployment, etc.) often relies on your failure to predict your longer-term needs. Because of the investment of your time in the free component, you are “in too deep” when you need to start paying. The problem with this model is that it creates a motivation for the developer toward designs that appear useful but withhold important components, particularly features that matter in later development or in production, such as security features.

Commercial traps: The commercial trap sets out to make you believe that you are getting something for free when you are not. In a sense, the Freemium model sometimes does this by not being upfront about the parts that you will end up needing and having to pay for. But there are other, more direct traps, such as free software that uses patented technology. You get the software for free, but once you are using it they come after you for patent fees. Another common trap is free software that becomes non-free, such as recent moves with Java, or that starts including non-free components that gradually drive a wedge of non-free dependency until the supplier can demand what they want from you.

User exploitation: Various forms of business models center on extracting value from you and your interactions. The most common are serving you ads, harvesting data from you or giving you biased recommendations. The model creates a motivation to design workflows to maximize the hidden benefit, such as ways to get you to see more ads, to reveal more of your data or to sell influence over you. While not necessarily harmful, it is worth trying to understand how you are providing hidden value and whether you find that acceptable.

Free by side effect: Software is created by someone for their own needs, which they have no interest in commercializing or protecting. While this is genuinely free software, the principal motivation of the developer is their own needs, not yours. If your needs are not aligned, this may produce problems in support or development directions. Software developed under research grants has a similar problem. Grants drive developers to prioritize impressing funding bodies over impressing the end users of the software. With most research grants being for fixed periods, they also drive a focus on initial delivery rather than long-term support. In the long run, misaligned interests cost you in the time and effort it takes to adapt the tool to your needs or to work around its developers’ decisions. Of course, if your software is funded by grants or by the work of publicly funded academics and employees, then you are also paying through your taxes—but I guess there is no avoiding that!

In contrast, the long-term commercial model that Wolfram chooses motivates maximizing the usefulness of the development to the end users, who are directly providing the funding, to ensure that they continue to choose to fund development through upgrades or maintenance. The model is very direct and upfront. We try to persuade you to buy the software by making what we think you want, and you pay to use it. The users who make more use of it generally are the ones who pay more. No one likes paying money, but it is clear what the deal is and it aligns our interest with yours.

Now, it is clearly true that many commercial companies producing paid software have behaved very badly and have been the very source of the “vendor lock-in” fear that makes open source appealing. Sometimes that stems from a misalignment of management’s short-term interests with the company’s long-term interests, sometimes it is just because they think it is a good idea. All I can do is point to Wolfram history: in 30 years we have kept prices and licensing models remarkably stable (though every year you get more for your money) and have always avoided undocumented, encrypted and non-exportable data and document formats and other nasty lock-in tricks. We have always tried to be indispensable rather than “locked in.”

In all cases, code is free only when the author doesn’t care, because they are making their money somewhere else. Whatever the commercial and strategic model is, it is important that the interests of those you rely on are aligned with yours.

Some benefits of our choice of model have included:

  • An all-in-one technology stack that has everything you need for a given task.
  • No hidden data gathering and sale or external advertising.
  • Long-term development and support.

11. It takes steady income to sustain long-term R&D

Before investing work into a platform, it is important to know that one is backing the right technology not just for today but into the future. You want your platform to incrementally improve and to keep up with changes in operating systems, hardware and other technologies. This takes sustained and steady effort and that requires sustained and steady funding.

Many open-source projects with their casual contributors and sporadic grant funding cannot predict their capacity for future investment and so tend to focus on short-term projects. Such short bursts of activity are not sufficient to bring large, complex or innovative projects to release quality.

While early enthusiasm for an open-source project often provides sufficient initial effort, sustaining the increased maintenance demand of a growing code base becomes increasingly problematic. As projects grow in size, the effort required to join a project increases. It is important to be able to motivate developers through the low-productivity early months, which, frankly, are not much fun. Salaries are a good motivation. When producing good output is no longer personally rewarding, open-source projects that rely on volunteers tend to stall.

A successful commercial model can provide the sustained and steady funding needed to make sure that the right platform today is still the right platform tomorrow.

You can see the practical benefit of this steady, customer-funded investment across three decades of continuous Wolfram technology releases.

12. Bad design is expensive

Much has been written about how total cost of ownership of major commercial software is often lower than free open-source software, when you take into account productivity, support costs, training costs, etc. While I don’t have the space here to argue that out in full, I will point out that nowhere are those arguments more true than in unified computation. Poor design and poor integration in computation result in an explosion of complexity, which brings with it a heavy price for usability, productivity and sustainability.

Every time a computation chokes on input that is an unacceptable type or out of acceptable range or presented in the wrong conceptualization, that is a problem for you to solve; every time functionality is confusing to use because the design was a bit muddled and the documentation was poor, you spend more of your valuable life staring at the screen. Generally speaking, the users of technical software are more expensive people who are trying to produce more valuable outputs, so wasted time in computation comes at a particularly high cost.

It’s incredibly tough to keep the Wolfram Language easy to use and have functions “just work” as its capabilities continue to grow so rapidly. But Wolfram’s focus on global design (see it in action) together with high effort on the final polish of good documentation and good user interface support has made it easier and more productive than many much smaller systems.

Summary: Not being open source makes the Wolfram Language possible

As I said at the start, the open-source model can work very well in smaller, self-contained subsets of computation where small teams can focus on local design issues. Indeed, the Wolfram technology stack makes use of, and contributes to, a number of excellent open-source libraries for specialized tasks, such as MXNet (neural network training), GMP (high-precision numeric computation) and LAPACK (numeric linear algebra), as well as libraries behind many of the 185 import/export formats automated by the Wolfram Language commands Import and Export. Where it makes sense, we make self-contained projects open source, such as the Wolfram Demonstrations Project, the new Wolfram Function Repository and components such as the Client Library for Python.
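
As a small sketch of that format automation (the file names here are hypothetical), Import and Export infer formats from file extensions, so converting between formats needs no format-specific code:

  data = Import["data.csv"];   (* format inferred from the .csv extension *)
  Export["data.json", data]    (* the same data re-exported as JSON *)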

But our vision is a grand one—unify all of computation into a single coherent language, and for that, the FOSS development model is not well suited.

The central question is, How do you organize such a huge project, and how do you fund it so that you can sustain the effort required to design and implement it coherently? Licenses and prices are details that follow from that. By creating a company that can coordinate the teams tightly, and by generating a steady income from selling tools that customers want and are happy to pay for, we have been able to make significant progress on this challenge, in a way that leaves us ready for the next round of development. I don’t believe it would have been possible using an open-source approach, and I don’t believe the future we have planned would be possible that way either.

This does not rule out exposing more of our code for inspection. However, right now, large amounts of code are visible (though not conveniently or in a publicized way) but few people seem to care. It is hard to know if the resources needed to improve code access, for the few who would make use of it, are worth the cost to everyone else.

Perhaps I have missed some reasons, or you disagree with some of my rationale. Leave a comment below and I will try to respond.

Comments


88 comments

  1. “This does rule out exposing more of our code for inspection. However, right now, large amounts of code are visible (though not conveniently or in a publicized way) but few people seem to care. It is hard to know if the resources needed to improve code access, for the few who would make use of it, are worth the cost to everyone else.”

    – It is certainly true that “spelunking” can be done in some cases, where the vital parts of a function are implemented in top-level code. However, a frequent discomfort I hear about regarding the use of Mathematica is that the documentation on the methods being used (and thus references to them) is usually very scant. I think those people are justified in not being comfortable. Just because they specify Method -> “DifferentialEvolution” in NMinimize, or Method -> “Adams” in NDSolve, to give explicit examples, people are supposed to assume the literature method is being applied to their problem, when it may well be that there were subtle changes that yield different behavior from what was expected. With source that is open to inspection, as well as (or maybe just, at the extreme) good pointers to the literature, people might have more confidence in using these functions.

    I also hear similar complaints about the curated data functions, or even Wolfram|Alpha’s results. Currently, people might use the results of a Wolfram|Alpha query when prototyping, but will switch to more “official” sources in actual applications. People might be more reassured if Wolfram|Alpha said it got the melting point of tin from, e.g., the latest CRC Handbook, or some such source.

    There are a few more points, but they should probably be in another comment. This is just what came to mind when I read that sentence.

    Reply
    • That is certainly a common reaction. I think one has to distinguish between the theoretical benefit of seeing the source and the practical benefit. Lots of people are reassured that they could look if they wanted to, but don’t actually look when they can. The effort required to become familiar with a large code base to gain useful insight is quite high. When we take on new developers, they need time to reach that state before they are safe to make significant contributions. That is not to say that the “marketing” value of that theoretical benefit should be ignored. In practice, when people have serious questions about how code works, connecting them with the right developer often answers those questions faster.

      On the Wolfram|Alpha data, there is a link at the bottom of the page to sources, though we have not solved the question of how to display the specific source of an answer, which might depend on combining several sources to produce a plot or compute a derived value. For example, the “melting point of tin” query lists element data as having the following sources:
      Audi, G., et al. “The NUBASE Evaluation of Nuclear and Decay Properties.” Nuclear Physics A 624 (1997): 1-124.
      Arblaster, J. W. “Densities of Osmium and Iridium: Recalculations Based upon a Review of the Latest Crystallographic Data.” Platinum Metals Review 33, no. 1 (1989): 14-16.
      Barbalace, K. L. “Periodic Table of Elements.” EnvironmentalChemistry.com.
      Cardarelli, F. Materials Handbook: A Concise Desktop Reference. Springer, 2000.
      Coursey, J. S., et al. “Atomic Weights and Isotopic Compositions with Relative Atomic Masses.” NIST—Physics Laboratory.
      Gray, T., N. Mann, and M. Whitby. Periodictable.com.
      Kelly, T. D. and G. R. Matos. “Historical Statistics for Mineral and Material Commodities in the United States, Data Series 140.” USGS Mineral Resources Program.
      Lide, D. R. (Ed.). CRC Handbook of Chemistry and Physics. (87th ed.) CRC Press, 2006.
      National Physical Laboratory. Kaye and Laby Tables of Physical and Chemical Constants.
      Speight, J. Lange’s Handbook of Chemistry. McGraw-Hill, 2004.
      United States Secretary of Commerce. “NIST Standard Reference Database Number 69.” NIST Chemistry WebBook.
      Winter, M. and WebElements Ltd. WebElements.
      The Wikimedia Foundation, Inc. Wikipedia.

      Also “This does rule out” was meant to read “This does not rule out”. (Fixed now)

      Reply
  2. Here are some reasons why our startup switched from Wolfram to python.

    * The deployment options of the Wolfram Language are very limited and expensive, and are actually a lock-in. It is actually paid software with the addition of the Freemium and Commercial traps.
    * You can learn so much from others on sites like Kaggle.
    * It is very hard to impossible to hire someone who knows Wolfram, therefore very high costs in training.

    If Wolfram were free, then the community would also grow, and I could deploy my software wherever I want and have full control. So I really appreciate what Wolfram is trying to achieve and I am still a fan, but the direct and indirect effects of the paid model made it a bad choice for us.

    Reply
    • Taking your points in turn:
      * I don’t think your first point is a fair representation of deployment. The desktop route has always been: you pay for the development environment (e.g. Enterprise Mathematica) and you deploy for free using the Player. That is the opposite of freemium when it comes to deployment. For cloud deployment there are fees, though those are low for low demand in the public cloud and scale with use.
      * Kaggle is a fine site. And while we have not tried to do what they do, we do put effort into community to help people get together and learn. The two main places are http://community.wolfram.com (hosted by us) and https://mathematica.stackexchange.com/ (independent but with many contributions by Wolfram people)
      * This is an important challenge and we are putting more work into helping people learn (https://www.wolfram.com/wolfram-u/). But I think you also have to distinguish between language skills and whole-platform knowledge. Knowing Python is not the same thing as knowing Python and the full set of libraries that one might need to assemble to solve a problem that the Wolfram Language could address directly. If it is Python coding skill and style that matter, then there is no need to choose. You can access the whole of the Wolfram stack entirely from Python (https://pypi.org/project/wolframclient/). Coding skill problem solved! If you are talking about whole-stack knowledge, I think the recruiting gap is much smaller.

      Reply
      • Thanks for your reply.
        Deployment with the free Player is very limited, since you cannot access files or databases; that’s why I thought of freemium. You get the very limited option of free CDF with the standard desktop version, and if you want something more useful you need to pay much more.
        Yes, the Mathematica Stack Exchange community is very helpful. The http://community.wolfram.com site lacked a search and was therefore useless for me in the past. Maybe it’s better now.
        Overall, the underlying problem remains: if it’s paid, the community stays small, and therefore it’s harder to get answers and harder to recruit. Thanks for the info about wolframclient; this is really interesting and I will give it a go.

        Reply
          • Player isn’t limited when the content is created in Enterprise Mathematica. However, I accept that that very answer is encroaching on the same ideas as Freemium, and is sufficiently complex that it doesn’t meet the highest standards of transparency that I would advocate. (And I must take some of the blame, as I was involved in those decisions at the time.) There are some plans to streamline definitions to make them more transparent, the first step of which is to remove the distinction between CDF and NB in version 12.

          Your central point about needing to grow the community is well made, where we have more to do. One imminent small step will be a simplified process and presentation of free cloud accounts, since not enough people are discovering the free options at https://www.open.wolframcloud.com/

          Reply
            • Yes, make the free option more prominent, and streamline the whole licensing. Maybe point students and hobby users there and invest in long-term growth instead of some quick $. Anyway, thanks for this blog post and your answers and the effort for more transparency.

  3. And yet there is one very important reason why it should be open source: scientific reproducibility.

    All of the reasons you give seem to address the fact that the tech isn’t free, but none directly address the openness. If you look at the way Apple releases darwin and webkit code, you will see that open doesn’t necessarily mean free and non-commercial. Same thing goes for the success of redhat or android, which are open source, but corporate-backed.

    That said, if these are the best reasons Wolfram Research can officially come up with for not being open source, I think we can expect some parts of the language becoming open source soon.

    Oh no wait, I forgot, most of the language is already open source: all the functions written in Wolfram Language have inspectable definitions (look it up on stackexchange). We just need Wolfram Research to release the ones that are written in C.

    Reply
    • Yes, I had forgotten to mention the business of reproducibility, thank you for bringing it up. This is related to my statement about being assured that Method -> “Something” is really doing Something’s algorithm on whatever problem you fed it.

      I see now that “this does rule out” in the original version of this entry was a typo. This I am hopeful about. In that case, I am sure a way can be found where yes, people can (and should) still pay for things like support, but that the software itself can still be made transparent to scrutiny.

      Reply
    • Scientific reproducibility has little to do with the Wolfram Language in my opinion. If you integrate some function, it gives a response. You don’t need WL to verify it; that can be done with any language or book…

      Moreover, at least in physics, I think the reproducibility concern lies much more in the experiment than in the data analysis. I have never heard of reproducibility issues with the data processing…

      Reply
    • Yes. My comments were not really directed at the commercial corporate projects where the code is opened as a last step. Sometimes that is because the code is of no real value (the company is monetizing something else in the chain). But there is the question that I referenced briefly in the penultimate paragraph, and in response to Joe Boy (the first commenter), about potential value in exposing code as an output of our commercial model. The notion of academic reproducibility is an extreme illustration of the “theoretical vs practical value” of seeing code as a means of understanding what is happening. One bad consequence of fully integrated computation is that the way that WL computes something is often much more complicated than a simple library version of a capability. Because we can call on lots of sophistication, we often do, making it all the more challenging to understand from the outside. To truly know how it works, there are potentially millions of lines of code, the code of all libraries and operating system operations, the code for the compilers used to turn source into machine code, and then there is how the machine code executes on the CPU. A failure in correctness can occur at any stage, or just because you got unlucky and a cosmic ray flipped a memory bit as it executed (that happens).

      It conflicts with our training in maths and science, but one has to accept that you will never know exactly how your computation is done, even in open source software, unless you dedicate your life to it (such as working for us).

      Personally, I think the thing that matters more to scientific reproducibility is that the documentation should be clear about what is supposed to happen. I was always taught in maths, “show enough working that someone else could verify each step.” To me, WL code isn’t proof; it is the description of how you arrived at your answer. If the documentation is clear enough, someone else should be able to reproduce your answer using non-WL code. (Though, again, that is a theoretical task more than a practical one, because if we are doing our job right, the WL code should be much simpler than the alternate implementation.) Our documentation isn’t always good enough, but I think we don’t do badly and it is generally getting better.

      As I said in my blog, that is not to say that we shouldn’t expose more code, but there are costs. My feeling is to spend that money on development rather than on what I suspect is a marketing benefit more than a practical one. But I have added your vote to my mental tally of what the community wants!

      Reply
      • I would disagree with the statement that it is not reasonably possible to show what decisions are being made behind the scenes to perform your calculation. I envision a new type of Trace function that returns a graph, as ProofObject[“ProofGraph”] does. The “ProofGraphs” can be very complex, and they are analogous to the decision process behind the scenes in selecting which branches of algorithms to test for selection and which was eventually selected.

        Reply
        • Well, certainly you can show them, and all that information is available to you. But perhaps what I should have said is that it is not realistic for the human mind to be able to hold that information in a way that gives you reasonable insight.

          A ProofGraph like view on code flow might be possible. I suspect the main challenge is showing the right amount of detail.

          I have never found a use for the one-argument form of Trace. Anytime that I have applied it to a non-trivial problem, I get so much information back that I need to write a program to analyze the answer.

          Reply
  4. I’m gonna throw this out there from a development-oriented perspective: OSS is nice because when WRI screws up in kernel-level code or makes a suboptimal product, there is currently no recourse for figuring out a) why it’s broken and b) how to fix it. OSS would allow us to fully inspect and potentially even contribute a fix.

    One other thing: Mathematica couldn’t even thrive as OSS because its developer tools are highly lacking. We in the external Mathematica dev community can try to fill this gap as much as is possible, but with all of the inconsistencies in the design of the language, the feature bloat leading to poorly implemented functionality and just the general lack of performance of a lot of key pieces of the software, we often hit dead ends that we’re not able to work around.

    Mathematica is a fun language, but I have to say writing big code in it causes a mental strain that just never appears with python.

    Reply
    • Also the many, many, valid complaints from a prominent member of the Mathematica development community here: https://chat.stackexchange.com/transcript/message/49763393#49763393

      Reply
    • It is really only the software world where there is a distrust of closed technology. I trust my life to cars and planes that are entirely closed to me, our very survival depends on electrical networks that we have no public inspection of, I allow myself to be irradiated by medical devices that I can’t examine… The difference is that there is a small fraction of software users who feel that they have the knowledge to be able to make use of that information. I suppose a small fraction of air passengers know enough about plane design to understand a 747’s architecture, but none of them feel that they could fork Boeing if it ever went bust.

      While there is a theoretical independence from Wolfram if it went bust or took a crazy development direction, having to take over responsibility for maintenance is almost as disastrous. I think the real practical value is as a threat to the owners to behave. I accept the value of that, as there certainly have been companies that have behaved badly and perhaps would not have if customers had had the walk-away power of source. All I can point to is 30 years of good behavior as a predictor of future behavior.

      Reply
      • That was meant to be a response to Bill Mooney below…

        Reply
      • What you say is partly untrue, partly really worrying.

        The part that is false is your car analogy: once I buy the car, I own it and I can go service it wherever I want or even fix it myself. The same is patently false of closed software.

        The part that is worrying is that you are comparing users of a programming language to passengers on a jet plane. That analogy might work for a videogame, maybe even for a computer algebra system, but users of a programming language are engineers and want to understand the tools they work with. And I think this attitude is the main reason why Mathematica is struggling to cross the chasm between computer algebra system and general purpose programming language.

        Reply
        • Well you own the physical object of the car, but you don’t own the brand or designs or the circuit diagrams, or the simulations that explain what will happen when you modify it. You can only really fix it yourself to the extent that you can buy components that the manufacturer has chosen to make available or standardize/document the interface to or to the extent that you can reverse engineer it. The bigger problem with the analogy is that “fix” for a car means replace parts. Hardly anyone can “fix” their car by improving the design of the engine.

          My impression is that this issue only really matters to relatively sophisticated developers who are capable of reading a language’s source code, but that is not a practical option for most people. What fraction of Python users have ever even opened its source code, let alone tried to fix something? I would be surprised if it were anywhere near 0.1%.

          What gives people reassurance is that “someone else” can fix it for them. The question is: is a large number of others like themselves more reassuring than a small number of others that they are paying?

          Reply
          • The utility of an open-source language goes far beyond the simple ability to “fork” a language repository. You’re right that most people would not have the skill to fix problems in the language implementation, but that’s only a small part of why being open is useful. I program primarily in Scala/Spark for my day-job, and the ability to debug/trace into the language implementation (both Java and Scala source) is invaluable. One can inspect the values of variables at any level of the call stack and see what assumptions are built into the code even if one has no intention of actually modifying the language. I can fix issues with my inputs when I understand what is happening “under the hood.” I have learned a lot about how to write software (and how not to write software) by having the ability to understand every nut and bolt in my build. That’s why I get paid. I keep hearing a lot of chest-thumping about how WL is a ground-breaking general-purpose productivity language, and I admit (as a long-time paying user) it is really interesting. But, to be frank, it will rarely be used as such by professionals (in this day and age) as long as it is closed. Much as I like Mathematica for prototyping and data exploration, I would never trust it to build a high-throughput production-ready data-processing stack at my company running at scale. Respectfully, the days of developers embracing black boxes (as they might for a car) are over.

          • I am not sure the “30 years of good behaviour” actually applies. I use Mathematica a fair bit, but I steer my students to Python, to reduce the risk of lock-in — I love Mathematica, but find Wolfram’s pricing policies to be designed to encourage lock-in.

            Wolfram is fantastic in the sense that code I wrote 20 years ago likely still works today, and providing a fantastic mix of computational paradigms in the same box — it is truly impressive.

            However, I am not actually convinced it is a company I would trust in the long term.

    • I have seen a number of examples on the Stack Exchange where users have shown that a built-in function can be handily outdone by user-made functions in both speed and accuracy, thus suggesting that insufficient development and/or QA time was spent on those functions. A user would reasonably think that something she or he is paying(!!!) for would have things that are, if not the best, very nearly the best, and yet here we are. Having code open for inspection would have helped avoid such awkwardness.

      Reply
      • I think embarrassing us with a better implementation and then reporting our failure to us as a bug is probably at least as effective a remedy as reading the code to understand what we did wrong (though that might add to our embarrassment!).

        (Sometimes we prioritize a method that is better overall but not the best in specific cases and lacks a good test for algorithm switching, but I expect that is not the explanation for most of the examples you are referring to.)

        Reply
  5. For me, open source is more about intellectual property rights than about architecture and control. From the outside looking in, it appears that you have made some decisions based on some bad assumptions about what open-source software has to be. I could argue your points one by one, but I think it’s more important to address the key factor here: the market is inherently distrustful of closed technology systems.

    I do believe the scope of what Wolfram is trying to do is massive and important, but given the barriers to using the technology, it will be hard to extend its market. Worse is still better.

    Reply
  6. Personally, I do not really have a problem that Mathematica is not Open Source. Your points are understandable and it is completely valid for a company to go down that road.

    However, there is one point which is a mystery to me: why you don’t follow a more open approach with a public issue tracker. One of the biggest problems of the Mathematica ecosystem is, IMHO, the huge amount of open bugs and glitches. But what is worse, there is no central, accessible place which keeps a list of all these problems and their current state. If there were, we would at least know if our problem is known and someone is working on it. Currently the information is spread around everywhere: the forum, Stack Exchange or, worst of all, individual contact with the support team (because then all the other users with the same problem do not have the information).

    This is not even a new idea. Many other companies apply this strategy successfully (e.g. Microsoft Visual Studio, JetBrains, …). I really think that this could also be a huge benefit for Mathematica and its users :-)

    Reply
    • Jan – I haven’t been involved in any of the decisions related to public bug tracking, so there may be issues I don’t know about, but my main concern would be if it were to actually slow down or constrain the conversation. If its contents had to be publicly accessible, would it take longer to avoid the insider jargon that our bugs database is full of, and would a developer feel unable to express the severity or inept cause of a bug, knowing that they might be quoted out of context? It does feel like the world is more mature in its attitude to bugs than when I started in the 90s.

      Then if you admitted that there were bugs, some people would instantly demand refunds or claim that it was a trick to force you to upgrade. Even today, in a Reddit discussion about this blog, someone was quoting a bug that was closed five years ago to discredit my comments.

      I suspect that the benefits would outweigh the costs, but there are costs. But as I said, it isn’t my field.

      Reply
  7. Hi there. I think this is a very well-thought-out argument. My issue with it is that it is contradicted by reality. I mean, there are numerous examples of open-source products that do just what you try to disprove possible… projects with sustained, long-lived, constant development, worldwide adoption, quality, complexity and so on.

    So while an interesting intellectual exercise, it fails to adhere to reality.

    Nevertheless, it got me thinking about why this is so… that would be an article I would also read.

    Reply
  8. Jon – Very interesting read with some great points. Also, very brave of you to touch on this subject!

    Reply
  9. Thank you for the great post Jon. I agree with Wolfram’s direction. Keep up the great work.

    Reply
  10. While some of the above commentary might be OK as justification for industry solutions, when it comes to academic or non-profit research the Wolfram Language is beginning to lag behind. The US federal funding agencies are highly unlikely to fund research for which the majority of potential developers and users cannot obtain free licenses. In fact, peer reviewers typically evaluate such proposals negatively, so developing in the Wolfram Language, however wonderful to code in, leads to an inherent funding disadvantage. Hence the success of Python and R for data science and federally funded research. If Wolfram intends to keep up with these developments, then rethinking the license model is essential: i.e. free licenses should be provided for academic and non-profit organizations. Otherwise, we cannot continue to develop software solutions in the Wolfram Language in these environments without funding, given that Wolfram itself does not have any funding mechanisms to encourage connections to non-profit/academic researchers. Keep in mind the payoff: students or researchers moving into the for-profit sectors are more likely to continue using Mathematica once they have used it. However, right now most data science and quantitative mathematical science teaching is shifting to using R and Python in the classroom, and this will have a severe detrimental effect on the Wolfram user base in the coming years.

    Reply
    • In academia our focus is on “free at the point of use” and we are making good progress towards this.

      There are a very large number of universities (at least in developed countries) where Mathematica is licensed on an unlimited basis. Unfortunately, not all universities pass this model on to their staff and students, instead internally charging cost-recovery fees per install, but we are working to encourage unrestricted access.

      We are also being much more open to whole-system arrangements. Perhaps the biggest is Egypt where every single student, academic and teacher in the country has free (to them) access to Mathematica (probably about 40 million people). I would like to see more such arrangements.

      With the free Wolfram Cloud access options (soon to be simplified) limited access is free to essentially everyone.

      The case needs to be made in the funding applications, and this seems like something we should help articulate.

      Reply
  11. Please do NOT o/s the Mathematica system. EVER. Instead, WRI should be more amenable to user feedback/suggestions, but in the end, still retain all decision rights. I’ve seen a quarter century of *horrible* battles between total idiots (individuals and entire groups) about how to “improve” software, and which strategic direction to go, which in the end made bad software worse. In a way, it’s understandable because different people have different expectations, ideas, and visions. No criticism here. I know what I write here won’t be a popular opinion, because “diversity” and “inclusiveness” are currently the mantras du jour, quality and consistency are not. Then there are the communication and decision-making / committer issues. Endless debates and fights, because people cannot agree, get stubborn and petulant, hasty commits, and the M system is not something that can easily change directions or “fix” things, because there are some things called “kernel integrity” and backwards-compatibility. “Decision by committee” is not accidentally used to criticize useless blahblah groups. As long as my suggestion is duly noted and not ignored (but can eventually be discarded as a bad suggestion), I’ll be satisfied. I don’t need to “participate”, and I don’t have the time to do it. But I need a powerful system, and PLEASE don’t water it down by too much participatory inclusiveness / diversity. People can make their very diverse suggestions known through the feedback/support mechanism. As long as people actually *deal* with it (and not just drop it), then there is diversity of thought. And we do *not* want participatory inclusiveness in the development of the M system. We need this to WORK WELL!

    Reply
    • I see enough developer debates to know that user feedback gets through and is considered.

      The impression I get, however, is that we don’t get that much of it and have to search for some of it in external forums. But my guess is that people feel that sending suggestions will be ignored, and probably the first-line response from support of “your suggestion has been forwarded to our development team” is assumed to be a brush-off.

      This contrasts with what I see at the annual Wolfram Technology Conference where it is hard to take the volume of feedback that one gets in face-to-face conversations.

      I don’t immediately know how to be more inviting when not face-to-face.

      Reply
    • WRI can have all decision rights about Mathematica even if Mathematica is open-source. It doesn’t have to listen to the community or take commits to the code.

      Reply
  12. I have used Mathematica for almost three decades in academia and during 22 years of system and product engineering, and it is still my favourite software tool, now in the new shape of Wolfram|One. My main point in this discussion is that if we value time as money with a reasonable conversion factor, then the Wolfram Language is as close as you can come to free software, because if you can solve it within Wolfram it will definitely take less time than fiddling around with Python, MATLAB or ROOT packages/libraries (F# is fun too, but lacking the built-in knowledge and algorithms).

    Then it is also very obvious that there is still a long way to go from solving an engineering challenge with Wolfram capabilities to generating “production code” for mass-produced embedded products. Maybe the new compiler technology in the next release is a silver lining? I keep my fingers crossed.

    Last (but maybe least) when could we have an interactive way to zoom into a 2D plot with the mouse only?

    Reply
  13. Well, from my own experience, once I start prototyping my ideas in Mathematica, there is limited freedom for me to choose my tech stack. It seems that if you have done one thing in MMA, then you should probably do all the things in MMA. It is like a honeypot: you enjoy the parts that benefit you, but you have to bear all the other parts that MMA is not good at. Thus, I am quite the opposite of relying on a huge system like Mathematica when choosing my toolchain for industrial usage. There is no silver bullet that unifies the design while keeping everything smooth and fast. As you said, MMA is multidisciplinary software, but not all users are interested in all the subjects that MMA deals with. They might just need a few new features and old bugs fixed in a specific area. It is not reasonable for them to deploy gigabytes of environment and wait for a major update every two or three years. A more efficient and economical way is to decouple things into modules, where a bottleneck can be quickly located and replaced by a better substitute. In the open-source world, users and developers enjoy the “combos” of different tech frameworks. One of the biggest reasons is that they can quickly upgrade any “weapon part” in their hands for a specific purpose, instead of sticking to a corpulent system and waiting for its evolution.

    Also, unified representation seems like a joke in Mathematica to me. The Wolfram Language is a dynamic, pattern-based, Lisp-like language; it is flexible and expressive. It supports dozens of programming paradigms, which enables developers to choose whatever they like to represent their data. We have seen encapsulation in object-oriented form, like `Graph`, `CellObject` and `NotebookObject`, which use `SetProperty` or `PropertyValue` as their interfaces. Meanwhile, we may encounter `SocketObject`, `InterpolatingFunction` and `LinearModel`, whose interfaces are bound to their `DownValues`. There is no official code style guideline at WRI; even packages maintained by the same team have different data representations, because too many developers have touched them over the thirty years, and they have different preferences in implementing the packages. Take the packages that are open source and easy to find in the MMA installation folder, for example: they apparently have completely different design patterns and are not as unified as you wrote in the blog. This is how the Wolfram Language impresses people: it allows difference and freedom for developers to interpret the same idea. So personally, I am not a big fan of centralized decisions and unified design.

    I am not asking WRI to open-source the entire MMA or other products. I am just suggesting that they should be more open and more developer-friendly. We do not care as much about your commercial operations as you do, but we do concentrate on technical details. I recall a lecture about parallel computing given at Wolfram U. The lecturer said that usually `Table` is faster than `Map`, but sometimes it is not, and that sometimes you may want to try packed arrays for better performance. He presented this in an uncertain way, just like the responses you get from Google or third-party communities such as StackExchange. It would be fascinating for WRI to publish a technical manual, beyond the documentation that mainly deals with the basic usage of the functions. As an engineer, I may be concerned with metalanguage details such as the memory model, evaluation strategies, operator precedence and interpreter implementation. There are commercial secrets behind the algorithms, but the core language parts are so basic that there is no need to hide them from developers.
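
    A rough sketch of the kind of experiment one is currently left to run oneself (my own reconstruction, not the lecturer’s actual code):

    ```
    (* compare an indexed Table with a Map over the same packed data;
       vectorized arithmetic on the packed array is typically fastest *)
    data = RandomReal[1, 10^6];
    Developer`PackedArrayQ[data]  (* True: RandomReal returns a packed array *)
    AbsoluteTiming[Table[data[[i]]^2, {i, Length[data]}];]
    AbsoluteTiming[Map[#^2 &, data];]
    AbsoluteTiming[data^2;]
    ```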

    Also, as I stated earlier, it is good to break things down into modules. The fact that the back end (kernel) and front end (notebook) are so deeply entangled is a real obstacle for developers who want to implement efficient packages. For example, the `Rasterize` function is implemented in the front end, so it takes several seconds for the kernel to send the data to the front end and get the result back. Likewise, calling the `Export` function on images will start a front end in the background. This kind of entanglement happens in many places in MMA. Even a pure pattern test (like head checking for _SocketObject) will automatically load the “zeromq” packages, which takes a few seconds, even if you do not need them. Modularization not only benefits developers but also the business: you could divide the packages into different bundles and sell and upgrade them separately.
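
    For instance, a minimal timing check of the round trip I mean (my own sketch; absolute numbers vary by machine):

    ```
    (* Rasterize is implemented in the front end, so even a trivial graphic
       pays the kernel <-> front end communication cost *)
    AbsoluteTiming[Rasterize[Plot[Sin[x], {x, 0, 10}]];]
    ```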

    Reply
    • I want to separate modularity as an implementation issue from modularity in the user experience. Good modularity in implementation is sensible, and modularity done poorly is bad. Some of the problems you describe are caused by modularity (not having Rasterize in the kernel, loading libraries when less common functionality is needed), but it hasn’t been done as well as it could be (loading ZMQ when it isn’t needed, and having modules like the FE that are too big). (There is a long-term project to re-engineer the FE, but it isn’t in the near future.)

      But the idea that you should break up the user experience into independent components is, I think, more problematic. It is already problematic enough to have to worry about whether your code uses version 11.3 features that won’t work if you send it to an 11.0 user. Making the required versions a list of individually versioned components would be unmanageable. Even if you think you are only touching three modules, because of the way that we integrate all of the functionality so that each part can make use of the rest, the result would probably be a dependency tree that ended up upgrading everything anyway.

      The idea that you don’t need all of the Wolfram Language is true, but breaking it up is problematic. Years ago we listened to users who said “I only want to use 10% of Mathematica, can’t I get a cut-down version that is cheaper?” I led the development of CalculationCenter, which addressed that demand. When we took it to market, lots of people said “If it just had feature X, I would buy it”, but unfortunately X was different for every customer. Modular purchase would also destroy sharing: at least “you need Mathematica or Player to run this” is easy to explain; having to list a set of toolboxes is not.

      On the gigabytes of download: there is a discussion going on about re-creating a “no documentation” build where docs would download on demand or be web-based only. That would cut out more than 50% of the installation size. Not for version 12, though.

      Reply
  14. I’d say there is little point in open-sourcing the kernel to the public, as there won’t be many people able to read/review its code without devoting all their time to it. But OTOH it would be a great idea to invite certain people outside WRI to access the code (like Microsoft’s MVP Source Licensing Program).

    And as an intermediate developer, without the need for kernel source access, what I really wish for is a friendlier package development environment: a stable and documented-in-detail package building and distribution system, an official Wolfram Language specification, and a promise about which functions offer long-term stability. And maybe a collaboration system to let WRI participate in third-party package development, so as to make sure there are no conflicts between important third-party packages and the company’s grand design. That way, WRI runs no risk of compromising its own design. And developers, without worrying about their work suddenly becoming incompatible with the latest WL, can really be encouraged and can enjoy participating in the establishment of an ecosystem.

    That said, I think WRI can and should keep the sole power to decide the future path of WL. Meanwhile, it would definitely be good to have a bridge/portal between WRI and the developers/users that is as open as possible. We would not just sit and wait for whatever ships from WRI; we would also be able to contribute during the decision-making stage, from *our* point of view.

    Lastly, I hope that regular users’ desires will get passed to WRI through the words of developers.

    Talk more with developers. That’s my two cents.

    Reply
    • For larger projects we do have a partnerships team who are supposed to help with some of those issues. The fact that I can’t immediately find their email address suggests we don’t publicize it enough. But mail info@wolfram.com and ask for the message to be forwarded to them, and it should get through.

      I believe that there is a plan to clean up and document the Paclet mechanism that we have long used for package delivery and updates, and create ways for you to deliver through that. Not sure of the schedule.

      We will imminently have a simplified mechanism that is meant for single-function delivery. It will initially only support free functions with source code. You can see it at https://resources.wolframcloud.com/FunctionRepository
      Right now it only contains a few hundred functions submitted by Wolfram people, but with version 12, it will be open to user submissions.
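
      Once it is open, usage should look roughly like this (a sketch; “MyFunction” is a hypothetical placeholder, not a real repository entry):

      ```
      (* fetch a function from the Function Repository by name, then apply it *)
      f = ResourceFunction["MyFunction"];
      f[1, 2]
      ```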

      Reply
  15. I have learned again and again that using software in which I can’t fix a problem that interferes with my work is simply not worth my time. Sorry, but I can’t trust it, and without the possibility of fixing or working around bugs, I’d rather look for alternatives.

    With this hostility to open source, I am today orphaning the support for Mathematica fonts in Debian-based distributions (Ubuntu, Devuan, Mint, …) that I had been maintaining. If someone is willing to pick it up, now is the time. Otherwise, I’ll file for removal shortly.

    Reply
    • I’m sorry you feel that this blog was hostile, that wasn’t my intent. As I said in the article, I think FOSS development is very good at some things, but I wanted to explain why I don’t think it would work for what we are trying to do.

      If there is information you are missing about the fonts, please just ask for it.

      Reply
  16. I’m a theoretical physicist, and I use computer algebra very intensively for calculations of multi-loop Feynman diagrams. I use Mathematica (and find it very powerful), and also REDUCE (which I have used since 1978; it is much more efficient than Mathematica in many cases), plus some other free systems. What I see when I compare Mathematica with various free software programs:
    1. Bugs in Mathematica are not fixed for many years after they have been reported. Users who report bugs get no feedback from the developers. Any free software project has a public Bugzilla, where bugs can be reported and discussed. Mathematica is closed, and users are helpless. Just try
    DiscreteRatio[Sin[x],x]
    This elementary bug was reported long ago and still is not fixed.
    2. New bugs are introduced all the time. A moderately complicated calculation (about 10 minutes, I think) which ran successfully in Mathematica 11.0 leads to a silent death of the Mathematica kernel in 11.3. As a result, my colleague (who had paid for the upgrade to 11.3!) had to downgrade to 11.0.
    3. In many cases, bug reports from users have the form: this integral was calculated correctly in version x, but produces a wrong result in version x+1. In any free software project I know of, developers would immediately do git bisect and find the offending commit. If, after this commit, some previously correct integral has become wrong, this commit is, at least, suspicious, and developers would try to fix it. In Mathematica, nobody fixes these bugs, and the integral continues to be wrong in versions x+2, x+3, … ad infinitum, for many years.

    Reply
    • Sorry, I meant DiscreteRatio[Sin[Pi*x],x] of course

      Reply
    • I’m not qualified to comment on the specific bug. But I will make one general point about the space of things like Integrate. While many regressions are just mistakes that should be corrected quickly once traced, sometimes they are casualties of a net improvement: in order to fix a more important problem (a more common integral, or a large class of cases), it may be that the sacrifice is breaking a less important or smaller class. Conversely, when fixes for Integrate bugs would cause a net deterioration elsewhere, we don’t include them. I think this is perhaps intrinsic to problems that are implemented at or close to the limit of human knowledge on a topic.

      Reply
      • I have similar concerns. Sometimes Mathematica works like a black box, without telling the user how certain things are implemented, and to use Mathematica really effectively and correctly in research programming, I need to know a little more about how some functions are implemented. Oftentimes, when the topic is esoteric enough, it is really hard to find qualified people to discuss it with. For example: https://community.wolfram.com/groups/-/m/t/1616638. It seems that sometimes there needs to be a one-on-one online or on-the-phone communication channel between the relevant Wolfram developer and the user.

        Reply
        • I understand that there is a plan to have more real-time support. But that only half answers your request as our support people are only expected to answer “regular” questions, and the really in-depth problems have to be escalated to the developers, who will not be in a public-facing role, so that they can spend most of their time “developing”.

          While community.wolfram.com and mathematica.stackexchange.com are both great, and quite a few of our developers contribute, we do not monitor them as a formal process. So you should send questions or bug reports to support@wolfram.com, where they are tracked. It doesn’t guarantee you the answer you want, but you shouldn’t get ignored.

          Reply
  17. Hello Jon,

    This blog post confirms several ideas and attitudes within Wolfram Research that I always suspected, and which are very concerning to me. I have invested very heavily into Mathematica/WL, including putting countless hours into the development and maintenance of several packages. Thus I feel that I need to respond.

    I am less interested in whether Mathematica should or shouldn’t be (F)OSS. What I am concerned about are several opinions that you expressed above. I do want to note, though, that you seem to conflate making the source open with how a project is governed. Yes, many or most FOSS projects are driven by community contributions, and I agree with many of the points you make about what’s wrong with that. I am also probably in the minority that would agree with your point 9, and I acknowledge the many innovations that came out of WRI. I, too, have pointed out many times in the past how community-driven projects do not seem capable of producing true innovation, and will mostly just copy an often inferior (but popular) concept. E.g. most competitor systems just copy MATLAB’s (IMO inferior) linear algebra approach. I also remember many conversations I had about the notebook workflow before Jupyter/IPython became popular. People just didn’t get it; the typical response I got was that it was unnecessary, redundant or even inferior, or “we already have report generation” (which is not the same thing). Look at how everyone is using Jupyter now, often without even knowing where the notebook idea originally came from!

    With that out of the way, let me take up some points you made (quotes are marked with three ” signs). Some are very concerning, as WRI’s approach has exactly the opposite practical effect to the one you claim, while open-source projects manage much better. This should not happen!

    While reading the below, please keep in mind that it is meant as constructive criticism that comes from an avid fan of Mathematica who hopes to be able to continue using it for a long time to come. I would not take the time to comment if I did not care.

    “””
    Your choice of computational tool is a serious investment. You will spend a lot of time learning the tool, and much of your future work will be built on top of it, as well as having to pay any license fees. In practice, it is likely to be a long-term decision, so it is important that you have confidence in the technology’s future.
    “””

    This is a very good point, and it is, unfortunately, precisely the reason why I often feel like I should jump ship as my over-reliance on Mathematica makes me vulnerable. I am on the brink of losing confidence in the technology’s future.

    “””
    Because open-source projects are directed by their contributors, there is a risk of hijacking by interest groups whose view of the future is not aligned with yours.
    “””

    Well, this is exactly what happened to me when I bet on Mathematica! With an open-source project, I at least have some influence, or I can contribute to mould it to my needs (this is not theoretical; I have done this!).

    Mathematica promised to provide usable graph theory and network analysis functionality. Then the development of Graph was simply abandoned. This is not admitted publicly, but it is as plain as day to someone like me who tries to use this functionality regularly. No new functionality of substance has been added since v10.0 (PlotThemes for graphs don’t count), and the countless very serious bugs are not getting fixed, or are fixed only very slowly. Responses relayed by support are either not helpful, or the requests are plainly refused. To be fair, 12.0 (which I had the opportunity to beta test) has fixed more practically relevant graph-related bugs than any previous version since 10.0, but the general functionality area is still in a very sorry state, with many more unaddressed issues than in other parts of WL.

    How do I deal with this? I started my own network analysis WL package (IGraph/M), originally as an interface to the igraph library, then with many more functions added independently.

    Can you convince me that I am not being extremely foolish for still making my network analysis work dependent on WL after Wolfram has left me high and dry? At this point, it is hard to bring rational arguments, even though I very much WANT to be convinced.

    Compare that with open source: igraph’s original authors practically no longer contribute. But I _can_ contribute: I can fix things, I can implement new algorithms, and they are all accepted. I try to eventually contribute everything I do back to the open-source igraph (which also has Python and R interfaces), and NOT just to my IGraph/M WL package, because I am no longer confident that Wolfram will support me in the future. I no longer have much confidence in the technology’s future.

    “””
    The theoretical safety net of access to source code can compound the problem by producing multiple forks of projects, so that it becomes harder to share your work as communities are divided between competing versions.
    “””

    It’s not a theoretical safety net; it’s a real one that I am relying on, as I explained above. And if igraph’s maintainers disappeared from the planet tomorrow, I could fork the project and would lose no prior work.

    What if Wolfram goes bankrupt? I can’t say the same.

    Also, community fragmentation from forks is not a common thing at all (though I am well aware that at least one FOSS project that WRI relies on was affected by this).

    “””
    Minimal dependencies (no collections of competing libraries from different sources with independent and shifting compatibility).
    “””

    This way of thinking is one of the major issues I have with Wolfram. Packages are not bad; they are good, and that goes both for structuring Mathematica itself into packages and for encouraging a healthy third-party package ecosystem.

    For some reason, Wolfram is almost hostile towards package developers. Functionality that would be critical for package development is not added, or is hidden in undocumented contexts like Internal. You refuse to use namespaces; everything must go into System, which is a surefire way to regularly break compatibility with existing packages.

    Not only that, but you are also shooting yourself in the foot: you talk a lot about design and the importance of good design, while making some extremely questionable choices that go against what everyone else has learned (for example, that namespaces, or “contexts”, are helpful).

    Did you know that the design of Graph is just broken and can’t be fixed? The API is such that it will never be possible to work conveniently with multigraphs, and it is plainly impossible to assign edge properties to them. This is _critical_ for practical network analysis (what network scientists do) and also important to certain branches of graph theory (what mathematicians do).
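
    To make this concrete, a minimal sketch (my illustration of the problem): with parallel edges, an edge can only be addressed by its endpoints, so the two copies cannot be told apart:

    ```
    (* a multigraph with two parallel edges between vertices 1 and 2 *)
    g = Graph[{1 <-> 2, 1 <-> 2, 2 <-> 3}];
    (* the property API identifies an edge by its endpoints alone, so there
       is no way to ask about one copy of 1 <-> 2 and not the other *)
    PropertyValue[{g, 1 <-> 2}, EdgeWeight]
    ```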

    I pointed this out many times through many channels (including support) and got exactly zero feedback from the developers. No wonder, what could they say?

    Maple, a system that embraces namespaces/packages, has already replaced its original graph package with a new, better one, while still providing access to the old package. They could do this easily, and can do it again in the future, because their system is structured into packages.

    Instead, Wolfram often comes up with a design without considering any practical use cases (this is what’s wrong with PURE top-down design), and then insists that it’s good instead of fixing it, despite concrete examples of critical tasks that can’t be accomplished with it.

    But how _could_ you fix it? I guess introducing a symbol named Graph2 would be too ugly, and since WL does not embrace namespaces, you cannot even introduce a package. Are you going to completely change Graph and break all backwards compatibility? Can you tell me how this could be solved, at least theoretically? This would be important for restoring my confidence in the future of the technology, and for making me feel secure about continuing to rely on it.

    “””
    While the commercial model does not guarantee protection from this issue, it does guarantee a single authoritative version of technology and it does motivate management to be led by decisions that benefit the majority of its users over the needs of specialist interests.
    “””

    You expressed several times that Mathematica wants to do everything in a unified way, and wants to do it alone, without 3rd party packages. **That’s not possible, you don’t have the resources!**

    Almost none of the functionality areas in Mathematica cover as many use cases as open source alternatives. It is rare to see something implemented in Mathematica that is not implemented in some Python- or R-accessible open source package. The reverse is frequent.

    “Over the needs of specialist interests” reads as if we cannot expect the functionality areas needed for our work to be developed. Most users are scientists. We always need specialized tools.

    Have you ever used a tool like Fiji? It’s an open-source tool for image analysis. It is terribly ugly, with countless inconveniences, but it covers vastly more functionality than Mathematica could ever hope to implement. It also receives the bleeding-edge methods that come out of the latest research. Several research groups at my institute contribute to it.

    Please do not try to implement everything yourself. Instead, be open to interoperability with other systems, be open to having lots of packages developed by the community, and make it possible for people to implement a Mathematica interface to the latest image processing method they have just created.

    That’s the only way to have access to what I need and still use Mathematica as the center of my workflow (which is what I would like to do).

    “””
    Consistency of design and compatibility of code and documents over 30 years.
    “””

    WL keeps getting worse about breaking compatibility with packages, due to indiscriminate pollution of the system namespace, as I pointed out above. New symbols in a different context don’t hurt. Lots of new symbols in the System context are guaranteed to break packages.

    Incompatible updates are also not documented (note that MATLAB always documents incompatible updates meticulously), and changes are made capriciously to dark corners of the language. There is no language specification, so we often need to spelunk (something done regularly in the community). With open-source projects, this is much less of an issue, as we can examine how they work if there are doubts, and if there’s no changelog, in the worst case there’s a commit history.

    Mathematica is not only NOT open source; there is also often pointless code hiding, such as the Locked/ReadProtected pair so typical of Graph-related internal functions. (Many other areas are luckily easily spelunkable.) Maple is also not FOSS, yet it explicitly makes its code readable.

    “””
    Our developers work for you, not just themselves

    Commercial funding models reverse this motivation
    “””

    It’s not as clear-cut with WL, as most customers are academics with a site license. The individual who complains to support can’t threaten to stop buying Mathematica.

    “””
    Accessible from other languages and technologies and through hundreds of protocols and data formats.
    “””

    Yes, this is EXTREMELY important, but it is just not true that Mathematica is interoperable. I even gave presentations at Wolfram events in France about what a great glue language Mathematica could make if this sort of use were embraced by WRI. I strongly believe in the value of having Mathematica at the center of one’s workflow while pulling in functionality from other systems as needed. This is why I co-authored MATLink, a MATLAB interface, and created LTemplate, which made it feasible for me to integrate bits of C++ code extremely quickly.

    But Mathematica’s interoperability is unfortunately not nearly as good as it should or could be. Example: RLink was only 90% finished, with that last 10% being critical for practical usability. Please try to use it for real-world tasks and you will see this. I originally tried to interface with igraph through its R interface, but RLink is in such a bad state that it does not even work out of the box anymore. I have a (surprisingly popular!) blog post that collects workarounds.

    J/Link does not support important Java 8 features (I bumped into this when trying to interface with Fiji).

    MathLink/WSTP’s documentation is severely lacking (I won’t link posts here so as not to trigger your blog system’s spam filter, but contact me for examples if you need).

    The renaming of the MathLink API functions from ML to WS, *and* breaking compatibility in the process, was an extremely developer-hostile move that benefitted none of your users; it only caused pain. What was the point? Mozilla still uses NS-prefixed functions even though it is no longer Netscape. Apple uses NS-prefixed functions even though macOS is not NeXTSTEP. API function naming should have NOTHING to do with marketing.

    It took a really long time to get some Python interoperability (MATLAB had it much sooner, and both its Python and Java interfaces are MUCH better; please try them). Sadly, 11.3 is still not usable for the kinds of tasks a typical academic WL user would want it for. This brings me to the point of top-down design. You talk a lot about the importance of good design and unified design, and I agree with that. But this theoretical, idealistic, top-down design approach will too often produce literally unusable results.

    Consider a basic task: generate a matrix in Mathematica and invert it using a Python library, or compute its (possibly complex) eigenvalues. If this can be done easily and efficiently, a lot of practical tasks can be done as well. But it can’t. It is both complicated and terribly slow, because there is no structured data transfer to Python (the matrix is literally sent as Python code!!).
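
    Roughly what one has to do today (a sketch under my reading of the 11.3-era ExternalEvaluate; the point is that the matrix travels as program text):

    ```
    session = StartExternalSession["Python"];
    ExternalEvaluate[session, "import numpy"];
    m = RandomReal[1, {3, 3}];
    (* the matrix is spliced into the Python source as text *)
    ExternalEvaluate[session,
      "numpy.linalg.inv(" <> ExportString[m, "PythonExpression"] <> ").tolist()"]
    ```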

    Meanwhile, with one of the first prototypes of MATLink, we could simply do MFunction[“eig”][matrix]. That’s it. This is because we did not only follow idealistic design goals, but always kept a collection of practical tasks in mind.

    A glaring problem with this design method is ExternalEvaluate’s pointless generality. Why do you try to make a single API interoperate with every other system, including Python, JS, and even controlling web browsers? It won’t work well with any of them.

    Case in point: ExternalFunction[] doesn’t do keyword arguments, because not every target language has them. Keyword-argument support (as well as the still unreleased ExternalFunction itself) would have been present in the very first version of this functionality if development and design were driven by real, practical use cases.

    I am frequently frustrated with WL trying to be over-general, trying to be able to at least *express* all problems (if not solve them), which then leads to it not being able to solve the *typical* and *practical* problems that we need to deal with.

    This is coming from someone who uses WL daily, not just for fun but for getting real work done. An example is regions. The functionality tries to be so broad, handling meshes, symbolic regions, even parametrized regions with the same API, that it ends up being full of holes. Filling these holes is very hard because there are too many; solving some cases would be a research problem in its own right.

    The result? I often can’t predict what I should expect Mathematica to be able to do, or how to express a problem so that it will solve it with usable performance. Which Graphics3D primitives are discretizable? To experience the difficulties first-hand, see e.g. my SE question about how to crop a Voronoi diagram to an arbitrary convex shape (typically a circle). The answers there make plain how unintuitive it is to come up with the solution that works in practice (performance-wise). They also exposed multiple bugs.

    I feel much more comfortable with the more limited MeshRegion, but it has similar issues. Compare with ElementMesh, which is even more limited, but also better at the limited task it does.

    You want one unified API, no duplication of functionality and no structuring into packages. This is a perfect example of what’s wrong with that. The most general API will handle no task *well*. If we specialize on single tasks, we need multiple packages, each for its own task.

    Community-developed projects are ugly, you are completely right about this. But they also get the task done, precisely because their creation was motivated by a practical task.

    WL’s design, on the other hand, is often motivated by theoretical considerations. Trying to be general and interoperable with the rest of the language is absolutely a very good thing, and a strength of WL. But this should not be done at the cost of usability for practical tasks. Always keep your eye on the task! It should be the main motivation. An ugly and inconvenient system that can do the job always trumps the beautiful one that cannot.

    Another example: SemanticImport. It’s pretty, but so slow that in practice it’s unusable. Today we don’t work with 10,000-row tables; we work with million-row tables. The same applies to Dataset: doing things manually with packed arrays often _works_, while Dataset is just too slow and too memory-hungry. I am sure that Dataset could use packed arrays internally instead of countless associations. Why doesn’t it? I am also sure that Missing[], Indeterminate and Infinity could be stored in packed arrays (R does it, and manages to distinguish between NA = Missing[] and NaN = Indeterminate). There are Python/pandas blog posts about handling a billion rows. Will this ever be possible with WL, or does the over-general design get in the way?
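
    The overhead is easy to see (my own minimal sketch; exact numbers vary by version):

    ```
    (* one numeric column stored as a packed array versus as per-row
       associations, roughly what a Dataset column costs *)
    vals = RandomReal[1, 10^6];
    ByteCount[vals]                    (* about 8 MB, packed *)
    ByteCount[<|"x" -> #|> & /@ vals]  (* far larger *)
    ```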

    It seems that the real-world problem sizes that your real users face have not been kept in mind when designing a lot of this functionality.

    To wrap up, I’ll repeat that all this is not a response to whether WL should be FOSS. It’s a response to opinions expressed while you argued why it shouldn’t be FOSS, and especially about the design philosophy of Mathematica.

    And finally, while this is not a direct response, it relates closely to what I said above:

    Consider who your real (academic) users are. They are not the ageing PIs who were convinced by WRI’s marketing guy to buy a license but only *play* with it (apologies to the many exceptions!). They are the grad students and postdocs who do the daily grunt work of data analysis, figure creation, modelling, etc. Please consult them and listen to them. They are also the future, and sadly my impression is that most of them no longer consider, or do not even know, Mathematica. This is another reason why I have lost confidence in the future of the technology. Consider the average age of participants at WRI events, and look to the future. The Wolfram Summer School is no solution; it’s an exclusive event that most just can’t afford (hotel, travel to Boston?).

    This comment is already too long and there’s no room to add a specific example for every claim (though I provided many already), and I’m not sure adding links is possible here. Feel free to contact me, and I’ll send them.

    Reply
    • This is a long comment which raises some important issues, so I want to give you a considered answer (and it is too late here in the UK to start now). I will respond properly later, but just wanted to note that I have read your post.

      Reply
    • Szabolcs,

      this is a !!!crucial!!! contribution that expresses the feelings of many of my colleagues and me.

      WL + Mathematica is clearly a unique tool, but it is not suitable for solving current scientific and commercial computing tasks, because it does not actually provide effective tools to solve them. This is the main reason why tools other than WL are used in practice.

      Unfortunately, I’m afraid that nothing you mention here will affect the future development of WL. Previous developments confirm this concern.

      Reply
    • In order to be concise, I am going to over-summarize your comments as:
      a) Abandoned support for Graph
      b) No support for developers
      c) Centralized design is bad because we can’t do it all
      d) Specific feature problems.

      I am going to mostly not respond to d) but have passed them to the people responsible for those functions.

      a) I see that you brought up “abandoned support for Graph” in Community a year ago (https://community.wolfram.com/groups/-/m/t/1327325), and Charles Pooh responded with the stats on bug fixes in graphs and networks: WL 10.0: 62, WL 11.0: 27, WL 11.3: 97. I just did a search in the bug database and see that another 54 graph bugs reported in 11.3 or earlier are fixed in WL 12. I re-ran the Voronoi/disk intersection code that you said was too slow, and it is 10x faster in 12 than in 11.3. Plus there are some new features and modernization in the graph infrastructure in 12 that I am not going to reveal here, but they should be public soon. While it may not be everything that you want, it is certainly not true that support has been abandoned.

      b) Let me start by saying something that perhaps you don’t want to hear. For every “serious developer” like you, there are 10-20 “can code” people and 50+ “can compute” people. Sometimes their needs conflict, and namespaces are probably an example of this. I think it is no exaggeration to say that back when we had more standard packages, I had to explain more than 100 times: “The reason why BarChart doesn’t work is that you first have to evaluate <<Graphics`Graphics` and also, for reasons that you won’t understand, you now have to do Remove[BarChart] because YOU got it wrong.” Don’t underestimate how hard some people find such details and, more often, how disinterested they are in learning them.

      Having said that, it is not my attitude that developers are unimportant. Developers’ importance is disproportionate to their numbers, because they are the multipliers who help the other two groups. We want to support that, whether it is community open-source WL development or commercial. If you look at all the connectivity functionality (including the parts you have criticized, but also all the HTTP, crypto and authentication support, the new database connectivity coming in 12, etc.), these are all developer features, not aimed at the other groups. There is work underway to make code distribution easier (a small part of it is imminent, more later). There is a project to simplify, document and expose DocuTools to make it easier to add documentation to the help system. And the new compiler technology is very much for power users. I appreciate that you have some valid complaints, but I think you are wrong to infer that Wolfram is uninterested in developers.

      c) Which leads me to the question of “Does Wolfram try to do it all on its own?”, which I think is the central issue. Obviously, from all I have said, I believe in the benefits of centralized design, but you are, of course, completely right to say that we lack the resources to do everything. I don’t have a clean answer to that. Our job is to build the platform; it is up to users of all levels (or to my consulting team!) to do the specialization. The challenge is defining the boundary between platform and specialization. With that distinction, if it were definable, our audience right now is entirely developers, but the definition has to cover people like you right down to people who write one-liners. It is my belief that while a low bar for that distinction scales quickly in the short term (as we have seen for R and are seeing for Python), our relatively high bar scales well for the long term.

      Of course we make mistakes, and you are right to complain when you see them, but I think it would be wrong to conclude that there is a greater meaning or plan to them.

      I don't think the blog comments area was designed for such involved discussions, so I am happy to continue this over email.

      Reply
      • Jon,

        I for one wouldn’t mind seeing a continuation of your response to Szabolcs here in the forum. Certainly, if I were him, I wouldn’t be much interested in continuing the conversation over email. He sounds quite at the end of his tether vis-à-vis WL, and if I were him, I’d be interested in public discussions where people can be quoted and held to account. I confess that I too am pretty pessimistic about the future of Mathematica/WL.

        On the other hand, you might fairly say that he is somewhat hijacking the thread, taking it off into slightly tangential topics (which I think he has recognized).

        Perhaps you can do another blog on “Wolfram: the Next Ten Years” where you show sales trends, market shares, median ages of new users – that sort of stuff – where users can respond with this sort of complaint.

        B

        Reply
        • It just felt like too many directions (some of which I don’t know the answers to) for a single threaded conversation.

          I think predicting the future is somewhat harder than commenting on the present and past. Market shares and new-user ages, in particular, are hard to determine, let alone predict. I am not going to go into commercial details, but I will comment that our bottom line is very healthy. I think 2018 was our highest-revenue year ever, and certainly in my (EMEA) region, where I see the figures in detail, it was the highest by a significant margin.

          Reply
  18. As a hobbyist Mathematica user, I am not very concerned about the closed source code directly. As pointed out, even if it were open, I would never look into it. However, I also agree with some of the commenters above that closed source limits the attention that WL gets from the wider community. A lot of the new and “shiny” things, such as TensorFlow, OpenCV and the like, work much better with Python than with WL. Also, there are so many more code snippets in Python that I can copy, paste and use directly than there are in WL. On the other hand, I also see the strengths of the Wolfram proposition. I work as a patent attorney, and when I come home and want to play around for an hour to explore the Einstein equations or the quantum mechanical harmonic oscillator or similar things, I cannot be bothered to figure out again on which port and with which password the Jupyter server runs, or which Python environment I should switch to in order to use a particular library.

    As a side note: Wolfram releases Mathematica for free for the Raspberry Pi. This system is (nearly) fully functional, but even as a hobbyist you run fairly quickly into the hardware limitations of the little SoC. What if there were a somewhat limited free version of Mathematica available for every platform?

    Reply
    • I think this is a great suggestion that should be seriously considered by WRI: a brilliant language without new blood has no future.
      Given the dwindling number of new Mathematica users, I am hesitant to put more of my time into this language.

      Reply
      • We probably have more new young users than ever, but the challenge is to have them see beyond the context where they were introduced (often in calculus classes).

        The key problem with a free limited version is: what would the limitation look like? It mustn’t give away so much that it destroys our income, but it also must not be so crippled that it creates some of the problems I have argued against in this blog (loss of integrated capability, poor shareability because of a fractured standard language, etc.).

        One decision that we made early (it predates me) was that our student version would not be crippled, as some of our competitors’ were. The thinking was: just because you are a student doesn’t mean you only want to do toy computations. That has served us well.

        In a sense, the Raspberry Pi version allows us not to be responsible for that, because the platform imposes the limitation. The free cloud accounts create that artificially, but the jury is out on that because we made it too complex (about to be fixed).

        One thing that has been considered several times is the “M Language” project, as it was known in the past: what would it look like to create a kernel that was just the “language”, with no fancy algorithms, but that would do everything an ordinary language would do? No one has come up with a definition that makes sense, though. I understand that this was Sergey Brin’s internship project when he was at Wolfram (before he went off and set up Google).

        Reply
  19. I think that much greater openness towards its users would greatly help WRI, but at the same time I am afraid that this openness is exactly what the people managing WRI don’t really want. Otherwise, I can’t explain why WRI does not respond to some bug reports, and why it constantly adds new, inefficient and unreliable functionality to WL.

    Reply
  20. I’ve been a Mathematica user since version 1, and a programmer (in the sciences) since the time that there was no such thing as ‘commercial’ software.

    I did a major in-house project in parallel with my use of Mathematica. I would do the R&D in Mathematica and implement the system (a real-time process control system) in C. With recent developments in Mathematica, I realized that I could do a whole lot more using just the Wolfram Language and a commercial database.

    My main concern in the early days (versions 1-4 or so) was that Wolfram Research would go out of business the way that so many other technical software companies did (and do). Fortunately, Wolfram Research did not abandon the Mac as so many of the companies selling competing CAS systems did.

    For me, Mathematica was a lifesaver. I could never have justified hiring enough programmers to do all the coding for which I relied on Mathematica.

    I have found my dealings with Wolfram Research to be open. I can usually get explanations of what is going on in the software, and tech support is better than at any other software company I deal with. There are some quirks and omissions, of course. It would be great if there were a semi-major release that added no new features but fixed outstanding bugs.

    As for accessibility, a casual user can use Wolfram language in a browser for free, and this might satisfy the needs of a lot of people — or at least let them see whether the software is for them.

    I have used open-source code in the past. I have had to debug the standard libraries provided by my C IDE vendor. I have had to create my own graphics primitives for a language/OS that had nothing. Using Mathematica is **much** better.

    It comes down to what you want to spend your time on. Mathematica lets me spend most of my time exploring ideas. That is what is most important to me.

    Reply
  21. Jon, why the need to share your thoughts on FOSS at this time?
    It seems like WRI has been hit by an exodus of some key kernel developers.

    >Code that is on average seven times shorter than Python, six times shorter than Java, three times shorter than R.

    Gotcha! If I hide 25 internal functions behind one FUNCTION in my programming language (MMA), then of course it is going to be a bit shorter than something written raw in Python, Java or R. Great choice of example.

    Excuse me, but it is wishful thinking that the “notebook” was invented by Wolfram.
    Maple and competing predecessors already had similar things implemented.

    Reply
    • Factual corrections first:
      I am not aware of any key developers leaving, and the claim of an exodus is certainly false.
      You are also wrong about Maple having notebooks first: their “Worksheet” functionality was first added in Maple 4.2 in 1992, four years after Mathematica.

      The “Wolfram Language code is only short because it is all hidden in libraries” argument is a more interesting comment that appears often in forums. The idea that using higher-level functionality is somehow cheating and should be avoided is part of the “real programmers program in binary” way of thinking. I agree that Wolfram Language code is shorter only because of the higher-level standard functionality. That is exactly what I argued in point 2 of the blog. It is a good thing.
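
      To illustrate the point, here is one example of my own (any number of built-in functions would do): a task that is a single call in the Wolfram Language precisely because the high-level functionality is standard:

      ```
      (* built-in, pre-trained image classification in one call;
         the equivalent in a lower-level language is a library hunt *)
      ImageIdentify[ExampleData[{"TestImage", "House"}]]
      ```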

      Reply
  22. This is a fairly reasonable article. When using free or open source software, it is rare that I see any of the direct concrete advantages of openness — indeed, when I’m looking at source code of a library, it’s usually only because the documentation sucks and I have no other option. (I have sometimes hit bugs in free software, dug into the code, fixed them locally, and later gotten patches merged — so the free software dream can be real — but I’m probably a weird outlier in that regard.)

    The main problem I’ve run into with closed-source Wolfram products is with the black-box nature of some of the functions, most notably those for machine learning.

    Sometimes nobody cares what software I use, but they want me to be able to describe, in a language-independent reproducible way, how exactly I processed my data. This can be difficult when working with Mathematica; sometimes there’s an undocumented way to crack things open and get a better idea of what’s going on, and sometimes there isn’t.

    I get the sense that WRI sees the automatic parameter tuning in these functions as a competitive advantage, and is deliberately vague about how they work for that reason. That’s probably rational in some cases: I could see why it might be good business to keep NDSolve details secret from competitors.

    But there are many, many cases where the algorithms used are pretty standard and could be much better documented. I think the goal (maybe not always achievable) should be reproducibility by an expert not using Mathematica except in cases where there really is a significantly novel algorithm at work. This could be accomplished by making it easy to see the source code of specific functions, by letting the user see chosen parameters, or simply by explaining methods in more detail in the documentation.

    As a concrete example, take DimensionReduction. All the logic involved in automatically choosing a method and so on, sure, maybe that’s “secret sauce”.

    But if I ask it to autoencode a bunch of images, I think it should be possible to at least find out after the fact what architecture was chosen for the encoder and decoder, how long they were trained for, what preprocessing steps were used, etc. (In this specific case, there is actually an undocumented way to see most of this, but there probably should be a stable, documented interface instead.)

    I trust Mathematica (otherwise I wouldn’t be using it), but someone who has no reason to trust me or Mathematica might want to know exactly how the reduced features were generated.

    In general I think there is value for the users in making the results they produce with Wolfram software more reproducible and therefore more defensible.

    Reply
  23. Mr McLoone and Mr Horvát said important and relevant things, but on the other hand both ‘beat about the bush’, from inside and from outside. One good thing about Mr S Wolfram is his clarity, and he wrote

    “I’ve often said—in a kind of homage to 2001—that my favorite personal aspiration is to build “alien artifacts”: things that are recognizable once they’re built, but which nobody particularly expected would exist or be possible. I like to think that Wolfram|Alpha is some kind of example—as is what the Wolfram Language has become. And in a sense so have my efforts been in exploring the computational universe.”

    (from his blog entry “Learning about the Future from 2001: A Space Odyssey, Fifty Years Later”).

    What does it mean?
    (1) alien artefacts are clearly more important than software maintenance
    (2) the freedom of WRI and of Mr S Wolfram is a very high value
    (3) things become usable over time (in the beginning there were no numerical packages, no sparse arrays, no adequate graphics), but all that appeared; it is more than difficult, even more than an alien artefact, so to speak, to do everything right on the first shot
    (4) WRI has a much stronger focus on research (that’s in the name WRI already) than on customer care or usability (the Entity and Quantity frameworks are examples of interesting stuff that puzzles people, Dataset …); here a bit of continuous deployment and continuous integration could help; given that automated testing is done, it seems that internal complexity is so high that at some point in time the _decision_ to release is made, whether lots of bugs remain or not; an example of this is the many complaints about Integrate
    (5) comparison with Maple is not adequate, because WRI will never go public (because of freedom and the artefact approach), while Maple belongs to Cybernet Systems Co. Ltd. and has investors
    (6) comparison with MATLAB is also not adequate, because the Wolfram Language aims to do much more; by the conjectured conservation law of difficulties, there must then also be more failures
    (7) I stopped the practice of uninstalling Mathematica n before installing Mathematica n+1: the previous version keeps running, and the next version is installed on the other machine, where version n-1 is uninstalled
    (8) the problem with being non-standardized and not community-friendly is that you always need a mastermind like Mr S Wolfram pushing things into unexpected directions; otherwise the daily problems will kill WRI: think of Digital Equipment: great machines, great editors, enhanced Fortran; but UNIX won the battle, and Digital Equipment people ended up working to make Windows into an operating system.

    All the best
    U. Krause.

    Reply
  24. OK, so I write code across multiple computation domains. I use languages as utilities, based on the domain-specific problem I face. About 6 months ago I decided that I wanted to create a new computing environment. In a very MIT-ish kind of way, I was trained to pick up any language and write code that works. For this I had a handful of requirements that I felt were best served by Rust, Pharo (a Smalltalk variant) or the Wolfram Language. After much evaluation, I ended up choosing WL. Why? Among the many reasons were the symbolic aspects and the correspondingly small amount of code it takes to write just about anything.

    So far I’ve been writing small projects in WL every few weeks, and I can say I’m satisfied. My goal in this comment isn’t to convince anyone to use WL, but I’ll say this: if you feel that all the incredible computing power available should be used to do more than run Microsoft Office, that using vi and emacs was only a necessity when they were created, and that modeling and simulations should not be the only things called “computing”, then give WL a look.

    Now to the Wolfram team, please improve on these things:

    1. A mobile version of all your online properties.
    2. Search for the online community. Come on guys, this is truly embarrassing.
    3. Either write documentation from a user’s perspective or let users contribute documentation. I’m speaking specifically about the examples in the code.
    4. Mathematica Desktop and Wolfram Development Platform online are not the same. Either say so and stop pitching them as if they were, or rewrite all the clients (iOS, Android, web browser, etc.) to be exact replicas of Mathematica Desktop. That whole “doesn’t run on Cloud” message sucks and defeats the Wolfram Language and Stephen’s vision of “computation everywhere.”
    5. It’s clear that Wolfram the company is a vehicle for Stephen to do his research and that, as a side effect, those who align with his vision can become “customers.” Cool. I get it. But I also know that some of those “customers” at banks, labs and three-letter agencies would love to have proper support. I don’t mean customer service, even though that can always improve; I mean support for the enthusiasts in the WL community. And no, the dev conference is not the answer; that’s expensive and limited. What I want is support for local groups, in meetups and non-Mathematica-sponsored hackathons, etc.
    6. This is related to the last item. I believe it took about 250 years for the printing press and movable type to be used outside the Church, to print things beyond Bibles, and to reach the time of “Common Sense.” No other current company comes as close to actually marketing and selling computation as yours. We can’t wait 100 years, or even 50, for this to become mainstream. And by mainstream I mean that I’d like to see all computer users do their communication with small snippets of code/computation on a daily basis. That could happen now, but that’s not the cultural definition; it was, however, the original goal in the ’60s and early ’70s. Carry the torch. Have some backbone and call computing what it is and what it’s not. There’s a market that feels there’s something inherently “wrong” with $1000 iPhones that don’t improve our lives in any way whatsoever. Market computation, please.
    7. This level of transparency in this post is unprecedented for Wolfram. Do more of it.

    Thanks

    Reply
    • 1) Yes, but there are other compeiting priorities for the cloud team such as 4
      2) Could you elaborate, I clicked on “Support & learning ” on our front page and see a link to it. Google Mathematica online community and it is first hit, and I think the community is a pretty good site. Was it one of these issues, or have I missed the point?
      3) I think this has been discussed, but I was not part of the conversation, so have nothing useful to say.
      4) There is a big re-organization planned fairly soon that will simplify the whole presentation of cloud interfaces and simplify the choices. We all agree that the current offering is confusing. The “not in the cloud” issues fall into two categories – things for which the cloud archtecture makes very challenging. eg client reasons like CurrentImage[] or for secruity reasons like RunProcess. Then there are things that we simply haven’t supported yet. Those should all go away in time.
      5) I don’t speak for Stephen, but while Mathematica was created to let him do physics, and clearly a lot of the cellular automata functionality is related to his interests, I think his research these days is mostly about how to make the Wolfram Language better.
      6) I agree about making computation mainstream, though I have nothing against more frivolous uses of CPU power.
      7) Thanks
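
      To illustrate the distinction in #4, here is a minimal sketch of how one might guard hardware-dependent code today (just my illustration, not official guidance; the fallback image is arbitrary, and $CloudEvaluation is the built-in flag that is True when code runs in the Wolfram Cloud):

      (* use the local camera on desktop; fall back to a bundled test image in the cloud *)
      img = If[TrueQ[$CloudEvaluation],
         ExampleData[{"TestImage", "Mandrill"}], (* any bundled image would do *)
         CurrentImage[]                          (* needs local camera hardware *)
        ];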

      Reply
      • Thanks, Jon, for taking the time to reply. My replies are below:

        On #1
        Understood. Thanks.

        On #2
        If you visit https://community.wolfram.com and search, the results come from the Documentation and other Wolfram resources, but not from the community site that you’re actually on.

        On #3
        In your post you wrote that one of the benefits is “Consistent across different kinds of computation (one syntax, consistent documentation, common data types that work across many functions, etc.).” That is perfectly understandable and clearly a benefit. But at times it feels like the examples in the official documentation are based on a template filled in by someone in a rush who is just trying to get it over with and move on to the next one. Is there any harm in having examples submitted for review that can later be included in the official documentation? Note that I’m not asking you to open-source the documentation, but to create a channel through which users can contribute and make it better. Completely relevant to the conversation.

        On #4
        Thank you.

        On #6
        I have a serious problem with it. Why? Well, if you actually believe that as a species we have existential issues for the current and next generation – what is this, the Cold War? should we hide under the table? – we shouldn’t have such a large portion of the population belonging to the “flat Earth” movement when a simple argument backed by computation on their vastly capable smartphones would most likely do. That’s an easy one, but if you look around, you’ll see that dialogue, discussion and agreement could be incredibly enhanced by computation. Google could do this, but they’re too busy creating AGI. Amazon and Apple don’t really care about the philosophy behind computation. And Microsoft, the prime candidate, is too busy deciphering the cloud. So who else could do this but the company whose founder understands it and lives it daily? Did I forget to say that he wrote a book solely based on this foundation? So you might not care, but I hope you reconsider, even if this is off-topic.

        Thanks

        Reply
        • #2: got it, and agree. Especially as you can achieve it in Google by adding “site:community.wolfram.com”.

          #3 is a sensible suggestion; I will share it around.

          #6 Feels like Monty Python already addressed this one:

          Chairman: Item six on the agenda, the Meaning of Life. Now Harry, you’ve had some thoughts on this.

          Harry: That’s right, yeah. I’ve had a team working on this over the past few weeks, and what we’ve come up with can be reduced to two fundamental concepts. One, people are not wearing enough hats. Two, matter is energy. In the Universe there are many energy fields which we cannot normally perceive. Some energies have a spiritual source which act upon a person’s soul. However, this soul does not exist ab initio as orthodox Christianity teaches; it has to be brought into existence by a process of guided self-observation. However, this is rarely achieved owing to man’s unique ability to be distracted from spiritual matters by everyday trivia.
          [Pause.]

          Max: What was that about hats again?

          Reply
  25. The simple fact is that present-day Mathematica would not exist had it been developed open source.
    Just compare it with Sage or any of the other open-source programs in this application area.

    Reply
  26. Sorry if you’ve already answered this question, but I don’t see it addressed here or in the comments.

    What would be your issues with an open-source closed-development model? Seems like all these reasons are against open development rather than opening the source code itself.

    Reply
    • That’s a fair summary.

      There are some arguments against that, but they are relatively minor and more practical than fundamental. While I suspect that there would be those concerned about sharing our “secrets” with competitors, that doesn’t bother me. My concern is really over the resources required to make the shift useful. There are practical steps, such as re-organizing our repos and processes; work to do, such as cleaning up internal documentation for public consumption; and some culture shifts (would it affect the frankness of bug reports and code comments if developers were concerned about them being public and being quoted out of context?).

      None of that is a big deal, but is it worth it for the relatively tiny number of people who will actually read the source and gain benefit from it? For most people the benefit is the reassurance that they COULD look at it if they wanted to. That is a marketing benefit rather than a practical one, though perhaps a big one.

      As it happens, you can read the source code for a large fraction of Mathematica, though not in a very friendly UI. While we don’t document it, it is not secret, but few people seem to care.
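
      For instance, from within Mathematica itself you can do something like the following (a minimal sketch of my own; which file you pick to read is arbitrary):

      (* list the readable .m source files that ship with the installation *)
      srcFiles = FileNames["*.m", $InstallationDirectory, Infinity];
      (* dump one file's raw contents into the notebook to read the code *)
      FilePrint[First[srcFiles]]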

      If you are talking about closed development but free, then more of my arguments apply. Nothing is really free; the question is, what is the right business model to fund it?

      Reply
  27. Your answers seem to be (and that’s fair) a reflection of whom you report to, and you have drunk plenty of the Wolfram Kool-Aid.

    After having been a heavy user of Wolfram Mathematica (call it WM below) for 20 years (early ’90s to late 2000s), and over the past 3 years having become a heavy R-ecosystem user, here are my two cents on each of your points:

    1) A coherent vision requires centralized design: alas, an incredibly powerful, imperfectly coherent vision (e.g. the R ecosystem, or the “tidyverse,” a collection of some 20-odd packages for all things data) can emerge from a non-centralized design. What has emerged is this incredible willingness of folks from all over the world to share and work together via social media and a few live conferences. Which is better: the agile emergence and exchange of a vast set of simple tools, all moderately incoherent, or a centralized, unified “language of everything” that moves ever so slowly, trying to cast all areas of knowledge under a single “symbolic layer” no one will use?

    2) High-level languages need more design than low-level languages: who needs higher-level languages (by which you mean ones that can universally capture the semantics of computation in any domain, from chemistry to blockchain to ML)? What has been the effective adoption of WM’s “semantic constructs” by the world at large? I bet it’s minimal. To draw a comparison with Egyptian hieroglyphs: though beautiful and internally consistent, they were much too complex to be understood outside Egypt, and they were eventually defeated by a poorer but easier-to-understand, decentralized system (the alphabet).

    3) You need multidisciplinary teams to unify disparate fields: who said such folks need to be regimented by a single person in Urbana-Champaign? R and Python prove these teams can self-assemble via Twitter, GitHub, blogs, package repositories, bug fixes, webinars, etc. The web has destroyed these assumptions. Within the R community (made up of, say, hundreds of thousands of users), there have emerged 100–1,000 “power,” highly skilled package builders who operate at a middle layer between total centralization and crowd decentralization. With the emergence of tools like Twitter, blogs and GitHub, this is a much better productivity compromise than a single Stephen Wolfram commanding paid coders in Urbana-Champaign. On a side note: how do you get *anyone* interested in investing precious formative years implementing features in a language no one seriously uses for software development?

    4) Hard cases and boring stuff need to get done too: R currently has 17k packages, each with tens to hundreds of functions. Do you really think the coverage does not include “hard cases”? True, WM’s sophisticated graphical functionality will most definitely never be reproduced, but that does not stop the open-source community from moving forward at 1,000x the development pace achievable in a centralized manner.

    5) Crowd-sourced decisions can be bad for you: they can, but (1) they are hardly “crowd-sourced,” as smaller groups self-assemble around specific needs and repositories, with their own bug-fixing workflows, many times under the vision of a few highly skilled power developers; and (2) it’s OK if the complex system of distributed software development advances imperfectly, with bugs and design flaws, because it is built around collaborative tools that allow it to be self-correcting: most bad decisions are quickly flagged and become good over time. When WM fights this trend, it is fighting Darwin.

    6) Our developers work for you, not just themselves: open-source devs work for the community! How many times, while developing in R, have I submitted a feature request or pointed out a bug that was fixed in an entirely different country and made available to the community the same day! Can WL do that?

    7) Unified computation requires unified design: does the average user care about “unified” computation? The Egyptians cared about perfect hieroglyphs, but the other peoples of the Middle East didn’t! This is perhaps the CEO’s most fatal flaw of the last 10 years. He should have released WM to the open-source community back then and changed his business model; now it’s too late. It is no wonder the system is terribly late and awkward in many areas – package management, data wrangling, database and file integration, etc. – as if it’s been “inbred” by a single man’s vision from a different era of computation.

    8) Unified representation requires unified design: the (Lisp-like) design is terribly outdated and overly insistent on being universal for computation, when no one cares. Whenever people have attempted to come up with universals, they have failed. Complexity is more efficiently managed via DSLs. I can’t believe Wolfram is still insisting WL is somehow going to be adopted as a universal language for computation – even the word “computation” is outdated. Today computer use is about web publishing, API interop and gluing technologies together. Why is he wasting time unifying the Tower of Babel? No one will care!

    9) Open source doesn’t bring major tech innovation to market: R alone has 17k freely available packages, and Python and JS/Node some 100k each, and you are saying you monopolize innovation?

    10) Paid software offers an open quid pro quo: yes, but folks are willing to trade hieroglyphic beauty and unity for agility, zero cost, community, and a collaborative infrastructure they can get going with now, no installation required.

    11) It takes steady income to sustain long-term R&D: the model of acquiring steady income via paid software seems to have faltered. The cathedral is in flames, crumbling, like Notre-Dame in Paris. GitHub and package repositories have completely redefined how folks parade their achievements (they have become CVs for coders), and new jobs will go to those willing to integrate quickly into open-source tools, having shown their wares via GitHub.

    12) Bad design is expensive: this used to be true, but open-source design is not bad, just imperfect, and this flaw is compensated for by redundancy (many packages for the same use) and a self-healing ability via GitHub and flexible community workflows for bug fixing and feature adding.

    P.S. – I have loved WM for 20 years. To make this work of art survive, release it as open source now. It may already be too late, but at least parts of it could be integrated into other ecosystems like Python, R and JS. Find other ways to pay your workers, e.g. via consulting.

    Reply
  28. At first read, your comment appears to say I am wrong on every point (as well as being a Kool-Aid-drinking dinosaur), but re-reading it, I think we actually agree on most points, except on whether it matters.

    Your point that Darwinian evolution embraces extinctions and iterates on mistakes to produce emergent good behaviour is true at the system level, where the scale of exploration can afford the losses (which clearly big ecosystems like Python and R can). But at the local level it is a bad thing, since every one of those rejected ideas or redundant packages comes at a human cost for those who backed it. Their sacrifice may be for the greater good, but that does not diminish their loss.

    I think you missed my point about innovation. I did say “Open source often does create ecosystems that encourage many small-scale innovations”, which your figure of 17k packages is certainly evidence of. My claim was that it is hard to do “major” innovation because, well, 17k moving parts!

    I think our central disagreement comes from the assertion you seem to make that one must choose a single model: open source or die. I do not claim centralized development or die. As I started the piece, “free, open-source software can be very good, but it isn’t good at doing what we are trying to do.”

    To pick up your Tower of Babel metaphor (I realize you meant it as a biblical reference to arrogance followed by collapse), I think it is true that we are trying to build a tower. Open source is building suburbs. In a city there is a place both for big centralized engineering (towers, bridges and the like, which are not built by communities) and for suburbs, which are. Both have their successes and failures (towers fall down, suburbs can be shanty towns), but both have their benefits. You claim no one cares about the tower; I say many do, and more should.

    (I realize you meant hieroglyphs as a pejorative metaphor for “elegant but dead,” but I think 2,000 competing symbols that don’t agree on whether their purpose is phonetic or symbolic sounds more like your description of the open-source ecosystem, while the conceptually cleaner and simpler-to-use alphabet – “trying to cast all areas of knowledge under a single [phonetic] layer” – is closer to WL. But I doubt that either was centrally designed; in the end the alphabet was simply the better design.)

    Reply
  29. I have used Mathematica at many companies for a long time and I like it. And I don’t care about open source in general. That said, you wrote a lot, and most of the points boil down to the same thing: control and design. That has nothing to do with open source; most open-source projects still maintain very strong centralized design. Pick any example – Linux, Go, Redis, Postgres – it’s literally the same for most.

    So the bulk of your argument doesn’t apply.

    That said, I’ve been using more and more of the Python stack lately because, although I still think Mathematica is a great product and superior to what you can get by collating some Python packages together, its barrier to entry is simply too high. In my job, I don’t need Mathematica on a daily basis, not even often; it’s really a nice-to-have. And even if licensing a couple of seats is not a problem for a company, it’s terrible that there’s no way for me to share things easily with the rest of the company or with outside researchers. Being locked into an expensive commercial platform is not good for science. If you were to offer a reasonable, free/open base version and sell commercial packages for specialized functionality, support, many-core/distributed computing, etc., it would be much easier to use Mathematica more often and openly.

    Reply
  30. …Love MMA; I’ve been using it since release 1.0. Literally the greatest thing since sliced bread. However, arguments pro and con are rendered moot in light of my experience, which I’m sure can’t be unique.

    After some considerable effort, over many months, to convince the engineering director to give MMA a go, they finally relented and requisitioned a license.

    Long story short: I spent the next few months writing MMA code to automate a statistical system for reporting on manufacturing-tolerance data from an SQL RDBMS, across multiple product lines. It worked great, and I felt a certain sense of satisfaction for a job well done. Previously it had required four engineers, two weeks’ work each, to produce monthly reports for senior production managers; now reports could be had even up to the minute. I probably couldn’t have achieved a similar result in such a short time using any other tool available then.

    Three months go by and I get a visit from the engineering director requesting details of a “snippet” of the source code for his perusal, muttering something about some V&V process he needed to justify to his superiors. Needless to say, I had no access to that code.

    As a result, I have subsequently never again (in twenty-plus years of writing code) felt able to, hand on heart, recommend Mathematica to higher management, for fear of embarrassment.

    Do I hate MMA/WL? No. On the contrary I’m still of the opinion that it’s the greatest thing since sliced bread, a veritable modern technological marvel, and will continue using it for personal use.

    Would I still put my (i.e. their) money where my mouth is? Not until WR can resolve this fundamental problem to the satisfaction of those on whom my bread and butter depends.

    They say that smart people learn from their mistakes, but smarter people learn from the mistakes of others.

    Reply
  31. One of Richard Stallman’s justifications for free software is that users are able to learn programming from it. I personally don’t think that just making source code available is of much educational value. I do like to learn how the software I use works, but I am only able to do so because it is accompanied by papers published in computer science journals. In particular, functions in R generally have references in the help files; without those papers, I would never be able to understand the source code of the R functions. I would like to be able to learn how Mathematica does things, instead of just seeing the results. However, I don’t think making Mathematica open source would accomplish this goal by itself. It would also require putting references to the algorithms used in the documentation, and publishing papers when novel algorithms are developed. I realize this would be an enormous extra burden for Wolfram Research. However, I think that if I had a software company, I might try it, in the hopes of stimulating academic research that would be relevant to me.

    Reply
    • That is a benefit, but should it be the purpose of our project?

      It is an argument that only seems to be applied to software but would be valid elsewhere – I am sure I could learn more about automotive engineering if the components of my car came with full engineering design documentation and models, but instead Ford focuses on making cars for people to use rather than on teaching them how to build cars.

      Reply
  32. You wrote:
    ——————————————————
    Other examples of Wolfram innovation include:

    Wolfram invented the computational notebook, which has been partially mirrored by Jupyter and others.
    Wolfram invented the concept of automated creation of interactive components in notebooks with its Manipulate function (also now emulated by others).
    Wolfram develops automatic algorithm selection for all task-oriented superfunctions (Predict, Classify, NDSolve, Integrate, NMinimize, etc.).
    ——————————————————

    This is an old argument. Microsoft Word and WordPerfect wanted to say they invented writing and paper. You want to say “computational notebooks” are innovative? They are pen and paper.

    The frills included with the computational notebook amount to giving the user different choices of writing tools and paper.

    Many people have commented on open-source innovations. Linux, Android, and OpenStack seem to me much better computational innovations than anything Mathematica has contributed.

    Reply
    • Surely all innovation is incremental. Your argument that Word (or WordPerfect, Write or other early word processors) is “just writing” can be applied to all of IT, which is “just switches.”

      I don’t know OpenStack, but it is interesting that you pick Linux – which was effectively a clone of MINIX, itself a minimal clone of AT&T’s UNIX – and Android – which was created as a response to the iPhone – as examples of innovation. Innovations they may be, but not fundamental ones. In fact, I would argue that the real innovation of Android was in the business model. Recognizing the mayhem that the iPhone was causing for hardware-driven phone makers, Google exploited their lack of software development capability with a free OS to guarantee the installation of Google apps and services on 2 billion devices.

      The very fact that iPython/Jupyter have taken the ideas of the Wolfram Notebook shows that those innovations had value.
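
      For what it’s worth, here is a minimal sketch of the Manipulate idea quoted above – one line of code yielding an interactive component (the function being plotted is just an arbitrary illustration):

      (* a slider for n is generated automatically and bound to the plot *)
      Manipulate[Plot[Sin[n x], {x, 0, 2 Pi}], {n, 1, 10}]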

      Reply
  33. Hi Mr. McLoone
    I wish you and Mr. Wolfram good health and success.
    Yes, it takes many things to be united, but this unity leads to inflexibility.
    A unified project is very good, but such a project can be set up to use 10 percent of the power of 10 people, or 1 percent of the power of 100 people, or 0.1 percent of the power of 1,000 people, or … . I put it like this because my job is network marketing, and I know what I’m talking about.
    The important problem you could solve is distributed processing. You could use local supercomputers around the world to manage distributed processes instead of buying a supercomputer and keeping it in your company.
    In this big project, you would use universities, so that a large group of students interested in graphical software could develop different parts of it and share a lot of information through supercomputers, and you would choose the best rather than trying to design and implement the best yourselves.
    Look at the Sage project and how fast it has progressed!

    Reply
    • Choosing the best from a large network of contributors and discarding the rest is, by definition, wasteful. When you can persuade large numbers of people to work for you for free, perhaps you can ignore that cost, but the cost is real for them and each of those bad ideas wastes the time of the people whose ‘job’ is to try and use it and discover that it isn’t good.

      The Sage project is an interesting case study. When it was created as a project to ‘Create a viable alternative to Mathematica, Matlab and Magma,’ it generated a lot of energy, and briefly seemed to make progress as lots of relevant existing projects were thrown into the mix, giving the appearance of rapid development.

      For years now, it has been hard to see how it has moved forward. Check out the revision logs at https://www.sagemath.org/changelogs/index.html and it is fairly easy to see why: they have been bogged down by the lack of consistency between the libraries. Most of the items on those lists that are not bug fixes are restructurings. They are battling the very issues that I described in this blog – trying to make different libraries, from different sources, with different conceptualizations, play nicely together, rather than moving forward with new ideas.

      Here is what William Stein, the project originator, said: “Measured by the mission statement, Sage has overall failed. The core goal is to provide similar functionality to Magma (and the other Ma’s) across the board, and the Sage development model and community has failed to do this across the board, since after 9 years, based on our current progress, we will never get there.”

      Reply
  34. I fully agree. Only the stakeholders who improve the product can get a free license to use it.
    Nothing worthwhile comes free.

    Reply
  35. An interesting article. I agree with you completely.

    Reply