For the past 20 years, Stephen Wolfram has hosted the annual Wolfram Summer School: four weeks of intensive mentorship, teamwork and the completion of computational and research projects on a variety of topics, ranging from pure math to the humanities, engineering, physics and more. Students from all over the world participate in Socratic classroom discussions, lecture series, casual networking events and coding sessions that help them design and complete original projects.
With the help of their assigned mentors, one another and Stephen himself, students are guided through a process designed to help them understand the capabilities and mechanics of the Wolfram Language and then apply their own creativity and backgrounds to develop unique projects. Through a series of discussion-based meetings with Stephen and their mentors, students define the scope of a project, explore it, tweak it if necessary and ultimately complete it. At the end of the program, students package their project results in a Wolfram Notebook, create a summary as a post on Wolfram Community, generate a poster and give a two-minute presentation including discussion with their classmates.
The team behind the Wolfram Summer School is superb and highly dedicated to providing the best environment for doing real science. This year, we invited some of our students to share their experience and journey through the Summer School. Here, Athina, Cayden and Fizra discuss their experience with the Wolfram team.
Athina’s project focused on using infinite lists to operate on self-referential expressions by developing ConsList and ConsTree functions. Her full project can be found on Wolfram Community.
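Her ConsList and ConsTree functions are written in the Wolfram Language and are not reproduced here; as a rough, language-agnostic illustration of the underlying idea, here is a minimal lazy cons-list sketch in Python (all names in it are invented for this example, not Athina's actual code):

```python
# Illustrative sketch only: a "cons" pair whose tail is computed lazily,
# which is what lets a self-referential definition describe an infinite list.

def cons(head, tail_thunk):
    """A cons cell: a head value plus a zero-argument function producing the tail."""
    return (head, tail_thunk)

def naturals(n=0):
    """The infinite list n, n+1, n+2, ... defined in terms of itself."""
    return cons(n, lambda: naturals(n + 1))

def take(lst, k):
    """Materialize the first k elements of a lazy cons list."""
    out = []
    while k > 0:
        head, tail_thunk = lst
        out.append(head)
        lst = tail_thunk()
        k -= 1
    return out

print(take(naturals(), 5))  # → [0, 1, 2, 3, 4]
```

The tail is only ever forced one step at a time, so the "infinite" structure costs nothing until you traverse it.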
Attending the Wolfram Summer School this year was a unique and fruitful experience for me. A few weeks after my return to my hometown, I found myself reminiscing (and processing) moments that unconsciously got stuck in my mind. The four most beneficial experiences were the program’s courses, the interactions with the company’s staff, the interactions with the rest of the Summer School students and taking a closer look into a corporate workplace for the first time.
Passing quickly—but efficiently—from the introduction phase on to more advanced Wolfram Language features was something that challenged me. Even though I didn’t completely understand all of the code I was presented, I was introduced to the logic of functional programming through some built-in courses. The courses, both my failed and successful attempts at coding, and discovering how the Wolfram Language works at its core triggered my interest in writing Wolfram Language code and in constantly trying to improve my project.
From programming courses on data organization, data manipulation and time-efficient programming to courses on image processing, artificial intelligence and quantum mechanics, each day was filled with interesting lectures given by the company’s staff. The plethora of lectures sparked my interest in various fields of science and motivated me to seek out further information afterward. Furthermore, these lecture hours were also a way for me to clear my mind and think about something other than my project.
My mentors and all the staff in general were willing to answer all my questions and guide me to the next steps in my project. What left a positive impression was that whenever I had a question my mentors couldn’t answer themselves, I was referred to a Wolfram employee who knew exactly how to help me. Furthermore, mentors played a major role in welcoming me to the Summer School and making me feel comfortable. Dr. Stephen Wolfram also helped me to integrate into the Summer School community, as he turned me from a listener into a speaker by giving me the chance to express my thoughts.
People with diverse backgrounds gathering to exchange thoughts and share knowledge and experiences was something that I had not had the chance to experience before. During one of the last days of the Summer School, a few hours were devoted to an informal project presentation from the students. This was like a Q&A session where each student was available to answer questions about their project. During the session, I asked so many questions, and I was so happy that everyone was willing to explain their work and was so open to discussion. I found it challenging when it was my turn to share and answer questions about my project.
Each one of the students was friendly and approachable, and as we got to know each other, we encouraged and helped each other when we reached stumbling blocks with our projects. We had a common space full of desks where we all coded together, which contributed to motivating each other—and lots of laughs. When it was time for lunch or dinner, we would make an outing of it together. When we weren’t working, we were walking through downtown Champaign, running in the nearest park or bouldering.
Cayden’s project focused on extracting relations from word embeddings and language models. His full project can be found on Wolfram Community.
I attended the 2022 Wolfram Summer School, an intensive four-week program focused on exploring and creating projects at the frontiers of science, technology and innovation. While there, I met and interacted with brilliant scientists, engineers, academics, entrepreneurs and students. Everyone had an interesting backstory and area of expertise, which made the daily, far-ranging conversations about physics, neuroscience, mathematics, etc. incredibly fun and interesting. Every student and mentor involved in the school seemed to bring with them a gleam in their eye to understand the world and progress human science and technology. It was the perfect environment for exploration, learning and progress.
The program is largely based around three pillars: your project, your mentor and lectures. Each day, there are lectures on a variety of subjects, from neural networks to category theory, all of which show how to explore the respective area using the Wolfram Language. Before the school begins, each student is assigned a mentor who helps the student with their project, from ideas, to coding help, to finding the right people to talk to and more. At the start of the school, each student worked with their mentor and directly with Stephen Wolfram to select a project they would pursue for the next three weeks.
In the first couple of days, every student crowded in for welcome keynote talks by the heads of the school. The first day had a large focus on meeting mentors, teaching assistants, the other students and Stephen Wolfram through a social pizza party. Everyone there was super friendly and social, and this was a common theme throughout the program. By the end of the summer, I had met and talked to nearly every attendee.
The first night of the program, we decided to have some casual team bonding among students and mentors by heading over to a local pub. The conversation revolved around science, technology and everyone’s research interests. We also discussed our personal backgrounds, as this is an international program with a diverse range of people flying in from all over the world.
Before the school began, I wondered if students would have much chance to interact directly with Stephen Wolfram, whose book A New Kind of Science and blog were my impetus to apply for the Summer School. I was pleasantly surprised that on the first day, and on many occasions after his lectures, Stephen was happy to hang around and engage in all kinds of conversation, and he held office hours during the program. I joined in on a couple of late-night conversations with Stephen where a small group gathered and discussed everything from ruliology and physics to communicating with animals and extraterrestrials. Talking to Stephen was fun and engaging, as he asked many questions and had his own fun stories of his past to share.
The team of mentors that supported students daily throughout the program was excellent. My learning, project outcome and overall experience were greatly improved by my interactions with the many brilliant mentors that the Summer School put together.
My project explored the relationships between words as a step toward a symbolic and computable representation of language. Much of my four weeks was spent “digging for gold” in meaning space (word embedding/language model latent space), exploring the space and looking for all kinds of relationships.
All of the information and code about my project can be found in my Community post, “Extracting Linguistic Relations from Word Embeddings & Language Models.”
After four weeks of working hard on our projects and preparing artifacts for submission, every student gave a two-minute presentation on their project. It was a fascinating day, as we got to see and hear about the many interesting projects other students had completed at the school.
On the day after presentations, we all graduated. Upon graduation, we received books signed by Stephen Wolfram, mini-diplomas, NFTs of our projects and even Wolfram Language cupcakes to celebrate the occasion.
I had an excellent time at the Summer School. It was a month of intense learning, exploration, meeting new people and having fun. I’d highly recommend the program to anyone with a curiosity about STEM and understanding how the world works. The program had students ranging from undergraduates to late-career PhDs, and every one of them got immense value from the program.
I filmed bits and pieces of my time at the Wolfram Summer School and put them all together into a single video that tells (part of) the story of my experience.
Fizra’s project focused on modeling the visual hallucinations caused by migraines and occipital epilepsy. Her full project can be found on Wolfram Community.
When I heard about the Wolfram Summer School, I was curious. Except for randomly using Wolfram|Alpha for some college assignments, I had no knowledge of the Wolfram Language. After some exploration, I was more intrigued by the Summer School program. The extensive but perspicuous nature of it felt like an excellent opportunity to hone and learn a new skill, meet talented people and, most importantly, create something. I had little idea of what to expect from the school, but it turned out to be a far more fulfilling experience than I had imagined.
The whole process of project selection, from shortlisting a few candidate projects to finalizing one with Stephen and my mentor, was catered specifically to my budding interest in neuroscience and its applications in computation. My project was to model migraine auras in artificial neural networks, which allowed me to get some hands-on experience and dive deep into the world of computational neuroscience.
Since I was new to the Wolfram Language and the functional programming paradigm in general, my mentor was extremely patient and helpful while I struggled to sail through my project and the Summer School. He helped me get a good grasp of the Wolfram Language in a relatively short amount of time and spent hours with me brainstorming ideas and coding, as well as making sure I stayed on track, completed the project and, most importantly, learned from the process. The Summer School team very diligently supported us with all the resources and motivation to complete our projects on time. Teaching assistants were always available to help with coding problems and errors that seemed impossible to conquer at the beginning. The Summer School lectures from experts within and outside the Wolfram team, plus interactions with Stephen, helped me in my project and in navigating disciplines outside my academic background. One of my favorite parts was how Stephen was involved not just in the execution of all the projects but also in personally guiding the students to help them move in the right direction.
It was difficult to manage participating remotely with a huge time difference, but the team made sure to keep all remote students in the loop and connected throughout the program. The four weeks of the Summer School were the most hectic, challenging and stimulating weeks of my life. This experience made me realize that when you have the right tools, mentors and resources, you are capable of pushing yourself far more than you imagined. For anyone interested in computational sciences, whether in engineering, finance, economics or social sciences, the Wolfram Summer School is the best place to start, as it will help you get out of your comfort zone and give you a chance to do some original research!
As we continue planning for the Wolfram Summer School 2023, we invite you to explore the projects created and sign up for the mailing list to be informed when applications open for 2023. We hope to see you there!
Learn more and stay informed about Wolfram Summer School! 
For ten thousand years, humans have been using fermentation to produce beverages for pleasure, rituals and healing. In ancient Greece, honey was fermented to produce mead. Today, popular sources of beverage fermentation are grains, grapes, berries and rice. The science of fermentation—known as zymology (or zymurgy)—is a fascinating blend of chemistry, biology, history and geography. The Wolfram Language now brings a new dimension to the study of alcoholic beverages through an extensive dataset ready to be explored and analyzed.
The Wolfram Language offers more than six thousand functions, including arithmetic functions like Max, Min, Mean and Median. Let’s look at the median alcohol content per serving and default serving size volume for all rums in the alcoholic beverages data:
This mimics EntityValue’s default behavior when working with attributed Food entities:
You can apply the same calculation to other alcoholic beverages, including "Cognac", "Sake" and "Brandy"—which are all FoodType entities—to see how they compare with "Rum".
Wolfram Language data on the alcohol content of beverages can help when deciding how much to drink. On a night out, approximately how many drafts of beer equate to the usual glass of Chardonnay you have at home? The average glass of wine is about five fluid ounces:
A serving of draft beer is usually one pint:
So, every pint of beer is equivalent in alcohol content to slightly more than one and a half glasses of wine:
Knowledge of alcohol content can be useful when designing or choosing cocktails. Once liquors start to mix, it becomes harder to determine the actual alcohol content of the drink. Disclosing this information on menus could help less experienced bar patrons have a better understanding of what’s on offer.
Here’s a function that gets the total alcohol content of a group of alcohols (assuming the density of most alcohols is about one g/mL, the density of water):
Let’s apply this function to a whiskey sour and a gin martini to compare the total alcohol content for these mixed drinks. A whiskey sour is often made with two fl. oz. of whiskey, and a classic gin martini can include two fl. oz. of gin and one fl. oz. of dry vermouth:
In this example, the martini has a higher total alcohol content than the whiskey sour. But we are leaving the most important question for you to decide—shaken or stirred?
If you count calories, have you ever wondered whether red wines and white wines differ in calorie content? Do all white wines have similar calories? You can create informative plots that answer questions like these.
First, query the "RelativeTotalCaloriesContent" for both red and white wines using EntityValue:
Use ListPlot to visualize the data:
If you enjoy whiskey, geography or travel, GeoBubbleChart can answer the question, “Where in the world are the whiskey producers?” You come away with an informative visualization and an entertaining tongue-twister.
Query the FoodLocation attribute for all whiskeys in the alcoholic beverages data:
Use Interpreter to get the geo positions that correspond to the FoodLocation entities:
Tally the geo positions and then plot them on a GeoBubbleChart to visualize the data:
According to the data, the most common locations for whiskey producers are Kentucky, Tennessee, Scotland and Ireland, with less prolific whiskey producers in Canada and Japan. In this way, you could make sure that whiskeys from all around the world are represented in your home collection and explore the flavors produced across different regions.
You can reuse this code to produce another GeoBubbleChart and discover where FoodType values "Rum", "Beer" and "Vodka" are produced globally.
Machine learning and neural networks are prevalent in our everyday lives. You can apply Wolfram Language machine learning to the alcoholic beverages data. Use Classify to train a neural network to distinguish wines from beers based on relative alcohol content:
We split the data obtained from EntityValue into a training set (80% of the available data) and a testing set to make sure we can tell how well our ClassifierFunction works on values it hasn’t seen before:
Create the ClassifierFunction:
Classify a subset of the testing set data we reserved earlier:
NiceGrid from the Wolfram Function Repository displays a readable grid:
The alcoholic beverages available via the Wolfram Language are full of flavors. Several functions give you the ability to explore and visualize the data:
You can continue to explore vodka flavors, like "Mango", "Peach" and "Coconut".
Use Manipulate to investigate other flavored alcoholic beverages:
The Wolfram Language can visualize data in many different ways. ImageCollage can tell your data story in full color:
Find the images corresponding to the vodka FoodFlavor entities:
Use ImageCollage and Scaled to create a visualization of the most common flavors among vodkas in the Wolfram Language alcoholic beverages data:
From this image, we can quickly see that citrus fruits are among the most popular vodka flavors, as well as strawberry, cherry, raspberry, cucumber and coconut. The data reveals some surprising flavors like pumpkin, dragon fruit, espresso and caramel.
WordCloud also provides an interesting weighted visualization of the most common vodka flavors:
I have enjoyed sharing some of the innovative ways you can analyze and visualize the wide range of alcoholic beverage data now available. I hope you continue to explore Wolfram Language food and beverage data and that you share your exploration with us. (And if you drink, please don’t drive. Arrange a designated driver.) Here’s to our wonderful Wolfram community:
Get full access to the latest Wolfram Language functionality with a Mathematica or Wolfram|One trial.
Wolfram|Alpha for iOS first launched in 2010. Since then, it has been an indispensable tool for students, teachers and pro users around the world, often ranking among the top 10 reference apps in the App Store®. Users are able to ask questions on a variety of topics, from solving homework equations to determining the airspeed velocity of an unladen swallow.
Until now, users had to buy the app to use it. A free version called Wolfram|Alpha Viewer was previously available for running queries, but it was limited to queries executed through Siri or made using one of the example queries. To enter a custom query, you had to buy the full app.
Wolfram|Alpha for iOS is now available for free. The free app has all of the features from the previous paid app, minus basic step-by-step solutions, plus a few new features available with an active Wolfram|Alpha Pro subscription, including math optical character recognition (OCR) and the assistant apps previously available as separate apps.
We are also announcing that one of the most frequently requested features is finally here: math OCR. This feature will be available with an active subscription to Wolfram|Alpha Pro.
Previous versions of Wolfram|Alpha featured Image as Input, where a user could take a photo or choose an existing photo and send it to the server for analysis or run their photo through one of Wolfram|Alpha’s image filters. This was a fine feature, but a lot of users really wanted to use the camera to solve an equation.
Image as Input is still in the app; however, we’ve added a new option to take a photo of an equation, translate the equation to Wolfram|Alpha input and then query the equation. You can also scan a previously taken photo of an equation in your device’s photo library.
Here’s an example. I see an equation in the book Hands-on Start to Wolfram|Alpha Notebook Edition, and I put it into the camera’s frame using the viewfinder:
I take a photo of that equation:
In order to isolate the equation I want to analyze, I must mark it. I can draw a circle around it or draw a line from one extreme point to the other. I draw a circle using my finger (or an Apple Pencil®, if you have one that works with your iOS device). When I’m done drawing, I can adjust the view so I send the correct equation:
Finally, I can edit the equation before starting the query:
And now, I see the results, including step-by-step solutions:
Previously, we sold a number of assistant apps alongside the Wolfram|Alpha app. These assistants contained forms that made Wolfram|Alpha even simpler to use. Some of the more popular ones included the Algebra Course Assistant and Calculus Course Assistant for students, and the Sun Exposure Reference Assistant for gardeners or for keeping up with your sunscreen application.
They were, however, made back in a different time in the history of the App Store, and while releasing many templated apps was perfectly acceptable back then, that’s considered to be “app spam” now. Therefore, we are discontinuing the assistant apps and rolling the forms in each app into the Wolfram|Alpha app with a Wolfram|Alpha Pro subscription.
There are many practical uses for these forms. You can, for instance, determine the time required to develop a sunburn from your current location using the Wolfram Sun Exposure Reference app form:
Or track how your favorite Major League Baseball team is doing this season using the Baseball form:
Or find out which famous people were born in a given city using the Travel form:
Or just look at cute cats while learning some fun facts using the Cat Breeds form:
The new Wolfram|Alpha app is free for all iOS and iPadOS® users, and for macOS® users with Apple Silicon CPUs. The original Wolfram|Alpha app, now known as Wolfram|Alpha Classic, is still around, but no new features are being brought over to the app. We encourage all of you to migrate to the free app.
The assistants and math OCR features are unlocked by having an active subscription to Wolfram|Alpha Pro. If you have a Wolfram ID with a Pro subscription purchased elsewhere, you can simply sign in to your Wolfram ID from within the app and use the new features. If you don’t, then you can buy a Wolfram|Alpha Pro subscription within the app just like you could in the previous release.
Look for some exciting updates coming to the Wolfram|Alpha Android apps soon »
Download Wolfram|Alpha 2.1 on the iOS App Store!
Sign up for Wolfram|Alpha Pro to access customizable settings, step-by-step solutions, increased computation time and more.
Dr. Martha Abell, coauthor of Mathematica by Example and Differential Equations with Mathematica, is a professor and former dean of science and mathematics at Georgia Southern University. She received her Ph.D. at Georgia Tech in 1985 and has been a celebrated educator ever since, having received recognition for her outstanding instructional skills from her students and colleagues. For example, she won the Mathematical Association of America (MAA) Southeastern Section Distinguished Service Award in 2016, and was nominated by the same organization for their Meritorious Service Award in 2019. Her coauthor, Dr. James Braselton, is also an esteemed educator at Georgia Southern University and has been a prevalent author and peer of Dr. Abell for decades.
Mathematica by Example, sixth edition, published in 2021, was described by publisher Elsevier as “an essential resource for the Mathematica user, providing stepbystep instructions on achieving results from this powerful software tool.” Moreover, it praises the textbook for its thoroughness and recommends it to science students, researchers and anyone looking to utilize Mathematica.
Differential Equations with Mathematica, fifth edition, published in 2022, is a necessary resource for those interested in exploring concepts regarding linear algebra and calculus through Mathematica. Publisher Elsevier writes “it uses the fundamental concepts of the popular platform to solve (analytically, numerically, and/or graphically) differential equations of interest to students, instructors, and scientists.” We talked with Dr. Abell following the release of her new books.
Q: What were your intentions for Mathematica by Example? Were there any gaps in the literature that you were attempting to fill?
A: As my colleagues and I were learning how to use Mathematica, we realized that there were few resources available to assist us. As a result, Jim Braselton came up with the idea that we develop Mathematica by Example in the early 1990s. It bridges the gap between elementary handbooks and those references written for advanced Mathematica users. Mathematica by Example is driven by examples, where we introduce the basic commands based on typical examples of applications of those commands. In addition, the text includes commands useful in areas such as calculus, linear algebra, business mathematics, ordinary and partial differential equations, and graphics.
Q: How has your experience at Georgia Southern University led you to this type of work? What inspired you?
A: Computer algebra systems, like Mathematica, became more prevalent in the university setting as I started my career as a faculty member, so I became interested in finding ways to use Mathematica to enhance the instruction of my classes and augment the tools available for conducting mathematical research. I found that Mathematica was particularly helpful in allowing students to explore concepts graphically and numerically, which helped them to better understand the more theoretical areas of mathematics.
Reflecting on my days as a student, I remembered solving challenging problems but having no way to visualize the solutions, and I wanted my students to have a better experience. In my teaching, it became commonplace to graph solutions. For example, after we solved an applied problem involving partial differential equations such as the wave equation on a circular region to find the vibrations of a drumhead, we used Mathematica to observe the vibrations. My students always seemed to appreciate my efforts to bring mathematics “to life” with them.
Q: Why is Mathematica your language of choice? What are some of its “hidden gems”?
A: I appreciate Mathematica’s consistency. Commands rarely become obsolete, so we didn’t have to rewrite our code from previous projects when new versions of Mathematica were released.
Mathematica’s Manipulate command was a game-changer in the development of apps to help students explore concepts in the undergraduate mathematics curriculum. Anyone with an elementary knowledge of Mathematica can quickly write a command to help their students.
Q: Lastly, as a successful woman in science, what advice would you give to other women attempting to follow your footsteps?
A: I would recommend building a supportive network of colleagues and friends. My success was possible because I was part of a wonderful collaboration with Jim Braselton, and we were both lucky to be members of a department/college/university where our work was valued.
The expectations of faculty have increased over the years, so faculty need to be careful in balancing their workload, making sure that they are working on projects that will be positively reviewed on faculty evaluations (annual reviews, pre-tenure, tenure and promotion, etc.).
An ideal approach is to develop a research program that involves connections to teaching (such as research with undergraduates or graduate students, the scholarship of teaching and learning, etc.) and service (such as leading a committee within a professional organization, organizing research or professional development sessions, etc.).
Involvement in programs such as Project NExT (MAA) and research interest groups associated with professional organizations (MAA, American Mathematical Society, Society for Industrial and Applied Mathematics) also helps faculty to build a community in which they can grow and succeed.
Dr. Henry Foley is the current president of the New York Institute of Technology and former chancellor of the University of Missouri–Columbia. He earned his Ph.D. in 1982 at Pennsylvania State University and has been incredibly involved in academia throughout the United States, having held scholarly positions at numerous universities. Not only has Dr. Foley been a distinguished lecturer at the University of Utah and the University of Notre Dame, but he has also been awarded the Academy of Science–St. Louis Award and the Science Leadership Award. In addition, he has been acknowledged as a fellow of the American Association for the Advancement of Science, the American Chemical Society and the Industrial and Engineering Chemistry Division.
Introduction to Chemical Engineering Analysis Using Mathematica: For Chemists, Biotechnologists and Materials Scientists, second edition, was published by Elsevier in 2021. According to Elsevier, the textbook reviews “the processes and designs used to manufacture, use, and dispose of chemical products using Mathematica… covering the core concepts of chemical engineering, ranging from the conservation of mass and energy to chemical kinetics.” Moreover, the textbook is a valuable resource that incorporates easytouse technology with complex concepts. We discussed the new book with Dr. Foley.
Q: What was your first encounter with Mathematica and the Wolfram Language?
A: It’s 1988 and I was teaching chemical engineering at the University of Delaware; we were trying to bring computing into the classroom. This was pretty funny, because we literally had to carry large desktop computers into the classroom to work with them. That’s beside the point. I was trying to find a way to get students to do more computing. So, I was looking around for something that would allow us to do that, and there were a few things that were on the market at the time. Then, Mathematica came out that year and I was astonished. I was just blown away by what I could do and started using it immediately, and it was so easy to use because even then it was, relative to other programming languages, much closer to natural language programming than anything we’d ever seen. You got text, graphics and plotting all for free. I started off using it with honors students in the committee program. So you know, chemical engineers are better than your average student, and the honors students among them were even better than those students, so they could kind of handle it, and they got it and it was fun.
Q: What were your goals for Introduction to Chemical Engineering Analysis Using Mathematica: For Chemists, Biotechnologists and Materials Scientists, second edition? How is your textbook different than others on the subject?
A: I decided to start to write a book on it based on everything I’d learned in the classroom, how I taught it and my own research. The primary goal was how to think like a chemical engineer, and then how to do modeling and computing all at the same time. At the time there were books that taught you how to think like a chemical engineer, how to build a model, but not how to solve it. There were also books that were all about trend programming and chemical engineering. But I wanted to do something new. So, we brought all those pieces together, trying to teach people computational thinking.
If you have a big job to do, and you have to do it many times, like some big calculation for thermodynamics, then you carefully write a program to do that. And whenever you need to do that calculation, you’ve got it. But it takes a lot of work and a lot of effort—it’s not a homework problem.
So you aren’t really thinking computationally; you’re thinking more like a programmer, because you get so deeply involved in the programming. But with Mathematica and the Wolfram Language, we can start to get people beyond that, to think past the code to the physical process, the chemical process that’s happening. We wanted to teach people how to use advanced technology, Mathematica, to improve their understanding of physical concepts.
Q: If you had to describe your book in one sentence, not in the synopsis, what would you say?
A: So the first part of the book is how to use Mathematica, and that goes almost one hundred pages. Then, the next eight hundred pages are examples of how to do things that are of importance to chemical engineers, chemists, material scientists, maybe even physicists or chemical physicists. It’s really two books in one; you get a how-to book, and then you get a book on the topical concepts.
Q: How has your experience at the New York Institute of Technology inspired your research?
A: I never knew that I would be a president, but I really love it and I find it incredibly rewarding. However, it’s also been very difficult because these are obviously very unusual times. One of the great things about the pandemic for me was that I had much more time, and on evenings and weekends and vacations, I could review 20 years or so of material. I was able to look back on all of my years of experience, using them to the fullest extent. I think I put them together in this new book, which is really the second edition of the first book, but it might as well be a new book. And so the pandemic turned out to be productive for me, and my work was kind of how I kept myself sane, by working on this every day in between my regular work.
Q: Do you see examples of concepts from your academic/career life in your personal life? Do the two ever mix? Does your personal life inspire your study?
A: I’m lucky in that I’ve never really worried about career life balance, so to speak. I know that if, for whatever reason, I wasn’t able to be a president tomorrow, I would still do my research and I would still be involved because I love what I do. I actually know that if I retire someday, I’ll be doing this stuff because I love it.
I even have another book in mind, and I’d like to do it in a similar way. My goal is to make the barrier to entry very low, meaning I want to make the information accessible and inviting. A lot of this kind of information ends up being very exclusive, so only those who have a pre-existing dedication seek it out. With this book, I want people to feel inspired to do more. Without barriers, people can become imaginative and creative with what they’re doing.
I’m thinking of a book on partial differential equations, which is pretty austere stuff, and yet it drives the world and it should be accessible to more people. So I’d like to work on that because it’s not really as daunting or as frightening as it seems when you’re faced with the full theoretical aspects of partial differential equations. For example, if you had to know how your car works in detail down to the radical reactions that are occurring during combustion in your engine, you’d never turn the key on. You’d never drive anywhere but you’re perfectly capable of driving and using your car, and so forth, and getting enormous pleasure and gain from it. And I see math the same way.
The computer now does all that for you much better than you would ever do it, so why try? Why not focus on the part of the problem that’s the most important? Like, why is it a problem? How can a human brain digest this differently from a computer? So, let’s try to create a word statement, a picture for what it is, so that we really understand it, and let’s build some equations that describe it. Let the computer solve it, and then let’s try to see how it behaves. Does our model accurately describe the behavior? And if not, okay, what can we do to make it more sophisticated?
Q: So you also believe in starting with small learning models and then working your way up to progressively more accurate, serious models. That’s something I’m not sure I’ll ever see in practice, but making advanced technologies more accessible is amazing work.
Written by Dr. Jon M. Conrad and Dr. Daniel Rondeau and released in 2020, this new book brings computation to resource management and economics. Publisher Cambridge calls it “foundational to advanced research, as it presents required mathematical methods, classic dynamic models for nonrenewable and renewable resources, and explores several contemporary problems.” Moreover, students are given resources to use Mathematica in studies such as the transition from fossil fuels to clean energy, as well as overfishing and deforestation. Natural Resource Economics: Analysis, Theory, and Applications also allows those interested in environmental studies to access information through advanced technology.
This textbook, written by Dr. Robert P. Gilbert, Dr. Michael Shoushani and Dr. Yvonne Ou, is unique because it encourages students to learn the ins and outs of Mathematica so the program can be utilized to its fullest extent as a resource. Moreover, the book is described by publisher Routledge as “a textbook addressing the calculus of several variables. Instead of just using Mathematica to directly solve problems, the students are encouraged to learn the syntax and to write their own code to solve problems.” Multivariable Calculus with Mathematica also provides questions to test students’ ability at the end of each chapter, as well as an online component that aims to increase students’ understanding of real-life applications to their study.
Galina Filipuk and Andrzej Kozłowski have released the third volume of their series Analysis with Mathematica. This series tackles concepts ranging from single-variable calculus to differential geometry and special functions. Each volume, while varying in subject, is unified by the organization of the text. Publisher De Gruyter says that Mathematica is constantly integrated with examples so students are better able to understand the concepts. This organization provides students with numerous practice problems, allowing them to learn the concepts from their own calculations with Mathematica. Additionally, each textbook in the series is a continuation of the last, meaning that they assume that the reader has prior knowledge, making this series perfect for more experienced users of Mathematica.
If you would like to preview Dr. Filipuk and Dr. Kozłowski’s trilogy, you can find sample chapters of each textbook and converse with the authors on Wolfram Community.
S. M. Blinder’s 2022 textbook holds a vast wealth of knowledge ranging from special functions to black holes. Additionally, Mathematics, Physics & Chemistry with the Wolfram Language utilizes interactive learning, as it comes with all of its code written in Wolfram Notebooks. This allows the reader to work through practice problems with full control and the ability to experiment, ensuring full understanding of the concepts. World Scientific, the book’s publisher, writes, “This book should be a valuable resource for researchers, educators and students in science and computing who can profit from a more interactive form of instruction.”
This accessible textbook from author Dr. Mahn-Soo Choi provides students not only with instruction on quantum computation, but also with tools to best use Mathematica to that end. Springer, the publisher, calls this textbook “an organization of all the subjects required to understand the principles of quantum computation and information processing in a manner suited to physics, mathematics, and engineering courses as early as undergraduate studies.” Additionally, A Quantum Computation Workbook is praised because it helps students to develop their understanding of Mathematica and quantum computation by encouraging them to alter the code within the textbook.
Published in 2022 and written by Dr. Michael A. Henning and Dr. Jan H. van Vuuren, this new book homes in on application. Springer depicts the textbook as “covering a diversity of topics in graph and network theory, both from a theoretical standpoint, and from an applied modelling point of view.” This dynamic approach to teaching graph theory makes Graph and Network Theory a valuable resource for those interested in learning fundamental and advanced concepts. The textbook also provides students with multiple approaches to the material, meaning that there are different study tracks depending on students’ prior knowledge, with demonstrations and real-life applications to motivate any student.
In 2020, a full team of educators worked together to bring readers this new text: Dr. Kirill Rozhdestvensky, Dr. Vladimir Ryzhov, Dr. Tatiana Fedorova, Dr. Kirill Safronov, Dr. Nikita Tryaskin, Dr. Shaharin Anwar Sulaiman, Dr. Mark Ovinis and Dr. Suhaimi Hassan. This textbook serves as both a brief introduction into model theory and an advanced look at creating computer models. With its special attention given to Wolfram System Modeler, Computer Modeling and Simulation of Dynamic Systems Using Wolfram System Modeler is perfect for those who wish to become more familiar with Wolfram technology and computer modeling. Moreover, Springer recommends it to “students and professionals in the field,” writing “the book serves as a supplement to university courses in modeling and simulation of dynamic systems.”
If you’re interested in finding more books that use the Wolfram Language, check out the full collection at Wolfram Books. If you’re working on a book about Mathematica or the Wolfram Language, contact us to find out more about our options for author support and to have your book featured in an upcoming blog post!
Get full access to the latest Wolfram Language functionality with a Mathematica or Wolfram|One trial. 
In grade school, long arithmetic is considered a foundational math skill. In the past several decades in the United States, long arithmetic has traditionally been introduced between first and fifth grade, and remains crucial for students of all ages.
The Common Core State Standards for mathematics indicate that first-grade students should learn how to add “a two-digit number and a one-digit number.” By second grade, students “add and subtract within 1000” and, in particular, “relate the strategy to a written method.” In third grade, multiplication by powers of 10 is introduced, and by fourth grade students are tasked to “use place value understanding and properties of operations to perform multi-digit arithmetic,” including multiplication and division. A fifth grader will not only be expected to “fluently multiply multi-digit whole numbers using the standard algorithm,” but also “add, subtract, multiply, and divide decimals.”
Now, Wolfram|Alpha Pro returns step-by-step solutions for long addition, subtraction, multiplication and division problems, including ones involving decimals or negative numbers. We have also developed detailed step-by-step solutions for long division of whole numbers and negative numbers as well as—for the high-school level—multiplication and division of polynomials.
Long arithmetic is used to solve addition, subtraction, multiplication and division problems in writing, often by organizing numbers one on top of the other, with digits aligned in columns.
The long arithmetic algorithms are rooted in the concept of place value. In our base-10 number system, each digit represents a count of a certain value associated with its place in the number. For example, a three-digit number with no decimal uses three place values: the hundreds, tens and ones. Aligning numbers based on their digits amounts to lining up the digits with the same place values. While the long arithmetic algorithms can be carried out without fully thinking through the place value reasoning each time, it can be conceptually useful for students to understand the process of, for example, multiplying the ones, multiplying the tens and multiplying the hundreds, then combining those results to get the final result.
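This place-value decomposition is easy to experiment with. A minimal Wolfram Language sketch (the number 472 is our own example, not one from the original post):

```wolfram
(* Decompose a number into digits, then weight each digit by its place value. *)
digits = IntegerDigits[472]
(* {4, 7, 2}: 4 hundreds, 7 tens, 2 ones *)
placeValues = digits*10^Range[Length[digits] - 1, 0, -1]
(* {400, 70, 2} *)
Total[placeValues]
(* 472 *)
```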
Long arithmetic can be challenging for students seeing it for the first time. Moreover, since there is a variety of long arithmetic methods, it can be challenging for parents to help their students. If it seems to you like every generation learns a new long arithmetic method, that may not be your faulty memory! In fact, there are many variations of the long arithmetic algorithm, and which one you learn in school can depend on a variety of factors, from geographical region to teacher preference to curriculum updates.
✕

If you ask Wolfram|Alpha to add several numbers, you can view a step-by-step solution that performs the computation using long addition. First, we arrange the numbers into columns based on place value, using the decimal point as a guide:
✕

Wolfram|Alpha then walks you through each step of the long addition algorithm. In general, this involves adding the digits in each column from right to left. If the digits in a column sum to 10 or more, we carry the tens digit of the sum to the column to the left.
More conceptually, adding the digits in a column means counting the number of units in a particular place value. In the following step, for example, the sum 6 + 9 + 4 in the hundredths column gives 19 units in the hundredths column. Nineteen hundredths is equivalent to 1 tenth and 9 hundredths, so we record the 9 hundredths in the hundredths column and move the 1 tenth to join the units in the tenths column:
✕

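The carrying rule in the step above can be stated as a tiny computation. This sketch reuses the 6 + 9 + 4 hundredths-column example, with QuotientRemainder splitting the column sum into the carry and the recorded digit:

```wolfram
(* One column of long addition: sum the digits, then split off the carry. *)
columnSum = 6 + 9 + 4
(* 19 *)
{carry, recordedDigit} = QuotientRemainder[columnSum, 10]
(* {1, 9}: record 9 in this column, carry 1 to the next column left *)
```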
The step-by-step solution guides you through the addition of digits in each column, at which point you can read the final answer from the bottom of the grid:
✕

✕

Wolfram|Alpha also returns step-by-step solutions for long subtraction of a smaller number from a larger one. We begin setting up the problem by arranging the numbers on the page:
✕

The long subtraction algorithm proceeds by subtracting the bottom digit from the upper digit in each column. In the case that the bottom digit is a higher number than the upper digit, we must borrow from the columns to the left. We indicate this on the long subtraction grid by replacing the 3 in the hundreds column with a 2 and moving the borrowed 1 into the tens column to create 14 tens:
✕

The long subtraction–borrowing procedure can be explained in terms of place values. In the previous step, the need to borrow arises in the tens column because 4 tens is fewer than 9 tens. We therefore look beyond the tens column to the hundreds column, which allows us to instead consider subtracting 9 tens from 1 hundred and 4 tens. Conceptually, this changes the relevant subtraction problem for this step from 40 – 90 to 140 – 90. In the long subtraction grid, this only appears as 14 – 9 = 5; the place values of the digits are encoded in their positions in the numbers in the long subtraction grid:
✕

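Restated as arithmetic on place values (our own restatement, not the original figure), the borrow converts 1 hundred and 4 tens into 14 tens:

```wolfram
(* Borrowing in place-value terms: 140 - 90, recorded as 14 - 9 in the tens column. *)
14*10 - 9*10
(* 50 *)
14 - 9
(* 5, the digit written in the tens column *)
```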
Each borrowing and subtracting step is enumerated in the step-by-step solution. When there are no longer any digits in the bottom number, we can bring down any remaining digits from the upper number and read off the final answer from the bottom of the long arithmetic grid:
✕

For step-by-step long multiplication, we recently added the capability to multiply decimals and negative numbers. Performing the long multiplication algorithm with decimals or negative numbers simply involves replicating the algorithm as if for integers and then placing the decimal point or negative sign in an additional step before reporting the final answer.
In the words of the great Jaime Escalante in the 1988 film Stand and Deliver, “A negative times a negative is a positive!” The step-by-step solution for multiplying two negative numbers explains that you can effectively ignore the negative signs before continuing with the long multiplication algorithm:
✕

One step of the long multiplication algorithm involves multiplying a digit of the second number by each digit of the first, carrying the tens digit of each product as necessary. Each step is summarized in the step-by-step solution:
✕

Finally, if we are multiplying decimal numbers, we place the decimal point in the final answer by taking place value into account, which amounts to counting the digits after the decimal points in the original numbers:
✕

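The decimal-placement rule can be sketched directly; the factors 1.2 and 3.45 are our own example:

```wolfram
(* Multiply as integers, then shift the decimal point by the total count
   of digits after the decimal points: 1 in 1.2 plus 2 in 3.45. *)
integerProduct = 12*345
(* 4140 *)
integerProduct/10^(1 + 2) // N
(* 4.14, i.e. 1.2 * 3.45 *)
```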
The final result of long division is not always given as a single number. When a number does not evenly divide into another, long division reveals both the quotient and remainder:
✕

✕

Another way to report the result of a long division problem is with a mixed number, sometimes referred to as a mixed fraction:
✕

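The built-in QuotientRemainder function returns both parts of such a result at once; the problem 47 ÷ 5 is our own example:

```wolfram
(* Quotient and remainder of a long division problem. *)
{q, r} = QuotientRemainder[47, 5]
(* {9, 2} *)
q + r/5
(* 47/5: the mixed number 9 and 2/5 *)
```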
Regardless of how you present the final answer, the steps for performing the long division algorithm are the same. We begin by arranging the numbers in a slightly different layout, using a division bracket instead of stacking the numbers vertically, as we did for the previous algorithms. The number to the left of the bracket is the divisor and the number inside the bracket is the dividend:
✕

Each step of the long division algorithm requires multiple substeps. First, if the divisor has two digits, as in our example, you need to determine how many times the divisor goes into the first two digits of the dividend. Write that number on the top of the division bracket, multiply the divisor by that number, subtract that product from those first digits of the dividend and bring down the next digit:
✕

Phew! Need some extra clarification on that? If so, you’re not alone. Long division is a notoriously challenging long arithmetic method to learn or teach. Behind the Multiple intermediate steps button, therefore, you can see the multiplication and subtraction worked out separately, with each addition to the bracket explained one at a time:
✕

After repeating the steps of the long division algorithm until no more digits of the dividend remain, a final step guides you through the process for finding the quotient and remainder and, if desired, expressing the result as a mixed number:
✕

Presently, Wolfram|Alpha only returns step-by-step long division solutions for integers, not for decimal numbers. We look forward to expanding our step-by-step support for long division to include decimals in the near future.
In high school, students extend long arithmetic from numbers to mathematical expressions called polynomials. Polynomials are sums of terms that include variables and exponents, such as 3x^2 + 4x – 5. Polynomials can be added, subtracted, multiplied and divided using methods analogous to numeric arithmetic. In particular, polynomial multiplication and division are critical skills for upper-level high-school and college math classes. We have recently expanded the step-by-step solutions for polynomial multiplication and division problems.
There are similarities between long arithmetic for numbers and for polynomials. While the digits in numbers can be grouped based on their place values, the terms in a polynomial can be grouped based on the exponents of the variable.
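This analogy can be made concrete: just as IntegerDigits groups a number by place value, CoefficientList groups a polynomial by powers of the variable. A small sketch using the example polynomial above:

```wolfram
(* Coefficients of 3x^2 + 4x - 5, listed from the constant term upward. *)
CoefficientList[3 x^2 + 4 x - 5, x]
(* {-5, 4, 3} *)
```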
In high school, students learn how to “add, subtract, and multiply polynomials” and, as stated separately in a different Common Core Standard, divide polynomials “using inspection, long division, or, for the more complicated examples, a computer algebra system.” Wolfram|Alpha is the perfect tool for handling those complicated examples, especially because we also show how to arrive at the solution via step-by-step polynomial long division.
One common method for multiplying polynomials, often referred to as the “box method,” involves organizing the terms of the polynomials around the outside of a grid. The step-by-step solution explains how to determine the size of the grid:
✕

Next, we fill in each box in the grid by multiplying the terms in each row-column pair. Each of these steps is outlined in the step-by-step solution, and the intermediate steps give details about how to multiply each pair of polynomial terms:
✕

Finally, we obtain the result of the polynomial multiplication problem by summing the terms in all of the boxes:
✕

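The box method itself is a short computation: Outer fills the grid with the pairwise products and Total sums the boxes. The polynomials 2x + 3 and x – 4 are our own example:

```wolfram
(* Box method for (2x + 3)(x - 4): a grid of pairwise term products. *)
terms1 = {2 x, 3};
terms2 = {x, -4};
grid = Outer[Times, terms1, terms2]
(* {{2 x^2, -8 x}, {3 x, -12}} *)
Total[grid, 2]
(* 2 x^2 - 5 x - 12 *)
```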
The box method is particularly useful as a visualization of the pairwise multiplication of the terms of each polynomial. We are also working on developing additional methods for polynomial multiplication, including polynomial long multiplication (formatted similarly to numeric multiplication) and the “arrow” and “FOIL” methods commonly taught in high schools in the United States.
Last but not least, we have also improved step-by-step support for polynomial long division problems. Polynomial long division follows an algorithm similar to integer long division, but with extra requirements for keeping track of variables and exponents. To divide one polynomial by another, we set up the long division bracket with the dividend (or numerator) inside and the divisor (or denominator) outside:
✕

✕

The “multiply, then subtract” structure of the integer long division algorithm is the same for polynomial long division. The question of “How many times does the divisor go into the first term(s) of the dividend?” can roughly be translated to something like “How many fewer highest powers of x does the divisor have than the first term of the dividend?” Mathematically speaking, we refer to the “leading term” as the term of the dividend with the highest power of x and determine what the divisor needs to be multiplied by to match the leading term of the dividend. The multiple is recorded on top of the bracket, and the result of the multiplication is recorded and subtracted at the bottom of the grid. Since students doing polynomial long division are already familiar with integer long division, this entire step is summarized inline:
✕

Once we have completed the final subtraction, the quotient and remainder appear in the same places of the long division grid as they do for integer long division. The final result can be reported either in terms of a fraction or in terms of multiplication of the quotient by the divisor, in what we refer to as “quotient and remainder form”:
✕

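For checking such work, the Wolfram Language’s built-in PolynomialQuotientRemainder returns both parts; the dividend and divisor here are our own example:

```wolfram
(* Polynomial long division of x^3 - 2x^2 + 4 by x - 1. *)
{q, r} = PolynomialQuotientRemainder[x^3 - 2 x^2 + 4, x - 1, x]
(* quotient x^2 - x - 1, remainder 3 *)
Expand[q*(x - 1) + r]
(* 4 - 2 x^2 + x^3, recovering the dividend *)
```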
The Wolfram|Alpha math team hopes to further expand its step-by-step coverage for long arithmetic. We continue to work on adding more methods for long arithmetic, including alternate representations of the same algorithms. While the underlying conceptual arithmetic algorithms do not vary among teachers or regions, there is some variation in the procedures and methods for representing long arithmetic problems. Different teachers have different preferences of methods for communicating content to their students, and the arrangement of columns and procedures for visualizing arithmetic vary regionally.
We are also working to add more methods beyond the traditional long arithmetic algorithms, such as visual methods that emphasize different mathematical concepts behind the arithmetic. We hope to provide useful tools for students, parents and teachers performing long arithmetic, and welcome feedback or requests if there are particular additions (or subtractions, multiplications and divisions) you would find useful.
Sign up for Wolfram|Alpha Pro to access customizable settings, step-by-step solutions, increased computation time and more. 
In the past few years, there have been many significant anniversaries in the Mathematica world. This has made me think about my long personal history working with all things Mathematica. Here I present an account of how I got involved with this world, developed my part of it and continue to use it. I show what I think is a unique application that differs from the other thousands of applications in Mathematica or the Wolfram Language presented on the various Wolfram Research websites, Wolfram Community and elsewhere. Finally, I attempt to describe the physics of what I do. Those who want to read only about technical or physics issues can skip the historical part at the beginning, with its considerable name-dropping.
Autobiographically, this begins with me in high school in 1965 and a book by Peter Bergmann, one of Einstein’s former assistants at the Institute for Advanced Study in Princeton.
✕

I read the book line by line and cover to cover one summer and found a few typos. Naively and not knowing who Bergmann was exactly, I wrote a letter to him, pointing out the errors I had found. Weeks later, a kind but definitive letter came from him, pointing out that the book had been published years before and that he surely would have known about any problems by that time.
The section on finding solutions to Einstein’s gravitational field equations was particularly inspiring.
✕

Anyone who has tried to find exact solutions to such very complicated, coupled, nonlinear partial differential equations knows that even the straightforward, static, spherically symmetric and empty-space case requires some complex tensor algebra to get to the differential equations themselves. Although this has always been difficult, Schwarzschild, Kerr, Gödel, Kaluza–Klein, Robertson–Walker, Taub–NUT and others did find now-famous solutions.
My initial fascination with such solutions led me to contact John Archibald Wheeler and his student Brendan Godfrey during an APS conference in Chicago in the late 1960s. Godfrey was doing his dissertation on exact solutions. These men encouraged me to follow my passion for studying such solutions.
It quickly became apparent that solutions were rare. Finding even the well-known ones was challenging and error-prone. This led me to think of my other obsession: computer programming. Back in the mid-’60s, access to any computer for a high-school student was very unusual. However, I was lucky. My hometown, Moline, Illinois, was the headquarters of the farm implement company John Deere. One of its main engineering groups worked there. At the time, the engineers wanted to convince their boss to fund new IBM computers to be used—not for business, but for engineering simulations. They had been doing some of this on analog computers, but felt digital was the future. The boss was concerned that the time and cost needed for educating his employees in Fortran and using punch cards, not to mention buying the IBM hardware, would take them away from pressing design work. They got the idea to bring in high-school students, give them a Fortran book and access to a computer, and then see if they could learn to code relatively quickly. If teenagers could understand it, engineers would be able to do it even more easily.
The engineers called up the science and math departments at Moline High School and asked if MHS had any students who might be interested. The school asked whether there were any interested students, and about 10 said yes. I and the others showed up for an evening meeting at Deere, and the engineers presented their proposal to give some programming tutorials. If I remember correctly, perhaps five came for the first tutorial. Only one appeared for the second session—me.
The engineers wanted me to simulate the problem of a tractor going over a bumpy field. The tractor had only one spring and shock absorber to protect the driver above. I was to solve the damped harmonic oscillator differential equation and plot the motion of the driver. Today this is a straightforward task. For example:
✕

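A present-day version of that simulation is indeed only a few lines. The mass, damping and spring constants below are illustrative values of our own, not those of the original tractor problem:

```wolfram
(* Damped harmonic oscillator x'' + (c/m) x' + (k/m) x == 0 with
   illustrative coefficients; solve symbolically and plot the motion. *)
sol = DSolve[{x''[t] + x'[t]/2 + 10 x[t] == 0, x[0] == 1, x'[0] == 0},
   x[t], t];
Plot[x[t] /. sol, {t, 0, 12}]
```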
We would have killed for such miraculous futuristic software then. In any case, I first learned about what kind of solution was needed by programming their analog computer, which was similar to this one, connected to a pen plotter:
✕

This proved to be a lot of fun and relatively simple to do, but how could I do it digitally? I had to figure out how to solve a differential equation numerically on the IBM and print out a plot on a line printer. After some weeks of work, my study of numerical methods and Fortran programming finally led to plots similar to the analog output results. The engineers were happy. I was thanked and given the Fortran book. Eventually I attended a nice “honors” lunch at the Deere headquarters building. I have no idea how the engineers used computer programming afterward.
When I went to Augustana College in Illinois, my experiences at Deere turned out to be very valuable. In 1967, for the first time, the college offered a course in Fortran programming for students and permitted the faculty to use the college’s small IBM computer for their research in the evenings. I was hired as the first student to run the computer, taking in decks of punch cards, running them through and returning the printout. It was the best-paying job on campus and gave me free access to any programs I wanted to run. I tried to find a way to do symbolic computing to help find solutions to Einstein’s equations, but nothing I tried in Fortran would help. I wrote some game programs, composed some computer-generated music and did some planetary orbit simulations, but no relativity. The physics and math faculties at Augustana were remarkably supportive, allowing me to teach a seminar course in relativity theory and give guest lectures in the group theory and topology courses. I also taught a programming course for them.
A turning point came in the summer of 1970 when I was invited to an undergraduate summer program at Iowa State University under the sponsorship of the then-Atomic Energy Commission (AEC). I was working on trying to understand Misner and Wheeler’s notion of an “already unified field theory.” I wrote my first-ever paper on the subject and submitted it to a journal. It was rejected with a very kind and encouraging referee report, so I was not unhappy. What I got most out of that summer came from one of the administrators in the computer center there. Somehow, he had heard about my interest in relativity and using computers. He told me about FORMAC, which was released in late 1964 and was available on IBM mainframes. I got a copy of the manual for this system, one of the first “computer algebra” programs, an extension of Fortran. When I got back to college, I was able to get it installed on the small IBM machine we had. (I think it was an 1130 with a modem connected to a 360 system at the University of Iowa.) I was fascinated by the prospects of finding a way to do tensor calculus by computer, but FORMAC would not allow that.
After that, I applied to graduate school, emphasizing a desire to do research in gravitation using computers. It turned out that my timing was perfect. Bryce DeWitt, then the renowned head of the Institute of Field Physics at the University of North Carolina at Chapel Hill, was looking for three new research graduate students to do the work on computer simulations of black hole collisions. He hired Larry Smarr, Elaine Tsiang and me to work on this from day one. In January of 1972, we moved from Chapel Hill to the Center for Relativity Theory at the University of Texas at Austin, where DeWitt took over as director. We worked hard for the first two years to discover numerical ways to model the spacetime around a black hole. We started programming with punch cards fed from a terminal to the campus CDC mainframe. Taking the cards and printouts up and down various floors in Moore Hall got tiring. Funds were found to buy a terminal with a screen and keyboard, which were connected to the mainframe to do the programming and get the printouts. It was my job to write, but mainly debug, the code—since my years of helping students and faculty fix their programs gave me a good eye for noticing errors.
This effort contributed to some of the very early theoretical ideas and computer algorithms for what became the LIGO gravitational wave discoveries many decades later. However, after taking DeWitt’s “theory of everything” course on quantum field theory in curved spacetimes and quantum gravity, I decided to do my Ph.D. work in these areas. I chose to find ways to expand on the work in chapter 17 of his famous book, Dynamical Theory of Groups and Fields. The book was based on his lectures at the Les Houches Summer School in 1963, founded and run by the remarkable Cécile DeWitt-Morette, also a professor at Texas. In chapter 17 and other parts of the book, he outlined what is now known as the Schwinger–DeWitt proper time algorithm for doing regularization of infinite quantities in quantum field theory in curved spacetimes.
What is regularization? Suppose there is a quantum scalar field (or any other field, but for simplicity, choose a scalar field) in curved spacetime. We might want to see how that field interacts with the spacetime, say, of a black hole. Once we define a vacuum state, we can try to find the vacuum expectation value (VEV) of the stress tensor for the field in that vacuum. The VEV can then be put into the right side of the Einstein field equations to see how that changes the background gravitational field. This is called the back reaction problem. What typically happens is that the expectation value is infinite. We want to get rid of the infinities somehow in order to extract finite results. The process of extracting the infinities is called regularization. The VEV is broken up into an infinite and a finite part. The next step is to absorb or throw away the infinite part in some physically meaningful way. This process is called renormalization.
There are numerous ways to do this that go back decades. The regularization method that the Schwinger–DeWitt algorithm uses is “point-splitting” or “point-separation.” What this does is write the VEV of the stress tensor in terms of something called the Green’s function, which abstractly satisfies
where the right side is the Dirac delta function and G(x, x′) is the two-point Green’s function. Here we take F to be the differential operator for the coupled massive scalar field. (We have a so-called conformal field when the coupling constant ξ is 1/6 and the mass m is zero.)
This function comes from the basic scalar field equations derived from an action S:
DeWitt, following the flat-space work of Schwinger, was able to show that the Green’s function in a curved spacetime could be written as an integral of a sum of terms:
The two key ingredients in this equation are σ(x, x′), the so-called biscalar of the geodetic interval, and the a-coefficients derived from recursion relations with one boundary condition for the zeroth one. To maintain manifest covariance in a general curved spacetime, σ(x, x′) is used. It measures the distance between the points x and x′ along the geodesic between them. In the flat-space limit, σ is just half the square of the straight-line distance between the two points. As the two points come together, σ goes to zero. The Δ is the van Vleck–Morette determinant, which is related to σ. These are the recursion relations that come from F acting on the expansion:
For example, if we take the coincidence limit of the two points in the recursion relations, we find the first term vanishes and the second term reduces to a simple coincidence-limit value, where the brackets indicate the coincidence limit. The third term shows that we will need the summed second covariant derivative of σ. We have to take two derivatives of the recursion relation to get that. This leads to progressively more complicated derivatives.
DeWitt and I were able to show that to get the first (so-called one-loop) divergences in the VEV of the stress tensor, we would need at least the coincidence limit of the a₂ coefficient. This requires the coincidence limit of six derivatives of σ. As I will show later, such calculations generate hundreds of terms with many indices to keep track of.
DeWitt was admired for his incredible depth of knowledge in physics and also for his ability to execute by hand enormously long and complex calculations with no errors. I heard stories from some of his colleagues about what he could do over a weekend that might take them weeks or months. I learned many of his techniques for accurate symbolic calculation, but I convinced him that we should try to use a computer to automate it all. In particular, if we wanted to extend to higher-spin fields or more loops, we would be generating thousands, if not tens of thousands, of tensor structures—too many for even DeWitt to do perfectly, and perfection was required.
I told him about FORMAC and how it would not work. He told me he knew of someone who might be writing software to do such calculations. This potential contact was Martinus Veltman, a Dutch physicist then at the University of Michigan–Ann Arbor. Bryce offered to introduce me. Veltman was very kind and willing to help by sending me a copy of his Schoonschip computer algebra system that would run on the mainframe computers in Austin. I got it installed, but even after learning how to code with it, I could not get it to do what we needed for the a-coefficient work. I gave up on computerizing the calculations and spent six months, eighteen hours a day, seven days a week doing the calculations by hand. I did them independently five separate times to ensure I got the same answers.
I finished my dissertation, finding the divergences and finite terms in the scalar VEV of the stress tensor. After leaving Austin, I went to King’s College in London for a year and published the dissertation results in the following paper:
The ultimate result in this paper was the following elaborate set of equations. The first term shows what is called a “quartic divergence”; that is, as the two points come together, that term diverges as the inverse of the distance along the geodesic to the fourth power. We also see quadratic, logarithmic and linear divergences. All of these will need to be renormalized away in some fashion:
Note the many subtle coefficients. If even one of these is not computed correctly, the physics results can be utterly wrong. One of the most important results of this work was to confirm the existence of something called a trace anomaly. It was assumed that the trace of the stress tensor (roughly, the sum of its diagonal elements) would be zero for a massless conformally invariant scalar field. But Capper and Duff in the UK had shown that the trace was not zero—that is, it was anomalous in the dimensional regularization scheme. My calculations had shown the same thing. The finite term in the previously shown equation gave the same result as Capper and Duff’s work did, but in an entirely new way. Soon after finding this fact, Steve Fulling and I showed that the trace of the stress tensor anomaly was precisely equivalent to the just-discovered Hawking radiation idea in two dimensions and also contributed to it in four dimensions. In the last years of the 1970s, Duff and I showed how the coincidence limit was related to other anomalies and index theorems in supergravity theories.
One perhaps-amusing anecdote. I had the rare opportunity to discuss these calculations with the famous mathematician Paul Erdős. I had dinner with him at a friend’s home in York, England, and a few days later I happened to encounter him again walking on the campus of Durham University in the UK during a break from a conference there. I told him about the Green’s function expansions and the numerical ratios I was getting in the results. The number 2880 appeared in some of the denominators along the way. He immediately understood why this was and suggested some historical series structures I should consider. Such an event stays in one’s mind.
For the next few years, I continued to watch the development of symbolic manipulation software but saw nothing that would help. The main issue was comparing one complicated tensor term with another and combining them when possible. When I became a physics professor in 1980 at UNC-Chapel Hill, I decided it was time to start programming a system to do the quantum field theory work. Computer development had finally reached the personal computer level, and I hoped I might learn how to use one for my efforts. I contacted Veltman again, and he said that he had not done anything new that might help, but a young student at Caltech was doing something that sounded like what I might use. This was Stephen Wolfram. Veltman was to be awarded the Nobel Prize in Physics in 1999 along with Gerard ’t Hooft.
I contacted Stephen, and he strongly suggested that a Unix system with a good C compiler would be best. He was working on a system with robust pattern-matching functionality—which he knew would become something I could use. So, I started looking for such a computer in 1982. After much research, I finally found a new startup company in a small office area in Mountain View, California, called Sun Microsystems. They were the only Silicon Valley company that seemed very enthusiastic about university-level scientific research software development. They offered me what I think was their first academic discount. After applying to the National Science Foundation and getting an equipment grant, I obtained one of the first Sun-1 systems, the first in North Carolina as far as I know. Stephen also got some Sun machines and soon developed his SMP system on them.
While I waited for SMP, I started writing my code to do the coincidence limit calculations. I kept in contact with Stephen; eventually, we both ended up at the University of Illinois Urbana-Champaign. I was there to help set up the Sun computer network and collect relevant Unix software for the scientists using the NCSA Cray supercomputer. I continued to write C code and did manage to make some progress on generating coincidence limits, but not enough to combine and simplify terms. I still needed sophisticated pattern matching. The following printout image (hanging from a tree in my yard) shows one equation about 30 pages long. Each term has many tensor indices. For example, summing two tensor indices would require not only finding a term on, say, page seven and combining it with a term on page 29, but furthermore there would be rules that would generate curvature terms, increasing the length and complexity of some of the equations. Higher-order calculations could create tens of thousands of intermediate terms or more equations:
One day, Stephen contacted me and asked if I would like to try his new system with a more advanced pattern-matching scheme. He gave me what may have been one of the first alpha tests of what eventually became Mathematica. It ran on my Sun workstation. Within two weeks, I had written code that did my a-coefficient calculations far better and faster than the C code I had spent years trying to write. I was hooked. In 1988 I was asked to present my work at the public introduction of Mathematica at a press conference in Silicon Valley. I sat next to Steve Jobs at the event. He was there to show the running of Mathematica on his new NeXT computer. I was a beta tester for a NeXT machine with Mathematica—so secret at the time that it was hidden in my home office. I was encouraged to create a tensor analysis system to distribute to other researchers.
Later I found out that my longtime colleague, Leonard Parker at the University of Wisconsin-Milwaukee and a pioneer in the development of quantum field theory in curved spacetime, was also working on his version of a tensor analysis system for Mathematica. We each had our ideas on what was needed and decided to join forces and write a new system. After a few years of development and the writing of a book on how to use this system, we started a two-person S corporation in 1994 to sell it to support the work. We expected that maybe a few hundred researchers in gravitation and relativity might buy a copy. We ended up, over two decades, selling a few thousand copies. Supporting that many users became significant work. We had not guessed that engineers, physicists in particle physics and elsewhere, and even an eye doctor working on eye curvature might need tensor analysis and Riemann tensor computations:
At this point, I will show one of the first and simplest calculations I had to do by hand in my research in graduate school, but now do with MathTensor. The calculations of the acoefficients, their derivatives and the stress tensor divergent parts are much longer. The details of these calculations are far too long to show here.
Over the years since the first release, the Mathematica developers added functions like Symmetrize, RiemannR and others that conflicted with some of our function names. Rather than redo a couple hundred files of MathTensor code, we now just overwrite the Mathematica functions in an initialization file:
The main loader file adds in about two hundred files. These are encrypted and require a machine-specific user password file. Some source code is available, but most is not. Basic information about the software and loading is printed out.
Load MathTensor:
Set basic definitions and properties. I work in four dimensions and set a few constants to 1:
The DefineTensor function can take three arguments. The first is the tensor, the second is the symbol we want it printed as and the third gives the symmetries of the tensor’s indices. σ is a scalar with no indices:
Next are the definitions of σ’s properties and the first three known derivative coincidence limits:
The RuleUnique function defines a rule name, the object the rule searches for and its value substituted when it is found. Here, we know that σ goes to zero when the two points coincide, as do its first and third derivatives. The covariant (lower) indices in MathTensor are labeled with an “l” in front while contravariant (upper) indices are started with “u.” The coincidence limit of the two-derivative case is the metric tensor, Metricg[la,lb], already defined in MathTensor:
A list of rules, sigmarules, is created, and newly derived rules are added to it as we take more derivatives:
Take the previously shown definition equation, which we will set to zero eventually. Note that indices are printed in their correct up and down form in the output cells:
The Canonicalize function looks at all the tensor index summations in an equation and renames them so that indices from la to lo (or ua to uo) are renamed to later letters (lp or beyond) reserved normally for summations only. If we happen to have two terms that end up with the same summations, they will be combined:
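Since the MathTensor code itself is not reproduced here, the idea behind canonicalizing summation indices can be sketched as a toy in Python (a hypothetical illustration of the renaming step, not MathTensor’s actual implementation). A term is represented as a list of (tensor, indices) factors; indices appearing twice are summed dummies and get renamed in a fixed order, so two terms differing only in their dummy labels become literally identical and can be combined:

```python
from collections import Counter

def canonicalize(term):
    # term: list of (tensor_name, tuple_of_index_names); an index appearing
    # twice is a summed (dummy) index and is renamed p, q, r, ... in
    # first-use order; free indices keep their labels
    counts = Counter(i for _, idx in term for i in idx)
    fresh, names = iter("pqrstuvw"), {}

    def rename(i):
        if counts[i] < 2:
            return i  # free index: leave it alone
        if i not in names:
            names[i] = next(fresh)
        return names[i]

    return [(t, tuple(rename(i) for i in idx)) for t, idx in term]

# two terms that differ only in their dummy-index labels
t1 = [("R", ("a", "b")), ("g", ("a", "b"))]
t2 = [("R", ("c", "d")), ("g", ("c", "d"))]
assert canonicalize(t1) == canonicalize(t2)
```

The real task in MathTensor is far harder, since tensor symmetries and derivative-reordering rules must also be respected, but this renaming step is the heart of recognizing that two scattered terms are the same.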
Take the first covariant derivative, with the MathTensor CD function, of the definition and continue until we have four derivatives. First derivative:
The Expand function can help to show all individual terms:
Second covariant derivative:
Third covariant derivative:
And finally, the fourth derivative:
The MathTensor ApplyRules function knows how to apply the rules we define to each term in a tensorial equation. After using the first set of rules, we get the next result. The first and third derivatives of σ are zero and the second-derivative rule lowers indices. We want to reorder the derivative indices alphabetically so we can combine the four-derivative terms:
The OrderCD function looks at the derivatives, reorders them alphabetically and generates the needed Riemann tensor terms. We apply sigmarules again and then Solve for the four-derivative limit:
This gives us the four-derivative coincidence limit rule we want:
Next, we define the new four-derivative rule and add it to the total sigmarules list. The RuleUnique function in MathTensor takes care of any summed indices:
Move on to the five-derivative case and apply the same set of operations as before. Take the new derivative of the four-derivative term. We begin to see that the equations are getting complicated:
Again, apply the most recent rules, order the indices, apply rules again and solve for the coincidence limit of five covariant derivatives:
We now have the five-derivative coincidence limit, which we add to the full rules list:
Finally, take the sixth derivative and apply all the same operations. This shows just how careful we have to be to make sure all indices are in exactly the right places. As I have said, doing this by hand can take weeks. One index out of place can invalidate the entire result:
Canonicalize will rename indices so that terms can combine:
We end up with this long expression for the six-derivative limit. Add it to the full list of rules for σ:
We have the final set of rules we need to calculate the a-coefficient coincidence limit:
One of the objects we need is obtained by summing the pairs of indices:
The Tsimplify function is powerful. It will find ways to combine terms by renaming indices:
MathTensor has a large set of rules—the RiemannRules—that recognize the symmetries of the Riemann tensor, its products and its derivatives. The ApplyRules function tells you which rules were used. Here are some examples of the rules about two covariant derivatives of the Riemann tensor built into MathTensor. If ApplyRules “sees” the structure on the left side of one of the rules, it substitutes the right side and renames the summation indices appropriately:
All that work to get to this result takes just seconds with MathTensor, and gives this correct intermediate result:
We can carry out all of the coincidence limits for σ, Δ, the a-coefficients and all their covariant derivatives. We then plug those into the Green’s function expression for the stress tensor VEV and get the divergence structure shown. From there, we can plug in a given spacetime metric and create a renormalized finite stress tensor. We put this into the right side of Einstein’s equations and investigate the back reaction problem. An example of this is discussed in the paper with Fulling. Since then, hundreds of papers have been published showing how point-splitting can be used. See the citations here.
Going back to computing exact solutions, Parker and I added the Components function into the MathTensor system. With this, an arbitrary metric structure can be proposed, and all the components of the Riemann tensor, Ricci tensor and Riemann scalar can be computed quickly, along with the affine connections. The Einstein differential equations can then be obtained and potentially solved via Mathematica’s equationsolving routines.
Over the years, other researchers have found clever ways to compute higher-order a-coefficients, to quite high orders as far as I know, but I don’t think anyone has extended the detailed structures of the stress tensor to such high levels.
Part of the problem is that the higher we go in products of the Riemann tensor, Ricci tensor, Riemann scalar and their derivatives, the harder it is to find the related rules. We want to create a basic set of products to write all possible terms as a linear combination. In 1980 I wondered what would happen if we tried to add torsion to our spacetimes. I wrote a paper on this while at the ITP at UCSB. It was clear that things would get out of hand very quickly. We needed more sophisticated math to figure this out. This involves very detailed group-theoretical arguments. Wybourne wrote software called Schur to help do the Lie and symmetric group calculations that might be useful. I helped Wybourne build and sell the software in the 1990s until his passing in 2003. Schur is now open source. I hoped to create a Mathematica version of Schur, but I have never gotten to it. Maybe others already have.
This work led to projects and consulting with Wolfram Research and Sun Microsystems (and eventually Oracle, which bought Sun), lasting four decades to date. In 1988 I started what ultimately became MathGroup for online discussions of Mathematica. I wrote about this in June of 2009. In addition, the opensource work for Sun and its Solaris operating system became www.sunfreeware.com, which is now www.unixpackages.com.
To finish, I want to express my deep gratitude to Stephen and all the Wolfram Research people who have made this work highly rewarding and fun.
Christensen, S. M. 1975. “Covariant Coordinate Space Methods for Calculations in the Quantum Theory of Gravity.” Ph.D. diss., University of Texas at Austin.
Christensen, S. M. 1976. “Vacuum Expectation Value of the Stress Tensor in an Arbitrary Curved Background: The Covariant Point-Separation Method.” Physical Review D 14, no. 10: 2490. https://doi.org/10.1103/PhysRevD.14.2490.
Christensen, S. M., and S. A. Fulling. 1977. “Trace Anomalies and the Hawking Effect.” Physical Review D 15, no. 8: 2088. https://doi.org/10.1103/PhysRevD.15.2088.
Christensen, S. M. 1978. “Regularization, Renormalization, and Covariant Geodesic Point Separation.” Physical Review D 17, no. 4: 946. https://doi.org/10.1103/PhysRevD.17.946.
Christensen, S. M., and M. J. Duff. 1979. “New Gravitational Index Theorems and Super Theorems.” Nuclear Physics B 154, no. 2: 301–342. https://doi.org/10.1016/0550-3213(79)90516-9.
Christensen, S. M. 1980. “Second- and Fourth-Order Invariants on Curved Manifolds with Torsion.” Journal of Physics A: Mathematical and General 13, no. 9: 3001. https://doi.org/10.1088/0305-4470/13/9/027.
Christensen, S. M. 1984. “The World of the Schwinger–DeWitt Algorithm.” In Quantum Theory of Gravity, Essays in Honor of the 60th Birthday of Bryce S. DeWitt, edited by S. M. Christensen. Boca Raton: CRC Press.
Christensen, S. M. 2019. “The Schwinger–DeWitt Proper Time Algorithm: A History.” In Proceedings of the Julian Schwinger Centennial Conference, edited by Berthold-Georg Englert. Singapore: World Scientific. https://doi.org/10.1142/11602.
DeWitt, B. S. 1965. Dynamical Theory of Groups and Fields. Philadelphia: Gordon & Breach.
Duff, M. J. 1994. “Twenty Years of the Weyl Anomaly.” Classical and Quantum Gravity 11, no. 6: 1387. https://doi.org/10.1088/0264-9381/11/6/004.
Fulling, S. A., R. C. King, B. G. Wybourne and C. J. Cummins. 1992. “Normal Forms for Tensor Polynomials. I. The Riemann Tensor.” Classical and Quantum Gravity 9, no. 5: 1151. https://doi.org/10.1088/0264-9381/9/5/003.
Parker, L., and S. M. Christensen. 1994. MathTensor: A System for Doing Tensor Analysis by Computer. Boston: Addison-Wesley Professional.
Editor’s Note: Information on the full functionality of the MathTensor package, and how to obtain it, can be requested by emailing the author at sunfreeware@gmail.com. The MathTensor book is available on Amazon.
What is the half-derivative of x?
Fractional calculus studies the extension of derivatives and integrals to such fractional orders, along with methods of solving differential equations involving these fractional-order derivatives and integrals. This branch is becoming more and more popular in fluid dynamics, control theory, signal processing and other areas. Realizing the importance and potential of this topic, we have added support for fractional derivatives and integrals in the recent release of Version 13.1 of the Wolfram Language.
The foundations of calculus were developed by Newton and Leibniz back in the seventeenth century, with differentiation and integration being the two fundamental operations of this subject.
Every student of calculus knows that the first derivative of the square function x² is 2x, while the result of integrating it is x³/3 (plus a constant), and that integration is essentially the inverse operation of differentiation (the integral of order n may be regarded as a derivative of order –n). However, speaking about derivatives, antiderivatives or integrals, we assume the order n is an integer.
What if the ideas of differentiation and integration could be extended to non-integer or even complex orders? This is done in the theory of fractional calculus, which generalizes the classical calculus notions of derivatives and integrals to fractional orders α such that the results of fractional operations coincide with the results of classical calculus operations when the order α is a positive integer (differentiation) or negative integer (integration). As shown in the following illustration, derivatives of other real orders “interpolate” between the derivatives of integer orders:
Fractional calculus is not a new subject. It has at least a two-century history starting from the two articles written by Niels Henrik Abel back in 1823 and 1826.
As explained in this article, fractional calculus was introduced in one of Abel’s early papers, where all the elements can be found: the idea of fractional-order integration and differentiation; the mutually inverse relationship between them; the understanding that fractional-order differentiation and integration can be considered as the same generalized operation; and even the unified notation for differentiation and integration of arbitrary real order.
Abel considered the generalized version of the tautochrone problem (also known as Abel’s problem) of how to determine the equation of the curve KCA along the slope from the prescribed transit time T = f(x), given as a function of the distance x = AB.
Abel obtained the integral equation for the unknown function φ(x), the determination of which makes it possible to find the equation for the curve itself. After several algebraic manipulations, this integral equation can be rewritten in a form involving what we now call the Caputo fractional derivative.
During the last two centuries, scientists from different areas and backgrounds worked on the theory of fractional calculus (considering it from different points of view). Hence, there are different approaches on how to define a fractional “differintegration” operation. Three of these definitions are the most popular and important in practice. We will talk about them in this blog post.
Let’s take the square function and derive the formula for the fractional derivatives using some simple algebraic manipulations. First, let’s calculate the n^{th}-order ordinary derivative of the square function:
Putting negative n in this formula, one might easily get the n^{th}-order antiderivative of this function:
Let’s take the formula for the n^{th}-order derivative of the square function and put a non-integer order n into it:
And what will we get if we take the n^{th}-order ordinary derivative of the latter function and substitute n = 1/2 there?
This is the first derivative of the square function! It is obtained via two “half-order fractional differentiation” procedures. One might easily verify that the antiderivative of the square function can be obtained via two similar half-order integration procedures (substituting –1/2 in the previously shown formulas).
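The repeated half-differentiation just described can be checked numerically with the power rule D^α x^k = Γ(k + 1)/Γ(k + 1 – α) x^(k – α). Here is a small Python sketch (Python stands in for the stripped Wolfram Language cells, and frac_deriv_power is a name chosen for illustration):

```python
from math import gamma

def frac_deriv_power(coef, k, alpha):
    """Riemann-Liouville fractional derivative of coef * x^k (for k > -1):
    D^alpha x^k = Gamma(k + 1) / Gamma(k + 1 - alpha) * x^(k - alpha).
    Returns the new (coefficient, exponent) pair."""
    return coef * gamma(k + 1) / gamma(k + 1 - alpha), k - alpha

# half-derivative of x^2 ...
c1, k1 = frac_deriv_power(1.0, 2, 0.5)   # ~1.5045 * x^1.5
# ... and a second half-derivative of the result
c2, k2 = frac_deriv_power(c1, k1, 0.5)   # recovers ~2.0 * x^1, i.e. 2x
```

Applying the half-order operation twice reproduces the ordinary first derivative 2x, exactly as in the text.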
So with this simple example, we show what fractional calculus is and how it is connected with, as well as how it generalizes, the classical version.
As integration is essentially the inverse operation of differentiation, we could define one united operation of differentiation/integration, which we call the differintegral: in the literature, this operator is written as {}_{a}D_{x}^{α}f(x), which stands for a fractional differintegral of order α of the function f(x) with respect to x and with the lower bound a. Fractional differintegrals depend on the values of the function f(x) over the whole interval back to the point a, so they use the “history” of the function. In practice, the lower bound is usually taken to be 0.
The Grünwald–Letnikov differintegral gives the basic extension of the classical derivatives/integrals and is based on limits:

{}_{a}D_{x}^{α}f(x) = lim_{h→0} h^{–α} ∑_{k=0}^{⌊(x–a)/h⌋} (–1)^{k} (α choose k) f(x – kh)
In practice, this approach is not very usable directly, as the limit involves approximations of the function at an ever-growing number of points.
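Even so, the truncated Grünwald–Letnikov sum is easy to try numerically. This Python sketch (gl_differint is an illustrative name; the weights follow the recurrence w_k = w_{k–1}(k – 1 – α)/k for (–1)^k (α choose k)) approximates the half-derivative of x² and should agree with the power-rule value Γ(3)/Γ(5/2) x^(3/2) to a few decimal places:

```python
from math import gamma

def gl_differint(f, x, alpha, a=0.0, n_steps=2000):
    """Truncated Grunwald-Letnikov sum: h^(-alpha) * sum_k w_k f(x - k h),
    with w_0 = 1 and w_k = w_{k-1} * (k - 1 - alpha) / k, which generates
    the coefficients (-1)^k * binomial(alpha, k)."""
    h = (x - a) / n_steps
    total, w = 0.0, 1.0
    for k in range(n_steps + 1):
        total += w * f(x - k * h)
        w *= (k - alpha) / (k + 1)  # advance to w_{k+1}
    return total / h ** alpha

# half-derivative of x^2 at x = 1 vs. the power-rule value
approx = gl_differint(lambda t: t * t, 1.0, 0.5)
exact = gamma(3) / gamma(2.5)
```

The convergence is only first order in the step size h, which is one reason the limit definition is rarely used for exact work.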
The Riemann–Liouville definition is:

{}_{a}D_{x}^{α}f(x) = (1/Γ(n – α)) (d^{n}/dx^{n}) ∫_{a}^{x} f(t)/(x – t)^{α–n+1} dt, where n = ⌈α⌉ is the smallest integer greater than or equal to α.
It rests on a solid and rigorous mathematical theory of fractional calculus. This theory is well developed, but the Riemann–Liouville approach has a couple of limitations that make it less suitable for applications in real-world problems.
The Caputo definition is:

{}_{a}D_{x}^{α}f(x) = (1/Γ(n – α)) ∫_{a}^{x} f^{(n)}(t)/(x – t)^{α–n+1} dt, where n = ⌈α⌉ as before.
There is some similarity between this and the Riemann–Liouville differintegral and, in fact, the Caputo differintegral can be defined via the Riemann–Liouville differintegral applied to the function minus the first n terms of its Taylor expansion about the lower bound a.
Obviously, for negative α, the Caputo fractional derivatives coincide with the Riemann–Liouville fractional derivatives.
The Caputo definition of fractional derivatives and integrals has many advantages in comparison with the Riemann–Liouville or Grünwald–Letnikov ones: first, it takes into consideration the values of the function and its derivatives at the origin (or, in general, at any lower-limit point a), which automatically makes it suitable for solving fractional-order initial-value problems using Laplace transforms. Also, the Caputo fractional derivative of a constant is 0 (while, in general, the Riemann–Liouville fractional derivative is not), hence it is more consistent with classical calculus.
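For 0 < α < 1 the Caputo integral can be evaluated numerically once the endpoint singularity is removed by the substitution t = x – s². The following Python sketch (caputo_half is an illustrative helper, not a library function) checks two of the claims above: the Caputo half-derivative of x² matches the Riemann–Liouville power-rule value, and the Caputo derivative of a constant is zero:

```python
from math import gamma, sqrt, pi

def caputo_half(fprime, x, n_steps=4000):
    """Caputo D^(1/2) f(x) with lower bound 0 (alpha = 1/2, so n = 1):
    (1/Gamma(1/2)) * integral_0^x f'(t) (x - t)^(-1/2) dt.
    The substitution t = x - s^2 removes the endpoint singularity, giving
    (2/sqrt(pi)) * integral_0^sqrt(x) f'(x - s^2) ds (midpoint rule)."""
    h = sqrt(x) / n_steps
    total = sum(fprime(x - ((i + 0.5) * h) ** 2) for i in range(n_steps)) * h
    return 2.0 * total / sqrt(pi)

# half-derivative of x^2 at x = 1: matches Gamma(3)/Gamma(5/2) ~ 1.5045
val = caputo_half(lambda t: 2.0 * t, 1.0)
# half-derivative of a constant: f' = 0, so the result is 0
zero = caputo_half(lambda t: 0.0, 1.0)
```

For negative α (fractional integration) the Caputo and Riemann–Liouville values coincide, so this same check works for either definition in the 0 < α < 1 differentiation range only.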
The following animation shows the behavior of the Caputo fractional derivatives of a square function in comparison with the ordinary ones—the fractional-order derivatives “interpolate” between the derivatives of integer orders:
We have implemented a function called FractionalD in Wolfram Language Version 13.1. This function computes the Riemann–Liouville fractional derivative of order α of the function f(x).
As an example, let’s calculate the half-order fractional derivative of a cubic function:
Now verify this result using the Riemann–Liouville definition:
Repeating the half-order fractional differentiation procedure leads to the ordinary derivative of the cubic function:
The following calculation recovers the initial function using three nested fractional integrations:
Now let’s compute the arbitrary fractional-order derivative of this cubic function, make a table of its values for specific orders and plot the list of derivatives:
Next, let’s compute the 0.23-order fractional derivatives of the Exp and BesselJ functions:
Here, we show the fractional derivative of the MeijerG superfunction, as it is a very important theoretical case: the fractional derivatives of MeijerG are given in terms of another MeijerG function:
As a final example, we present a table of the α^{th} fractional and n^{th} ordinary derivatives for a few common special functions:
In Wolfram Language 13.1, CaputoD gives the Caputo fractional derivative of order α of the function f(x).
As mentioned previously, the Caputo fractional derivative of a constant is 0:
For negative orders of α, the CaputoD output coincides with FractionalD:
Now, let’s compute the 0.23-order Caputo fractional derivative of the Exp function:
Compute the half-order Caputo fractional derivative of the BesselJ function:
And as a final example, we present the half-order Caputo fractional derivatives of some common mathematical functions:
Fractional differential equations (FDEs) are differential equations involving fractional derivatives d^{α}/dx^{α}. These are generalizations of the ordinary differential equations (ODEs) that have attracted much attention and have been widely used in engineering, physics, chemistry, biology and other fields. In most of their applications, FDEs involve relaxation and oscillation models.
Here is an example in which we solve an FDE using the powerful DSolve function, which was heavily updated in Version 13.1 to support FDEs:
This solution is given in terms of the MittagLefflerE function, which is the basic function for fractional calculus applications. Its role in the solutions of FDEs is similar to the role and importance of the Exp function for the solutions of ODEs: any FDE with constant coefficients can be solved in terms of Mittag–Leffler functions.
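The Mittag-Leffler function itself is just a power series, E_α(z) = ∑_{k≥0} z^k/Γ(αk + 1), so a truncated version is easy to sketch in Python (mittag_leffler here is an illustrative helper, not the built-in MittagLefflerE). For α = 1 it reduces to the exponential, which is exactly why FDE solutions generalize the exponential solutions of ODEs:

```python
from math import gamma, exp

def mittag_leffler(alpha, z, terms=60):
    """Truncated series E_alpha(z) = sum_{k=0}^{terms-1} z^k / Gamma(alpha*k + 1).
    Converges rapidly for moderate |z| since Gamma grows factorially."""
    return sum(z ** k / gamma(alpha * k + 1) for k in range(terms))

# alpha = 1 relaxation: E_1(-t) equals the classical decay exp(-t)
val = mittag_leffler(1.0, -1.0)
```

A solution such as E_α(–t^α) of the fractional relaxation equation can therefore be evaluated the same way for any 0 < α ≤ 1.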
Now, let’s plot the previous solution:
As a more interesting example, we solve the equation of a fractional harmonic oscillator of order 1.9:
The behavior of this fractional harmonic oscillator is very similar to the behavior of the ordinary damped harmonic oscillator:
Plot these solutions and compare them:
This example clearly demonstrates that the order of an FDE can be used as a controlling parameter to model some complicated systems.
Another method for solving FDEs is via the Laplace transformation of the equation (i.e. transforming the initial FDE to some algebraic equation). We’ve also added LaplaceTransform support for FDEs in Version 13.1:
Now, calculating the inverse Laplace transform of this solution, we will immediately get the same solution obtained via DSolve:
Here at Wolfram Research, we are constantly updating the Wolfram Language, covering more and more topics that could be revolutionary and push scientists to start innovative research in their areas of study.
In Wolfram Language 13.1, we have implemented two basic operators for fractional calculus (the FractionalD and CaputoD functions), and also made a huge effort to add support for solving fractional differential equations via DSolve and LaplaceTransform. We have also updated the algorithms of the MittagLefflerE functions, as they have crucial importance in the theory of fractional calculus. You can learn more about this from both the blog post “Launching Version 13.1 of Wolfram Language & Mathematica” by Stephen Wolfram and the New in Wolfram Language 13.1 webinar series.
Also, I would like to acknowledge the work done by my colleagues Aram Manaselyan and Hrachya Khachatryan on the implementation of fractional calculus in the Wolfram Language; the invaluable contribution of Professor Oleg Marichev to the theory of fractional calculus and symbolic computational algorithms within it; and Devendra Kapadia for managing the project as well as for valuable remarks and critical comments on this text.
Recognizing the importance of these topics and the powerful capabilities of the Wolfram Language for signal processing, we set out to develop a fully interactive course about signals, systems and signal processing to make the subject accessible to a wide audience. After we shared and reviewed course materials, notes and experiences collected from university undergraduate-level courses over many years, the resulting Wolfram U course came to represent the collaborative efforts of two principal authors, Mariusz Jankowski and Leila Fuladi, and a team of knowledgeable staff. It is our great pleasure to introduce to you the new, free, interactive course Signals, Systems and Signal Processing, which we hope will help you understand and master this difficult but tremendously important and exciting subject.
The topics discussed here are a mainstay of almost every electrical, computer and biomedical engineering program in the United States and around the world, and have been for at least the last 30 years. They provide a gateway to more advanced engineering topics such as control, communications, digital signal processing, image processing, machine learning and more. They lie at the core of many applications: audio and image processing, data smoothing, analysis of genomic data such as DNA sequences, imaging processes in MRIs, Internet of Things services and other AI-enabled systems. Thus, with its concise but comprehensive content and its many fully worked-out examples and exercises, the course should be of great value not only to current and future engineering students, but also to any engineer, researcher or self-learner wishing to review or master the concepts and methods of signals and systems.
Want to get started? Explore the interactive course by clicking the following image before reading the rest of this blog post.
Mariusz Jankowski has used Mathematica and the Wolfram Language since 1995 and is a developer of image processing functionality in the language. He is a professor of electrical engineering at the University of Southern Maine and has received awards from Ames Laboratory, Wolfram Research and the University of Southern Maine.
It has been my observation, widely shared by many engineering educators, that a signals and systems course is one of the more difficult in a student’s undergraduate experience. Many struggle with the mathematical skills required to deal with the multitude of concepts and methods introduced. Therefore, from the very first days of teaching such a course, over 20 years ago, I have been trying to use the state-of-the-art algebraic, numerical and graphical capabilities of the Wolfram Language to help students overcome some of the barriers they face in mastering its content. Signals, Systems and Signal Processing is therefore the culmination of many years of continued experimentation with the Wolfram Language in developing lecture notes, examples, illustrations, exams and quizzes, all greatly assisted by the feedback, sometimes positive and sometimes not so much, that I have received from hundreds of my students. I hope you will enjoy watching, reading and interacting with the course materials as much as I have enjoyed developing them.
Leila Fuladi is a certified Wolfram instructor and technical content developer with Wolfram Research. She has years of teaching experience at the university undergraduate level in a range of mathematics and engineering subjects.
My experience has been that once a topic is presented to students, it helps with the learning if the students are invited to solve the examples together with the instructor and think about how the idea presented in the lesson can be applied to the example. For each of the examples in this course, the videos typically show two solution methods: using the Wolfram Language and a “step-by-step” method using the traditional paper-and-pencil method of solving problems. To solve the examples on your own, you can use paper and pencil or test your Wolfram Language code in the embedded scratch notebook. I have worked diligently to keep the videos at a manageable length, focusing on the important ideas and examples. You can go over a topic in a short amount of time or learn at your own pace. Signal processing is a very interesting topic where you get to apply simple and beautiful mathematical ideas to solve great problems. I hope you enjoy this course and learn a lot!
It should not surprise you that the methods and techniques presented in this course bear the names of great mathematicians. For example, Leonhard Euler formally discovered the solution methods for many types of differential equations, in particular a type that electrical engineers use to model electrical circuits and thus allow them to analyze, simulate and design them. Jean-Baptiste Fourier initiated the investigation of the Fourier series, which eventually developed into Fourier and harmonic analysis. The Fourier transform, in both continuous time and discrete time, plays a prominent part in this course. Then we have Pierre-Simon Laplace, who introduced a powerful integral transform that is now a fundamental tool in both systems analysis and the design of an important class of electrical, mechanical and chemical systems. Finally, of great importance in the course is the sampling theorem, which carries the names of Harry Nyquist and Claude Shannon, whose work bridged the gap between continuous-time and discrete-time signals and systems and ushered in the age of today’s signal processing.
Students taking this course will get a typical college-level introduction to signals, linear systems and signal processing. As such, both continuous-time and discrete-time signals and systems are included and presented in parallel, taking advantage of their many similarities and, occasionally, important differences. The course begins with elementary signals and operations on signals and continues with a basic introduction to the properties of linear time-invariant systems. This is then followed by time-domain analysis of systems (differential and difference equations, system responses and convolution), frequency-domain analysis (Fourier series, the Fourier transform and the frequency response of linear time-invariant systems) and Laplace and z-transforms. Finally, the all-important topic of sampling is presented. The course concludes with introductions to both analog and digital filter design.
Here’s a sneak peek at some of the course topics (shown in the left-hand column):
✕

It is assumed that students are familiar with college-level algebra, trigonometry, complex variables and basic calculus. A background in electrical circuits is useful, as circuits are used as common examples of linear time-invariant systems, but it is, strictly speaking, not necessary. The course is tightly integrated with the Wolfram Language, showing how the many formulas and calculations are implemented. Importantly, in addition to evaluations using the Wolfram Language, the examples and exercises include detailed step-by-step derivations. This is to assist those students who want to see the details of each calculation and help those for whom the primary assessment modality at their university is a paper-and-pencil exam or test.
The next few sections of the blog post will describe the different components of the course in detail.
The course consists of 33 carefully selected lessons and videos. The videos, one for each lesson, range from 7 to 15 minutes in length, and each video is accompanied by a transcript (lesson) notebook displayed on the right-hand side of the screen. Copy and paste Wolfram Language input directly from the transcript notebook to the embedded scratch notebook to try the examples for yourself. Watching the videos and taking the 8 quizzes could take about 10 hours.
Each lesson is approximately 10–20 slides long and begins with a topic overview, some definitions, discussion of key concepts, several example calculations and sometimes an extended application example.
This course begins with an introduction to its basic concepts: signals, systems, sampling and signal processing. The remaining topics cover the typical breadth and depth of an undergraduate-level academic course on the subject and include convolution, differential and difference equations, Fourier series and the Fourier transform, Laplace and z-transforms, sampling and more.
Here is a short version of one of the lessons:
There are 120 examples in this course. Some of the examples are designed to help explain the concept discussed in a lesson, while others give an application of the theoretical concept. Throughout the course, there are examples on data processing, audio and image processing, modeling electrical circuits and designing and applying filters.
Most of the examples are solved using Wolfram Language functionality and also include step-by-step solutions that go over each calculation by hand to ensure understanding of the different concepts and methods. Here is an example from the lesson on continuous-time Fourier series:
Obtain the Fourier coefficients of the square wave shown.
✕

This shows the Wolfram Language solution:
The given square wave has period and therefore . This gives the Fourier coefficients:
✕

Here are values of the coefficients for :
✕

And here is the step-by-step solution:
The Fourier series analysis formula:
Substitute and for :
Integrate:
Replace with :
Simplify the last expression to get:
and
for
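The specific parameter values in the derivation above were lost in extraction. As a generic, self-contained sketch (all values assumed, not the article's), the analysis formula c_n = (1/T) ∫₀ᵀ f(t) e^(−2πint/T) dt for a unit-amplitude square wave of period T = 2 can be evaluated as:

```wl
(* Assumed square wave: 1 on the first half-period, 0 on the second *)
T = 2;
f[t_] := Piecewise[{{1, 0 <= t < 1}}, 0];
c[n_] := (1/T) Integrate[f[t] Exp[-I 2 Pi n t/T], {t, 0, T}];
Table[c[n], {n, 0, 5}] // Simplify
```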
Many of the examples are interactive. The user can vary one or more parameters to easily explore the solution space of a problem. For example, this shows the Fourier transform of a sampled signal as the sampling frequency is varied:
The following is a short excerpt of the video for lesson 13 that shows a discrete-time convolution application used to perform data smoothing on average daily temperatures (using WeatherData), which were recorded over a period of approximately four years in Portland, Maine.
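A minimal sketch of such a smoothing computation (the city entity, date range and seven-day window are assumptions; WeatherData requires internet access):

```wl
(* Smooth daily mean temperatures with a 7-day moving average via discrete convolution *)
portland = Entity["City", {"Portland", "Maine", "UnitedStates"}];
temps = QuantityMagnitude@
   WeatherData[portland, "MeanTemperature", {{2018, 1, 1}, {2021, 12, 31}, "Day"}]["Values"];
smoothed = ListConvolve[ConstantArray[1/7., 7], temps];
ListLinePlot[{temps, smoothed}, PlotLegends -> {"daily", "7-day average"}]
```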
Each lesson (except for the first) includes a set of 5–11 exercises to review the concepts covered in that lesson. There are, in total, 230 exercises. Here is one of them:
Determine the z-transform and the ROC for the shifted unit step sequence .
Directly from BilateralZTransform, you get:
✕

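The exercise’s sequence was lost in extraction; assuming it is the unit step shifted by one sample, u[n − 1], a check with BilateralZTransform might look like:

```wl
(* Bilateral z-transform of the shifted unit step u[n - 1] *)
BilateralZTransform[UnitStep[n - 1], n, z]
```

For |z| > 1 this should reduce to 1/(z − 1).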
This course is designed for independent study, so detailed solutions are also provided for all exercises, as per this example:
Directly from the definition:
With you get
Finally
, for
The notebooks with the exercises are interactive, so students can try variations of each problem in the Wolfram Cloud. In particular, they are encouraged to change the signal or system parameters and experience the awesome power of the Wolfram Language.
Each course section concludes with a short, multiple-choice quiz with 10 problems. The quiz problems are at roughly the same level as those covered in the lessons, and a student who reviews the sections thoroughly should have no difficulty in doing well on the quizzes.
Here is one of the quiz problems:
✕

Students will receive instant feedback about their responses to quiz questions, and they are encouraged to go back to a section’s lesson notebooks for reference and to review the material as many times as needed.
Students should watch all the lessons and problem sessions and attempt the quizzes in the recommended sequence because course topics often rely on earlier concepts and techniques. At the end of the course, you can request a certificate of completion. A course certificate is earned after watching all the lessons and passing all the quizzes. It represents proficiency in the fundamentals of signal processing and adds value to your resume or social media profile.
Mastering the fundamental concepts of signals, systems and signal processing is essential for students in electrical, computer and biomedical engineering, as well as other fields where signal processing is used. We hope that this course will help you to achieve this mastery and contribute to your success in your chosen field. Any comments regarding the current course as well as suggestions for future courses are welcome and deeply appreciated.
The authors would like to thank Shadi Ashnai, Cassidy Hinkle, Joyce Tracewell, Andy Hunt, Laura Crawford, Mariah Laugesen, Abrita Chakravarty, Matt Coleman and Bob Owens for their dedicated work on various aspects (lessons, exercises, graphics, workflow, etc.) of the course.
Want more help? Register for one of Wolfram U’s Daily Study Groups. 
What is the 56th digit of π? (Nine.) How fast is a wolf’s heartbeat? (It’s 80–110 beats per minute.) What is the estimated average airspeed velocity of an unladen European swallow? (Wait a sec….)
Where did these answers come from? Wolfram|Alpha. This knowledge engine has been around since 2009 and is often associated with mathematics, given its computational abilities. That said, there’s more to it than just math. It connects to knowledge bases that extend beyond STEM, and depending on how you use it, it can be a wellspring for creativity.
Read on for four things you might not know about this powerful bit of edtech.
Not sure how to use Wolfram|Alpha in your classroom? On the Wolfram|Alpha site, you’ll find links to several educator resources. Although Wolfram|Alpha is often used as a standalone tool by students, teachers can incorporate it into lessons. There are problem sets, example topics and more.
✕

Some of the resources, such as Wolfram Problem Generator, are free. Others, like the step-by-step solutions add-on, require a subscription. It’s worth exploring what’s available to see what best suits your needs, whether it’s for a single project or an integrated curriculum.
Students can use Wolfram|Alpha to explore history from time-based perspectives, performing calculations and pulling up raw data. That said, history is more than just dates! This topic page gives ideas on how to use Wolfram|Alpha for history class.
✕

What if students compared two eras to brainstorm ideas for a research paper? Could they calculate the spans of time between two big events, then figure out why they took place on drastically different time scales? What about incorporating math into your history class with discussions of historical money or numbers?
This feature isn’t free, but it is powerful. On the Web Apps page, you’ll find a list of some of the apps, including a personal finance assistant and sun exposure assistant, that can be connected with Wolfram|Alpha’s search and calculation capabilities. There are apps for education, work and personal use.
✕

You can see a teaser of each app by clicking its link from the Web Apps page. This will allow you to explore possible uses without a subscription. For example, here is the Calculus Course Assistant.
✕

Take a look beneath the Wolfram|Alpha search bar and you’ll find not only an option for “Natural Language,” but for “Math Input” as well. This somewhat new feature allows you and your students to input queries through figures and formulas. This can capture complexity that would otherwise be lost when asking something mathy.
By default, the search will load with Natural Language enabled. You can switch back and forth as needed. If you have access to a Pro subscription, there’s even an option to input images and data!
Sign up for Wolfram|Alpha Pro to access customizable settings, step-by-step solutions, increased computation time and more.