*Particle and Particle Systems Characterization: Small-Angle Scattering (SAS) Applications*

by Wilfried Gille

Small-angle scattering (SAS) is the premier technique for the characterization of disordered nanoscale particle ensembles. SAS is produced by the particle as a whole and does not depend in any way on the internal crystal structure of the particle. Since the first applications of X-ray scattering in the 1930s, SAS has developed into a standard method in the field of materials science. SAS is non-destructive and can be applied directly to solid and liquid samples.

This book is geared to any scientist who might want to apply SAS to study tightly packed particle ensembles using elements of stochastic geometry. After completing the book, the reader should be able to demonstrate detailed knowledge of the application of SAS for the characterization of physical and chemical materials.

*Computer Algebra in Quantum Field Theory: Integration, Summation and Special Functions*

by Carsten Schneider and Johannes Blümlein

The book focuses on advanced computer algebra methods and special functions that have striking applications in the context of quantum field theory. It presents the state of the art and new methods for (infinite) multiple sums; multiple integrals, in particular Feynman integrals; and difference and differential equations in the format of survey articles. The presented techniques emerge from interdisciplinary fields: mathematics, computer science, and theoretical physics; the articles are written by mathematicians and physicists with the goal that both groups can learn from the other field, including most recent developments. Besides that, the collection of articles also serves as an up-to-date handbook of available algorithms/software that are commonly used or might be useful in the fields of mathematics, physics, or other sciences.

*Mathematics for Physical Science and Engineering*

by Frank E. Harris

*Mathematics for Physical Science and Engineering* is a complete text in mathematics for physical science that includes the use of symbolic computation to illustrate the mathematical concepts and enable the solution of a broader range of practical problems. Due to the increasing importance of symbolic computation and platforms such as *Mathematica*, the book begins by introducing that topic before delving into its core mathematical topics. Each of those subjects is described in principle and then applied through symbolic computing. The text is designed to clarify and optimize the efficiency of the student’s acquisition of mathematical understanding and skill and to provide students with a mathematical toolbox that will rapidly become of routine use in a scientific or engineering career.

*Multimedia Maths*

by Bieke Masselis and Ivo De Pauw

*Multimedia Maths* provides an accessible guide to understanding and applying the mathematics behind basic software applications, including the golden section, co-ordinate systems, collision detection, vectors, and parameters.

Screen effects and image handling are then explained in greater depth, building and developing on the basic transformations.

More advanced multimedia themes of quaternion rotation, fractal texture, Bézier curves, and B-splines are deconstructed and usefully linked to an interactive website that includes *Mathematica* files.

*A Math Primer for Engineers*

by Colin Walker Cryer

Mathematics and engineering are inevitably interrelated, and this interaction will steadily increase as the use of mathematical modeling grows. Although mathematicians and engineers often misunderstand one another, their basic approach is quite similar, as is the historical development of their respective disciplines. The purpose of this *Math Primer* is to provide a brief introduction to those parts of mathematics that are, or could be, useful in engineering, especially bioengineering. The aim is to summarize the ideas covered in each subject area without going into exhaustive detail. Formulas and equations have not been avoided, but every effort has been made to keep them simple in the hope of persuading readers that they are not only useful, but also accessible.

The wide range of topics covered includes introductory material such as numbers and sequences, geometry in two and three dimensions, linear algebra, and calculus. Building on these foundations, linear spaces, tensor analysis, and Fourier analysis are introduced. All these concepts are used to solve problems for ordinary and partial differential equations. Illustrative applications are taken from a variety of engineering disciplines, and the choice of a suitable model is considered from the point of view of both the mathematician and the engineer.

This book will be of interest to engineers and bioengineers looking for the mathematical means to help further their work, and it will offer readers a glimpse of many ideas that may spark their interest.

*Financial Hacking: Evaluate Risks, Price Derivatives, Structure Trades, and Build Your Intuition Quickly and Easily*

by Philip Maymin

This book teaches financial engineering in an innovative way by providing tools and a point of view to quickly and easily solve real, front-office problems. Projects and simulations are not just exercises in this book, but its true backbone. You will not only learn how to do state-of-the-art simulations and build exotic derivatives valuation models, you will also learn how to quickly make reasonable inferences based on incomplete information. This book will give you the expertise to make significant progress in understanding brand new derivatives given only a preliminary term sheet, thus making you valuable to banks, brokerage houses, trading floors, and hedge funds.

*Financial Hacking* is not about long, detailed mathematical proofs or brief summaries of conventional financial theories; it is about engineering-specific, useable answers to imprecise, but important questions. It is an essential book both for students and for practitioners of financial engineering.

MBAs in finance learn case-method and standard finance mainly by talking. Mathematical finance students learn the elegance and beauty of formulas mainly by manipulating symbols. But financial engineers need to learn how to build useful tools, and the best way to do that is to actually build them in a test environment, with only hypothetical profits or losses at stake; this book gives graduate students and others who are looking to move closer to trading operations the opportunity to do just that.

*Introduction to Quantitative Methods for Financial Markets*

by Hansjörg Albrecher, Andreas Binder, Volkmar Lautscham, and Philipp Mayer

Swaps, futures, options, structured instruments—a wide range of derivative products is traded in today’s financial markets. Analyzing, pricing, and managing such products often requires fairly sophisticated quantitative tools and methods. This book serves as an introduction to financial mathematics with special emphasis on aspects relevant in practice. In addition to numerous illustrative examples, algorithmic implementations are demonstrated using *Mathematica* and the software package *UnRisk* (available for both students and teachers). The content is organized in 15 chapters that can be treated as independent modules.

In particular, the exposition is tailored for classroom use in a bachelor’s or master’s program course, as well as for practitioners who wish to further strengthen their quantitative background.

According to Roux, the advantage of *Mathematica* is that it frees them to focus on more important tasks. They don’t have to worry about the minutiae of financial analysis because they can access statistical libraries, link to databases, and import Excel files easily with *Mathematica*’s built-in functionality.

Before discovering *Mathematica*, the two analysts routinely encountered problems. In addition to slower calculations, they were unable to trust the reliability of their results. Now they can get a model down on paper, implement it immediately, and adapt their financial strategies to the different needs of the market. And *Mathematica*’s versatile graphing makes it easy to share their findings with those outside the financial industry.

Watch Roux and Fellous discuss how they use *Mathematica* in risk management, model validation, and pricing tool creation.

https://www.youtube.com/watch?v=upF06HxjohA

*This video is in French, so be sure to click the CC button in the lower right-hand corner for captions.*

You can view other *Mathematica* success stories on our Customer Stories pages.

I will show that such errors are quite unlikely to occur in *Mathematica* because of its explicit comprehensive functional programming style, which not only makes coding simpler and quicker, but allows for rapid sensitivity tests for robustness. The current debacle came to light after Reinhart and Rogoff (R&R) finally released their dataset, which was then analyzed in detail by Thomas Herndon, Michael Ash, and Robert Pollin (HAP 2013) from the University of Massachusetts Amherst. I am not going to replicate their results, but rather show how similar analyses can be done in *Mathematica* while also addressing related problems of causality and conditional probability estimation that were not addressed in the initial publication.

The data for the original investigation can be obtained here. I have adapted it to contain the important variables of Debt/GDP and Real GDP (RGDP) Growth that are the basis of analysis in all the papers discussed above. The one main area of difference is that I use three-year RGDP Growth as a variable rather than the usual one-year approach. This is because any real measure of policy determination for the $16 trillion US economy would not look just one year into the future, but three years, to properly assess its effects. This is important to keep in mind because my RGDP Growth figures will appear to be approximately three times those in R&R’s paper.

These two variables, Debt/GDP and three-year RGDP Growth, are the only additional columns that I have made to the original R&R Excel workbook, RR.xls. I have also included all the historical data in order to include important data points such as the 1930s and the Great Depression as well as the financial crises in the 1870s, 1890s, and immediately before WWI, as opposed to most studies that only looked at data after 1945. These data points were especially important for our current economic situation because they were significant financial crises in which monetary policy was largely irrelevant, exactly as we have now. For additional details about the data, see the footnote at the bottom of the blog post.

I imported R&R’s data into *Mathematica* from their Excel spreadsheet, giving an array of data about the economic variables of debt and GDP growth for each country:

These are the countries whose economic variables were studied:

In this analysis, I am most interested in the Debt to GDP ratio and three-year RGDP Growth variables. This differs from the original R&R analysis, where they concentrated on one-year RGDP Growth. This is to allow for three-year prediction intervals rather than limiting the analysis to just one year ahead. Let’s take a look at these two variables for all the countries that are in our data sheet.

Using the conclusions from R&R’s original 2010 paper as well as HAP’s corrected calculations based on R&R’s methodology, which are described in Table 3 of HAP’s recent review article, I find the following comparative results represented graphically. The blue bars represent R&R’s original results. The green bars show HAP’s corrected results.

Notice that the HAP results revise R&R’s negative growth rate of -0.1% for countries whose Debt/GDP exceeds 90% upward by 2.3 percentage points. Furthermore, the difference in growth rates from the 60–90% cohort to the >90% group is not statistically significant.

If you accept the approach adopted by R&R, that debt is a driving factor detrimentally affecting growth, then you should be able to test that notion by regressing growth upon debt and studying the consequences. Performing that analysis gives the following linear regression between Y: RGDP % annual Growth and X: nominal Debt/GDP ratio, Y = -0.0229 X + 3.8998, as described by Paul Krugman in his blog post. It shows that each percentage-point increase in Debt/GDP reduces annual RGDP growth by merely 0.0229%, not including the intercept constant. I can adopt the same argument that Krugman and Brad DeLong make: even if this relation is causal, running from high debt to slower future growth, it still does not warrant cuts in government spending now. The effect of government spending on GDP growth is called the fiscal multiplier, and the IMF Fiscal Monitor recently estimated the multiplier to be about 1.5. This means that every dollar spent by the government leads to $1.50 of GDP growth.

So should we cut spending now, resulting in lost jobs and reduced growth, in order to avoid a possible growth decline a decade later? Using estimates of fiscal multiplier, current US marginal tax rate, and linear regression between real GDP annual growth rates and the nominal Debt/GDP ratios, I visualize the effect on GDP a decade later.

From the interactive chart above, you can see that a reduction of 2% GDP now results in a statistically insignificant 0.23% GDP growth 10 years from now! And conversely, if we boost government spending by 2% of GDP now, the net effect will be to lower GDP by only 0.23% a decade later.
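The back-of-envelope arithmetic behind these figures can be sketched in a few lines of Python (the post itself works in *Mathematica*). The regression slope and the multiplier are the values quoted above; the 10-point Debt/GDP change over a decade is a hypothetical illustrative assumption standing in for what the interactive chart computes:

```python
# Regression quoted above (via Krugman): RGDP growth = -0.0229 * (Debt/GDP) + 3.8998
slope = -0.0229          # % annual growth change per percentage point of Debt/GDP
multiplier = 1.5         # IMF Fiscal Monitor estimate of the fiscal multiplier

# Cutting spending by 2% of GDP now lowers GDP by multiplier * 2% today...
spending_cut = 2.0                        # % of GDP
gdp_lost_now = multiplier * spending_cut  # % of GDP lost immediately

# ...while the avoided debt changes future growth only through the tiny slope.
# Assume (hypothetically, for illustration) the cut lowers Debt/GDP by about
# 10 percentage points over the decade:
debt_change = 10.0                            # percentage points (assumed)
future_growth_effect = -slope * debt_change   # % of GDP a decade later

print(gdp_lost_now)                  # 3.0 (% of GDP lost now)
print(round(future_growth_effect, 2))  # 0.23 (% of GDP gained a decade later)
```

The trade-off is stark: a certain 3% of GDP forgone today against roughly a quarter of a percent recovered ten years out.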

A different type of analysis would be to look at individual countries and see whether regressions of RGDP Growth against Debt/GDP show a downward trend in growth for higher levels of debt. The following tab display gives these regressions for all the different countries:

Looking at individual countries, I can observe the slope of the regression at very high levels of debt to see which way it is trending, and I find roughly as many countries trending slightly up as trending slightly down. But the real observation is that for many, the slope at 90% Debt/GDP is not significantly different from zero, showing that changes in debt at that level have little effect on GDP. Seven countries are flat or indifferent at the 90% mark, five are trending down, three are trending up, and five lack the relevant data, probably because they have either never reached this threshold or did so when other reliable data was not available. So out of the 15 countries I can study at the 90% debt level, 10, or two-thirds, of them are either no worse off or appear to be growing, while one-third appear to be getting worse. This does not appear to support the notion that only calamity can result when we cross the magical 90% Debt/GDP threshold.

The real popularity of R&R’s paper rested on their graph of mean growth rates at different levels of debt, which showed a sudden cliff at the 90% Debt/GDP level, dropping to -0.1% RGDP Growth. No one was able to replicate this result, even using R&R’s previous data from their book *This Time Is Different*. In fact, most other researchers insisted that the obviously better approach was to use medians rather than means when doing statistical analysis. When R&R adopted this scheme themselves, the so-called growth cliff also disappeared, but remarkably they did not infer from this that their analysis was not robust or that the distributions were inconsistent. When, three years later, they eventually released their data, the result was shown to be an illusory byproduct of an Excel coding error: when their own analysis was redone correctly, the growth cliff disappeared.

In the analysis below I have not used their debatable weighting scheme, but instead have given equal weight to each data point {Debt/GDP, 3 Yr RGDP Growth} for each country, which should be the default case. Also I am looking at three-year growth rates, not one-year increments. I can see below that there is no magical growth cliff at the 90% Debt/GDP level, and that if there is any significant result, it appears that some high-debt countries in the 60–90% range showed high mean growth while the median performance was slightly poorer than for lower debt levels. This indicates that growth responses to large debt differed considerably between countries, with some responding positively, which does not fit in with the R&R notion that large debt is always bad.

A more sophisticated analysis can be done using conditional probability estimates by looking at the distributions associated with the data. Here I look at the overall conditional probability of RGDP Growth given Debt > 90% and compare it with the case when Debt lies between 60 and 90%. I see that their CDF structures are remarkably similar, suggesting similar probability distributions. This is also inconsistent with the R&R results, which suggest a different regime at higher levels of debt.
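The comparison of the two conditional distributions can be sketched with empirical CDFs. A minimal Python version follows; the growth observations are hypothetical stand-ins for the two debt-conditioned samples (the post builds the real CDFs in *Mathematica*):

```python
def empirical_cdf(sample):
    """Return a function F with F(x) = fraction of sample values <= x."""
    data = sorted(sample)
    n = len(data)
    def F(x):
        # count of observations at or below x, as a fraction of the sample
        return sum(1 for v in data if v <= x) / n
    return F

# Hypothetical 3-year RGDP Growth observations, conditioned on debt bin
growth_debt_60_90 = [2.1, 3.4, 5.0, 6.2, 7.8, 9.1]   # 60% < Debt/GDP <= 90%
growth_debt_over_90 = [1.9, 3.2, 5.1, 6.0, 7.5, 9.3]  # Debt/GDP > 90%

F1 = empirical_cdf(growth_debt_60_90)
F2 = empirical_cdf(growth_debt_over_90)

# Maximum vertical distance between the two CDFs (a Kolmogorov-Smirnov-style
# statistic): a small gap indicates remarkably similar distributions
grid = sorted(growth_debt_60_90 + growth_debt_over_90)
max_gap = max(abs(F1(x) - F2(x)) for x in grid)
print(max_gap)
```

With these stand-in samples the maximum gap is only 1/6, i.e. the two conditional distributions are nearly indistinguishable, which is the shape of the conclusion drawn from the real data above.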

Another important issue with the R&R approach is that they implicitly argue that high debt causes slow growth, but it is also possible that the causation runs the other way and that slow growth with a lack of revenues drives up the debt.

The above instances of Italy and Japan show that this is at least as likely as the other possibility. I can investigate this further by putting all the data together and doing regressions for the last three years and the next three years of real GDP growth.

If it is the case that high debt causes exceptionally slow growth, then this should show up as a significant regression at the 90% level on future growth, whereas if it is the other way around, I should see significant regression on past growth.

From the regression and parameter tables above, I find a very small, statistically insignificant positive slope of the trend line for future growth at 90% Debt and a correspondingly mild negative slope of the trend line on past growth, which is also not statistically significant. So the slope of the regression curve appears to be flat in both directions, and hence causation cannot be determined conclusively by regression approaches.

One important criticism of the R&R approach, especially after the work of HAP, is that they never explicitly identified their weighting methodology of country mean growth rates for different debt bins. This has been described above, and it raises the question of how different weighting mechanisms might work. Suppose for instance that I sort the data according to their debt levels and give higher weightings to those data points that have higher debt levels on a sliding linear scale, starting at 1 for equal weighting up to 11 times the weight for the highest levels of debt when compared to the lowest levels of debt. I can see that this increases the significance of debt on future growth, but only at very high levels of distortionary weighting.

Another approach would be to look more closely at the distributional differences between the four Debt/GDP bins that R&R use. If I use their four bins of debt levels and look at the overall histograms, I see that while the first two levels are comparable, the last two are different from the first two but are themselves comparable. This was also suggested from the above probability analysis.

Using the sparse *Mathematica* code given above, I show that high levels of debt are not a necessary barrier to GDP growth, that there are at least as many countries doing well with high debt as are doing badly, and that it is at least as likely that the causation runs the other way, with slow growth causing high debt. If so, we are damaging the economy by cutting back spending now for fear of invoking the wrath of the debt demons. Since these high-level analyses can be carried out conveniently in *Mathematica*, the errors made by R&R using Excel could easily have been avoided.

Because each spreadsheet had similar variables in different columns for different countries, the additional computations for Debt/GDP and three-year Real GDP Growth were done using simple Excel assignments. All variables are given in terms of percentages. The Debt/GDP variable was determined by taking ratios of corresponding debt and GDP values in the same year for the same country. If there were multiple debt and GDP columns starting at different times, as often occurred when countries switched currency regimes (for instance, when European countries all went onto the euro), then those new ratios were utilized at the later starting dates. The three-year Real GDP Growth variable was determined by looking at the percentage rate of change of RGDP over three-year periods, and again when there were multiple RGDP columns starting at different times, the new RGDP Growth series was adopted at the later starting dates.

Download this post as a Computable Document Format (CDF) file.

Download the Excel data.

One set of new capabilities that *Finance Platform* 2 introduces is a major enhancement to the way financial analysis is deployed: automated report generation.

Report Generation allows you to create documents quickly and easily using Wolfram *Finance Platform* documents. Since Report Generation is built on *Finance Platform*’s Computable Document Format interface, it’s easy to add it into your normal workflow.

Data for the report can come from a variety of sources, such as the result of a computation, a database query, or *Finance Platform*’s integrated computation data source or integrated market data streams. Portfolio performance, risk analyses, and market/economic outlook are just a few of the applications that can take advantage of Report Generation.

How it all works:

Report Generation uses templates to define the various elements that will appear in a report. Think of the template as a blueprint, containing the style and structure of your report, as well as instructions for how data is inserted into the report.

Within the template there are two types of special markers for information in the report: template variables and evaluation expressions.

Template variables are temporary placeholders that are replaced by expressions such as text, graphics, and function names when the report is generated. The specific expression that’s inserted for each template variable is specified when the report is generated, so that the same template can be used in any number of applications.

Evaluation expressions, on the other hand, are code snippets that are evaluated automatically when the report is generated. These are useful for including additional information with the report, such as the date on which it was run, without having to specify the value manually.
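The distinction between the two kinds of markers can be illustrated with a minimal Python analogue; Report Generation itself is a *Finance Platform* feature, and `string.Template` here merely mimics the idea (the portfolio name and fixed run date are hypothetical):

```python
import datetime
import string

# A template with a "template variable" ($portfolio, supplied by the caller)
# and an "evaluation expression" ($run_date, computed at generation time)
template = string.Template(
    "Performance report for $portfolio, generated $run_date."
)

def generate_report(template, **variables):
    # Evaluation expressions are filled in automatically when the report runs
    # (a fixed date is used here so the example is reproducible)
    auto = {"run_date": datetime.date(2013, 1, 15).isoformat()}
    # Template variables are supplied per call, so one template serves
    # any number of different reports
    return template.substitute({**auto, **variables})

report = generate_report(template, portfolio="Global Equity Fund")
print(report)
# Performance report for Global Equity Fund, generated 2013-01-15.
```

The same template can be reused with a different `portfolio` value without touching the date logic, which is exactly the separation of concerns described above.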

The data in your report can be organized in a hierarchical manner using template groups, which can generate nested subgroups in your report without having to copy and paste sections in the report template. This also makes report templates highly scalable, requiring no modification to expand the scope of an application.

Report Generation also includes options for input cells that allow you to evaluate code when generating the report, hide the input code after evaluation, or even delete code after results have been generated.

When a report template is complete, the `ReportGenerate` function allows you to generate reports using the template on demand, or it can be scheduled to run automatically.

To run a report, simply provide `ReportGenerate` the name of the template you’d like to use and the replacement rules for any template variables included in the report.

`ReportGenerate` also takes an optional argument for an output file, and generated reports can be exported in a variety of different formats, including CDF, PDF, and HTML.

You can find more information on all the features of Wolfram *Finance Platform* 2 at the product page.

Join our Virtual Seminar showcasing all of the new functionalities and features »

A simple way to explore diversification within the stock market is to invest in stocks from different sectors or different geographic regions. Beyond stocks, investors can consider diversification in different asset classes such as bonds, commodities, or real estate.

The following chart shows the S&P 500 and Dow Jones Industrials indices, indicators of return that move in sync with each other. You can download the Computable Document Format (CDF) version of this post below to execute this code yourself.

Cumulative return shows how much an investment changes over time. The similarity of the two plots above shows the highly correlated nature of S&P 500 and Dow Jones Industrials.

The numerical correlation is nearly 1, an indication that their returns track each other extremely well.

Therefore, allocating an investment between Dow Jones Industrials and S&P 500 is not a good strategy for diversification.

In order to achieve effective diversification, we need to find asset classes with return correlations that are either small or negative, indicating that their returns either don’t track each other at all or move in opposite directions. Below, I have selected three asset classes—stock, commodity, and fixed income. Specifically, I’ve chosen the Dow Jones Industrials (DJI), gold (GLD), and the US Treasury Index Fund (TUZ) as a sample portfolio. Let’s calculate the correlations among their returns since 2010.

The correlation matrix shows how pairs of assets are related: a value of 1 indicates that the corresponding pair of assets go up and down in perfect synchronization, a value of 0 indicates there is no relationship between their fluctuations, and a value of -1 indicates that when one goes up, the other goes down by the same amount.
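The correlation matrix itself is straightforward to compute from return series. Here is a stdlib-only Python sketch using hypothetical daily returns for the three tickers (the post computes the real matrix in *Mathematica* from market data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length return series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily returns for the three assets in the sample portfolio
returns = {
    "DJI": [0.010, -0.005, 0.007, 0.002, -0.003],
    "GLD": [0.002, 0.004, -0.001, 0.003, 0.001],
    "TUZ": [-0.008, 0.006, -0.005, -0.001, 0.004],
}
names = list(returns)
corr = [[pearson(returns[a], returns[b]) for b in names] for a in names]

for name, row in zip(names, corr):
    print(name, [round(v, 2) for v in row])
```

The diagonal is exactly 1 (each asset is perfectly correlated with itself), and the matrix is symmetric, since correlation does not depend on the order of the pair.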

Thus you can see that gold and the Dow Jones Industrials’ returns are hardly related at all, while the Treasury Fund Index tends to move in somewhat the opposite direction of the Dow Jones Industrials.

A matrix is helpful in that it provides precise information. However, once we expand our search to a wider investment universe, a large matrix will be cumbersome to interpret. Can we visually represent this correlation information? Graph theory provides a good solution.

From the Graphs and Networks: Concepts and Applications Wolfram Training course I attended recently, I learned that `AdjacencyGraph` gives a graph representation of a matrix. To illustrate, I first define matrix *m*:

In this matrix, 1 means a pair of vertices is connected, while 0 means that it is not. An arrow connects two nodes of the corresponding graph exactly when there is a 1 in the corresponding location of the adjacency matrix. In matrix *m* above, vertex 1 (reading from the row of the matrix) and vertex 2 (reading from the column) are connected. Correspondingly, an arrow is drawn from vertex 1 to vertex 2 in the adjacency graph. Stepping through the rows and columns of matrix *m*, the adjacency graph of matrix *m* is completed.
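The same convention is easy to mimic in plain Python: an entry `m[i][j] == 1` yields an arrow from vertex i to vertex j (1-based labels to match the description; the small matrix below is a hypothetical example, not the one in the post):

```python
# A small directed adjacency matrix: m[i][j] == 1 means an arrow i -> j
m = [
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
]

# Step through the rows and columns, emitting an edge wherever there is a 1
edges = [(i + 1, j + 1)               # 1-based vertex labels
         for i, row in enumerate(m)
         for j, entry in enumerate(row) if entry == 1]
print(edges)  # [(1, 2), (2, 3), (3, 1)]
```

Reading row 1, the 1 in column 2 produces the arrow 1 → 2, and so on through the matrix, giving a three-vertex directed cycle.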

To visualize relationships between returns in the portfolio above as a graph, I start with the correlation matrix. Since the correlation matrix is already in a matrix form, all I need to do is to turn it into something that can be represented by `AdjacencyGraph`. To do so, we can first define a threshold below which the entries in the correlation matrix will be 0 and above which the entries will be 1.

Here I define the threshold to be 0.
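With a threshold of 0, the conversion amounts to keeping only the positive off-diagonal entries. A Python sketch of the step, on a hypothetical correlation matrix for the three assets (the post performs this in *Mathematica* on the real matrix):

```python
# Hypothetical correlation matrix for DJI, GLD, TUZ (in that order)
corr = [
    [1.00, 0.05, -0.30],
    [0.05, 1.00, 0.10],
    [-0.30, 0.10, 1.00],
]
threshold = 0.0

# Entries above the threshold become 1 (an edge); everything else becomes 0.
# The diagonal is zeroed so that no vertex is linked to itself.
n = len(corr)
adj = [[1 if i != j and corr[i][j] > threshold else 0 for j in range(n)]
       for i in range(n)]
print(adj)  # [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

With these stand-in numbers, DJI and TUZ end up with no edge between them (their correlation is negative), while each is connected to gold, matching the structure described below.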

From those thresholded correlations, `AdjacencyGraph` yields a graph.

This graph shows that the US Treasury Index Fund (TUZ) and the Dow Jones (DJI) are negatively correlated, since there is no edge connecting these two, and that the correlations among these two assets and gold are positive.

Let’s see if the graph theory technique can be applied to a bigger set of investment vehicles. I have first defined a bigger portfolio. It consists of all the members of the Dow Jones, a few commodities, and a few bonds. I am only interested in the correlation of returns since the beginning of this year.

In the graph below, I have chosen a correlation coefficient threshold of 0.58. If the return correlation between two assets is below 0.58, there is no edge connecting the assets. Thus, a pair of investments is connected if they are highly correlated.

As we can see, there are some distinct features of this graph. First of all, there are many assets whose returns are not strongly correlated with any other asset; they appear as isolated vertices in the graph. Secondly, there are five connected subgraphs. We can display those connected subgraphs for a closer inspection.

The first subgraph consists of a few members of the Dow Jones Industrials.

The second subgraph consists of members of a few bond funds.

The third and fourth subgraphs are connected graphs with two vertices each. One of them connects silver and gold; the other connects Verizon and AT&T.

These findings make sense. Traditionally, asset allocation between equity and bond provides a good diversification strategy. In the subgraphs, equities are indeed separated from bonds. What is interesting is that since the beginning of this year, Verizon/AT&T and J.P. Morgan Chase/Bank of America are in camps of their own, tracking each other closely, but unrelated to the rest of the Dow Jones Industrials members. Gold and silver are separated from the rest of the commodities.

One of the immediate analyses we can perform on the subgraph is to find out which groups of stocks tend to move in sync. Let’s take a look at g1, the connected subgraph that has a few members of the Dow Jones Industrials. To find subgroups of stocks in which every pair of stocks is connected, I can ask for the maximum clique within g1 using `FindClique`.
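The idea behind `FindClique` can be imitated by brute force in a few lines of Python. This is fine for the handful of vertices in a subgraph like g1, though real clique finders are far cleverer; the graph below is a hypothetical stand-in, not the actual g1:

```python
from itertools import combinations

# Hypothetical undirected graph on Dow-style tickers, stored as an edge set
edges = {("AA", "CAT"), ("AA", "DD"), ("CAT", "DD"), ("CAT", "GE"), ("DD", "GE")}
vertices = sorted({v for e in edges for v in e})

def is_clique(group):
    # Every pair in the group must be connected by an edge
    return all((a, b) in edges or (b, a) in edges
               for a, b in combinations(group, 2))

# Largest group of mutually connected vertices (a maximum clique):
# try the biggest candidate sizes first
max_clique = max((set(c)
                  for k in range(len(vertices), 0, -1)
                  for c in combinations(vertices, k) if is_clique(c)),
                 key=len)
print(max_clique)
```

In this toy graph the maximum clique has three members; any such triple is a group of stocks in which every pair is connected, i.e. a group that has tended to move in sync.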

The implication is that since the beginning of the year, those stocks have all tended to move in sync with each other. We can verify this claim from the cumulative return plots below.

However, for diversification purposes, we are looking for the assets whose returns are not highly correlated and therefore have correlation coefficients that are below the preset threshold value.

In a graph representation, there will be no connection between these diversified assets. Without a connection to other vertices, the diversified assets form a set of vertices no two of which share an edge (an independent vertex set).

There are many such independent sets. To include as many assets as possible, we can use `FindIndependentVertexSet`, which finds a maximum number of independent vertices.
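`FindIndependentVertexSet` can likewise be sketched by brute force on a small hypothetical graph: search for the largest vertex set containing no edge at all (the tickers below echo the pairs found above but are illustrative only):

```python
from itertools import combinations

# Hypothetical graph: two connected pairs plus one isolated vertex
edges = {("GLD", "SLV"), ("VZ", "T")}
vertices = ["GLD", "SLV", "VZ", "T", "IBM"]

def independent(group):
    # No two vertices in the group may share an edge
    return all((a, b) not in edges and (b, a) not in edges
               for a, b in combinations(group, 2))

# Largest independent set: try the biggest candidate sizes first
best = max((set(c)
            for k in range(len(vertices), 0, -1)
            for c in combinations(vertices, k) if independent(c)),
           key=len)
print(best)  # e.g. one vertex from each connected pair, plus the isolate
```

Here the best we can do is three vertices: one from each correlated pair plus the isolated vertex, which is precisely the diversification logic described above.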

Let’s highlight those vertices in our graph with stars:

If you were to choose a subset of investments from the stars in the graph, you’d have a well-diversified portfolio.

Graph theory has helped to determine which asset classes are highly correlated with one another and which are not. From the graph representation, the relationships between asset classes can be easily seen, the assets that are highly correlated with others can be quickly identified, and diversification between assets can be understood more intuitively.

We can certainly expand this type of analysis to include more asset classes and a longer (or different) time period. You can find all the necessary code in this post so you may explore more of the finance web.

*A big thanks goes to our graph theory developer Charles Pooh for all the very helpful discussions on this blog post.*

Download this post as a Computable Document Format (CDF) file.

This is a major new initiative for us to create the ultimate computation environment for finance. It builds on our existing computational technology with extra capabilities and professional support services.

As part of this, I spent some time interviewing finance customers in the city of London about what they liked and didn’t like about *Mathematica*, what they wanted, and why some of their colleagues didn’t use it.

It turned out to be an exercise in staying open-minded and listening—because while we are naturally most excited about all the great computation that we provide (including “finance” functionality), time and again the issues that mattered were the less glamorous workflow improvements.

We have been focused on improving workflows more generally, for example supporting great data analysis, statistics, and visualization with highly automated import and export and reporting. The flow from data to analysis to CDF report is now a smooth process.

But looking at the specific needs of one industry segment reveals further optimizations. The new Bloomberg feed link is one such example. Being able to flow live, trading quality data directly into a computation or to create a CDF where the charts update with the market has now become trivial because we took that extra step to make the connection seamless.

Applying one of our core principles—to automate as much as possible—makes this feature particularly powerful. The link includes a parameter discovery interface that automatically generates API code. This means that, armed with a new trading idea, you can prototype fast with our analysis capabilities and just paste in the generated code to call the live data, and you can be ready to trade your new idea in minutes. Feed that into some of the built-in visualizations, and you can have a live CDF-powered dashboard to watch in just a few more minutes. One message I have been hearing repeatedly is that this kind of “algorithmic agility” is a key competitive advantage in finance.

Check out the new Wolfram *Finance Platform*, and watch this space for further improvements to both finance functionality and other optimized workflows that are in Wolfram *Finance Platform*‘s development pipeline.

Computation has always been at the center of what Wolfram does… but finance hasn’t been, at least not until now.

In recent months, the team has been taking our ultimate computation environment and specializing it for finance.

It’s amazing some of the results our technology readily achieves in this domain–whether it’s a user-customizable market data explorer of Bloomberg data feeds, financial derivative valuations with GPUs, or automated reporting with interactivity.

We’re previewing the Wolfram *Finance Platform* at our March 27 virtual conference with an introduction from Conrad Wolfram. Join us!

This virtual event will be held on Tuesday, March 27, at the following times:

* 8–11:30am Eastern Daylight Time (EDT); 1–4:30pm British Summer Time (BST)

* Repeat session: 1–4:30pm EDT; 6–9:30pm BST

Virtual seats are limited–see the event schedule and register today!

]]>As bonds are bought, their prices go up and the corresponding yields drop. Looking at the U.S. yield curve using Wolfram|Alpha at the end of July and eight weeks later below, the yields on long-term 10-year treasury bonds have dropped from 2.82% to 1.84%, which is a historic 50-year low. This shows that investors are now more likely to buy long-term U.S. government debt, which is puzzling behavior if they are panicked by the S&P downgrade.

Instead, it may have been the news about the redetermination of the size of the 2008–2009 decline in GDP from 4.8% to 8.9% that caused the panic, or recent evidence that the economy has stalled and might possibly decline in the second half of the year. When coupled with the sovereign debt crisis in peripheral European countries like Portugal, Italy, Greece, and Spain, this has driven fears of a “double dip” back into another recession.

The return to plummeting asset prices leads one to ask how prevalent such events have been in the U.S. economy since 1980. We choose 1980 because there is a distinct difference in the external conditions and rate of growth of the markets from World War II to 1980 and from 1980 to the present. The economy had 2.56 times the annualized rate of growth during the latter period compared to the former. This is shown below:

Another reason for discerning different regimes, obvious from the `GraphicsColumn` below, is that volatilities have been substantially higher since 1980 than previously. This can be shown by first defining the annualized volatility for a particular equity or index and the year we wish to study, and then comparing the volatilities for 1979 and 2008, where the latter is four times the former.
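The post defines this in Mathematica; as a language-agnostic sketch in Python of what "annualized volatility" means here (synthetic prices, and 253 trading days per year as in the post):

```python
import math

def annualized_volatility(prices, trading_days=253):
    """Std. deviation of daily log returns, scaled to an annual horizon."""
    returns = [math.log(p2 / p1) for p1, p2 in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var) * math.sqrt(trading_days)

# Synthetic illustrations, not real index data:
calm = [100 * 1.001 ** i for i in range(100)]            # steady drift
jumpy = [100 if i % 2 == 0 else 103 for i in range(100)]  # 3% swings daily

assert annualized_volatility(calm) < annualized_volatility(jumpy)
```

A steadily drifting series has near-zero volatility, while the series that swings 3% every day annualizes to roughly 47%, which is the kind of contrast the 1979-versus-2008 comparison reveals.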

A final economic reason for examining the period after 1980 is that the late 1970s were complicated by the exogenous shock of the rapid rise in OPEC-controlled oil prices. Studying the post-1980 period, we can now discern a number of peaks and valleys that correspond to known financial events. These include the stock market crash of October 1987, seen here as a small blip on the graph below. The two other peaks are the “Tech” crash of 2000–2001 and its ensuing recession, and the financial crisis that ran from December 2007 until mid-2009.

If we had not known about the crash of 1987, it might have been hard to spot on the graph above, so what we really need is a `Manipulate` GUI that will allow us to move over the graph at will for a given equity and over a specified time range. In this way we can study the ups and downs of the market interactively. In the GUI below we identify starting and ending points in terms of years and the equity prices we wish to analyze. It is now easy to see how significant the 1987 crash was. A further point to note is that the recovery times from each successive crash are getting significantly longer: two years for the 1987 crash, seven years for the 2001 crash, and more than four years (and counting) for the current crisis. We can see that the severity of the market crashes is increasing, as measured by the length of the ensuing downturns.

If we need further convincing, we can use Renko trading charts that show the up and down patterns behind the data. A Renko chart is drawn as a series of fixed-height bricks, where a new brick is drawn in a new column whenever the price rises above the top of the previous brick, or drops below the bottom of the previous brick, by a fixed amount. Studying such a chart, day by day, for the October 1987 period shows what a massive downturn it was, brought about by the savings and loan bank debacle.
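The post builds these charts with Mathematica's built-in trading-chart functionality; as a rough Python sketch of the brick logic just described (prices and brick size are hypothetical):

```python
def renko_bricks(prices, brick=5.0):
    """Return a list of (low, high, direction) bricks; direction is +1 or -1.
    A new brick appears only when the price moves a full brick beyond the
    top or bottom of the last brick, filtering out small fluctuations."""
    bricks = []
    low = high = prices[0]
    for p in prices[1:]:
        while p >= high + brick:          # extend upward, one brick at a time
            bricks.append((high, high + brick, +1))
            low, high = high, high + brick
        while p <= low - brick:           # extend downward
            bricks.append((low - brick, low, -1))
            low, high = low - brick, low
    return bricks

prices = [100, 102, 107, 111, 104, 96, 90, 95]
bricks = renko_bricks(prices, brick=5.0)
# Two up bricks followed by three down bricks -- the 5-point dips at 104
# and the bounce back to 95 never print a brick at all.
assert [b[2] for b in bricks] == [1, 1, -1, -1, -1]
```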

However, what these charts do not reveal is the extent of the downturn. For that we need to create plots that not only show the price range, but also calculate the size of the price decline from the maximum price to the minimum price within the specified time period and for a fixed equity. The function `FinancialDataRange` defined below accomplishes this task.

When we apply `FinancialDataRange` to periods in 1987, 2007–2009, and the recent downturn, we get the following plots and percentage decline computations.

Again, it would be even better if we could wrap the above function inside a `Manipulate` that lets you choose the crash period and index to study the effects. This is given below, where we show the recent downturn for the Dow Jones Industrial Average.

What strategies do investors employ to protect their assets in such times? One option is to buy protection in the form of financial options that act as insurance policies for their portfolio positions. *Mathematica* has a very powerful function `FinancialDerivative` that allows one to determine the price of such options, and we can use it to consider a hypothetical case where one wants to take advantage of the recent “crash”.

Pretend that it is Friday, July 29, and you have just heard that S&P has downgraded the USA’s credit reliability, and further, you have had a week full of doom and gloom both about European sovereign debt and the stalling of the U.S. economy. You heavily suspect that the market will take a hit next week. You cannot be sure of course, because anything can happen, but you want to put money on the hunch that the market is going down. To do so, you buy put options, which give you the right but not the obligation to sell equities for a fixed price (called the strike) at a future date, say at the end of eight weeks, on September 23.

For the sake of example, let’s choose the S&P 500 futures, called the ES Mini, and the DJIA futures. The smallest change in price of the underlying index is called a tick and is one quarter of one unit. A tick costs $12.50 for the ES Mini and $2.50 for the DJIA. In other words, a one-unit change in the ES Mini and the DJIA costs $50 and $10 respectively. These are called the futures price multipliers and have to be taken into account to determine the final profit.
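The multiplier arithmetic can be spelled out in a few lines of Python (tick costs as given above; the 30-unit move is an arbitrary example):

```python
# A tick is one quarter of an index unit, so the per-unit dollar
# multiplier is 4x the tick cost.
TICK_COST = {"ES Mini": 12.50, "DJIA": 2.50}  # dollars per quarter-unit tick
multiplier = {name: 4 * cost for name, cost in TICK_COST.items()}
assert multiplier == {"ES Mini": 50.0, "DJIA": 10.0}

# A 30-unit drop in the ES Mini is then worth 30 * $50 = $1500 per contract.
assert 30 * multiplier["ES Mini"] == 1500.0
```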

When evaluating options on forward values, we usually use futures options. But here the underlying asset is actually a multiple of the index, so we will calculate the appropriate multiples of vanilla American put options on stock indices.

Other parameters need to be used as inputs in the American exercise put option, such as dividend yield, volatility, and risk-free interest rate. The expiration is 40/253 because we will exercise 40 trading days into the future. The interest rate is taken from the three-month treasury bill, which was calculated from Wolfram|Alpha at the top of the blog at 0.1%. The dividend yield at the end of July was 0.68% for the S&P 500 and 3.00% for the DJIA.

And we can use `FinancialData` to determine the 50-day volatility, from which we can then calculate the annual volatility by multiplying by √(253/50).

Now we contact our broker late on Friday and tell him to buy over-the-counter (OTC) two-month (eight-week) puts on the ES Mini and DJIA futures when the market opens on Monday morning. Usually exchange traded futures have expiration dates in multiples of months, but for the purposes of this example, we will price eight-week OTC puts to investigate options that essentially replicate short selling. The opening prices are given by `FinancialData`.

The strike price for a put is always below the actual price so that an immediate profit cannot be made. We choose strikes as high as possible to increase the profit margin. Let us suppose that the highest valued puts we can acquire that morning are written at 1280 and 12100 for the S&P 500 and DJIA respectively. We can now calculate the values of the puts on the underlying indices:

So we can now see what the actual closing prices were at the end of eight weeks for the S&P 500 and the DJIA.

The value of the put on the futures is the difference `Max[K - S_T, 0]` between the strike and the final price, multiplied by the appropriate futures factor, minus the purchase price of the put and the broker’s charge, which is a fixed fee that we can ignore since it doesn’t alter the structure of the discussion.
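As a quick Python sketch of this payoff formula (the premium and final index level below are hypothetical, and the broker's fixed fee is ignored as in the text):

```python
def put_profit(strike, final_price, multiplier, premium):
    """Profit on a put on an index future: Max[K - S_T, 0] * multiplier - premium."""
    return max(strike - final_price, 0) * multiplier - premium

# e.g. an ES Mini put struck at 1280 with the index closing at a
# hypothetical 1160, bought for a hypothetical $3000 premium:
profit = put_profit(strike=1280, final_price=1160, multiplier=50, premium=3000)
assert profit == 3000  # (1280 - 1160) * $50 - $3000

# If the market had rallied instead, only the premium is lost:
assert put_profit(1280, 1350, 50, 3000) == -3000
```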

As we can see, these are substantial, but our hunch could have been wrong and the market could have gone against us or recovered more quickly than expected. So a proper analysis requires that we consider the possible profits for various scenarios for the indices as well as choose different strikes, since only some limited number of strikes may have been available. These different payoffs are displayed below, including red dots to indicate the original profits calculated.

This exercise illustrates the reality that financial institutions whose job is to evaluate risk and sovereign debt can often fail to correctly assess fundamental economic phenomena—after S&P incorrectly determined America’s debt risk, its president stepped down. Massive gyrations in the stock market are normal, but they are not in keeping with traditional economic theory, which emphasizes equilibrium models that do not cope well with exogenous shocks.

Download this post as a Computable Document Format (CDF) file.

*The examples offered here are only hypothetical, for the purposes of showing how Mathematica functionality can be used to analyze and invest in the market. Wolfram Research is not a financial institution and is not responsible for trading losses as a result of using its technology.*

We will cover a variety of topics, including how to:

- Customize analysis by leveraging application-specific functions with a high-level programming language
- Automatically apply any of the known trading indicators
- Perform code optimization and parallel processing of Monte Carlo simulations
- Easily create reports, charts, and interactive GUIs for presentations
- Reduce development time by working entirely within one environment

To give you a feeling for the topics that will be covered in the finance workshop, I’ve included a few examples below that are a small selection of the new financial technology in Version 8.

One of the most significant innovations is the ability to value a wide range of financial options. In fact, almost 100 different types of financial derivatives are supported. `FinancialDerivative` takes three lists of transformation rules that allow you to specify the option in terms of its name and exercise type, its basic parameters, and its ambient time-specific parameters such as the current price, plus an optional list that lets you ask for additional properties like the Greeks. In the following example we can insert derivative valuations into one of *Mathematica*‘s visualization functions like `ListPlot3D` to obtain the return surface of a call spread between different strikes:
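`FinancialDerivative` handles American exercise and many other styles out of the box. As a minimal stand-in, here is a Python sketch that values a European call spread with the closed-form Black–Scholes formula (all parameter values are hypothetical):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_call(S, K, T, r, sigma, q=0.0):
    """Black-Scholes price of a European call with dividend yield q."""
    d1 = (math.log(S / K) + (r - q + sigma ** 2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * math.exp(-q * T) * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def call_spread(S, K_low, K_high, T, r, sigma):
    """Long a call at the lower strike, short a call at the higher strike."""
    return bs_call(S, K_low, T, r, sigma) - bs_call(S, K_high, T, r, sigma)

v = call_spread(S=100, K_low=95, K_high=105, T=0.5, r=0.01, sigma=0.25)
# A call spread is always worth something, but never more than the strike gap:
assert 0 < v < 10
```

Evaluating `call_spread` over a grid of spot prices and strikes gives the kind of return surface the post plots with `ListPlot3D`.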

Another important aspect of financial calculations is the recently enhanced probability and statistics functionality. There are now 126 different types of distributions, including many that are important to financial evaluation, like the Copula and Lévy distributions. The Copula distribution became infamous in the financial crisis when it was discovered that the Copula was routinely miscalculated using the multinormal variant when the underlying mortgage distributions were anything but normal. *Mathematica* offers 11 variants of the Copula distribution that can be fitted to actual data.

*Mathematica* can access reams of price data and fundamental economic properties using `FinancialData`. For instance, you can study the erratic behavior of the S&P 500 in the new millennium:

If you are also interested in trading using `FinancialData`, there are almost 100 trading indicators to choose from. These indicators can be shown either superimposed upon the trading chart or displayed beneath it.

There are also many trend reversal charts, a common technique for studying price patterns.

We also have parallelized option pricing using GPU technology, which is accessed by loading the `CUDALink` package:

We can now easily plot the return surface of an Asian arithmetic put spread between different strike prices. First we create the prices and expiration periods, and then include those prices and expirations as complete lists within `CUDAFinancialDerivative`:

Space is limited for the workshop and seminar, so register now.

All workshop attendees will receive a code for a 20% discount on a professional license of *Mathematica* at the conclusion of the event.

Visit the event web page for further details about time, location, schedule, and more.

]]>Using *Mathematica*‘s `TradingChart` command, we can witness the stock market’s immediate negative reaction to the downgrade. Note: Mouse over the chart to see the daily open, high, low, and closing price of the S&P 500 index.

Now that the world’s largest economy has lost its AAA status, what does it mean for the rest of the world? Will the debt downgrade spread? What is the implication to the bond market and the economic recovery? Can our economic well-being be measured by the amount of debt that we carry?

Indeed, there is much to think about. To fully understand the economic landscape, we first need to take a look at the U.S. fiscal situation. Wolfram|Alpha can be a handy entry point:

It is alarming to see a $14 trillion figure for our total government debt. However, we should understand the composition of the U.S. debt to make a meaningful judgment. We can ask Wolfram|Alpha to show a breakdown of the debt figure:

Note: Wolfram|Alpha provides a convenient way to generate this subpod without coding. You can do so by clicking the “+” sign on the top right-hand corner of the desired subpod from a full Wolfram|Alpha output. Please see the instructions here.

The subpod above shows a general breakdown of who owns U.S. government securities. The amount held by the public is the amount held by investors outside the federal government, including the Federal Reserve; the amount held by agencies and trusts is intragovernmental debt.

Usually economists will look at the debt-to-GDP ratio as one of the indicators of the health of an economy. The debt-to-GDP ratio is the amount of national debt of a country as a percentage of its gross domestic product (GDP).

The two most commonly used debt-to-GDP ratios are total debt-to-GDP ratio, which reflects the indebtedness of a nation, and the public debt-to-GDP ratio, which reflects the government’s finances. We can ask Wolfram|Alpha for these two ratios:

The units are quoted in years, indicating the amount of debt as a multiple of one year’s GDP.

Since the government is responsible for the public debt and not the accumulated private debt, the (U.S. debt held by public)/GDP is the ratio to consider for U.S. government securities’ credit worthiness.

Besides asking for the value of debt-to-GDP ratio, we can take a look at the time series of such ratios to see the change of the debt level as a percent of GDP.

We notice that the public debt soared after the 2008 financial crisis. This increase can be due to an increase in government insurance programs such as unemployment insurance, Medicare, and Medicaid.

The problem with obsessing about debt is that it ignores that there are two sides to any ledger, and that someone’s debt is another person’s asset. The problem with debt is not its size, but the uneven distribution among income classes. Basically 95% of the U.S. is in debt to the top 5% in terms of income distribution. If the debt is unevenly distributed, then income is unevenly distributed.

The Gini index is a common measure of such distortion of income distribution. A value of 0 means total equality and a value of 1 means total inequality. On consulting Wolfram|Alpha, we discover that the U.S. has a Gini index of 0.408, ranked 69th highest in the world. While we are not at either extreme in the Gini index distribution, this implied distortion of income could explain why there has been a lack of income, therefore demand, from the middle class.
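For a concrete sense of the index, here is a Python sketch using the mean-absolute-difference form of the Gini coefficient on made-up incomes (the 0.408 U.S. figure above comes from Wolfram|Alpha, not this code):

```python
def gini(incomes):
    """Gini coefficient: 0 = total equality, 1 = total inequality."""
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of |a - b| over all ordered pairs, normalized by 2 * n^2 * mean.
    diff_sum = sum(abs(a - b) for a in incomes for b in incomes)
    return diff_sum / (2 * n * n * mean)

assert gini([1, 1, 1, 1]) == 0.0                 # perfect equality
assert abs(gini([0, 0, 0, 100]) - 0.75) < 1e-9   # one person holds everything
```

With a finite population, the "one person holds everything" case gives (n − 1)/n rather than exactly 1, which is why four incomes yield 0.75.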

Another major driver of the increase in our debt is the use of deficit spending to stimulate the economy, known as Keynesian economics. Wolfram|Alpha provides the amount of deficit as a percentage of GDP. By plotting the time series of the deficit percentage, we can clearly see the most recent increase in our deficit (since the 2008 financial crisis). Therefore, most of the stimulus program is deficit spending.

The basic point is that if government spends $1, albeit a borrowed dollar, to stimulate the economy, the national income may grow by more than $1. We can use the ratio of the change in national income to the change in government spending, called the fiscal multiplier, to see the effect. If the fiscal multiplier is greater than 1, then the overall increase in national income is greater than the initial incremental amount of spending. However, if the fiscal multiplier is less than 1, the government spending might not have been as effective.
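The multiplier logic reduces to simple arithmetic, sketched here in Python with hypothetical figures:

```python
def fiscal_multiplier(delta_income, delta_spending):
    """Change in national income per dollar of additional government spending."""
    return delta_income / delta_spending

# Hypothetical: $100B of stimulus raising national income by $150B...
assert fiscal_multiplier(150, 100) == 1.5   # spending more than pays its way
# ...versus only $80B of income growth:
assert fiscal_multiplier(80, 100) < 1       # spending was less effective
```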

Another consequence to consider as a result of government borrowing is a concept called crowding out. It refers to a reduction in private consumption or investment due to government borrowing. The assumption is that there is a limited amount of available loanable funds in the market. The increase in government borrowing pushes up the demand for loanable funds and therefore increases the real interest rate. The result is a decrease in interest-sensitive expenditures from the private sector.

Note that crowding out only happens when the economy is at full capacity, with negligible unemployment and above-zero interest rates. So did the government’s stimulus program create crowding out in the U.S. economy? If the answer to this question is yes, then the demand for money, and hence bond yields, would increase. Many pundits predicted such an outcome, but the results as indicated below were in keeping with Keynesian predictions, and bond yields have dropped dramatically.

We can ask Wolfram|Alpha for an up-to-date unemployment rate:

Below is the unemployment rate of the last five years:

And the treasury yield:

With a 9.1% unemployment rate and 10-year treasury note yields at a 50-year low, we can safely conclude that the recent deficit spending did not crowd out private expenditure.

So if crowding out is not a factor, why is it that we have yet to see the economic recovery?

When the monetary policy of increasing money supply (lowering interest rates) fails to stimulate the economy, a liquidity trap might be to blame. In John Maynard Keynes’ words in *The General Theory of Employment, Interest and Money*: “There is the possibility… that after the rate of interest has fallen to a certain level, liquidity preference is virtually absolute in the sense that almost everyone prefers cash to holding a debt at so low a rate of interest. In this event, the monetary authority would have lost effective control.”

How do we know if we are in a liquidity trap?

In macroeconomics, the identity

M × V = P × Y

means the monetary base times the velocity of money is equal to the price level times real output. The right-hand side of the equation is the dollar value of GDP. If we divide both sides by the monetary base, we get a definition of the velocity of money:

V = (P × Y) / M

We can think of the velocity of money as the average annual frequency a dollar turns over to purchase goods and services. If we believe that an increase in money supply results in an increase in GDP, we need to assume that the velocity of money does not decline.
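Although the post works in Wolfram|Alpha, the arithmetic is trivial to sketch in Python (the figures below are illustrative round numbers, not actual data):

```python
def velocity(nominal_gdp, monetary_base):
    """Velocity of money: nominal GDP (P x Y) divided by the monetary base M."""
    return nominal_gdp / monetary_base

# e.g. a $15 trillion nominal GDP over a $3 trillion monetary base means
# each base dollar turns over five times a year on average:
assert velocity(15e12, 3e12) == 5.0

# If the base doubles but GDP is unchanged, velocity halves -- the
# declining-velocity signature of a liquidity trap:
assert velocity(15e12, 6e12) == 2.5
```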

You can interact with the dynamic Computable Document Format (CDF) file below to investigate the relation between 3-Month Treasury Bill Yield versus the Velocity of Monetary Base. The bottom left-hand corner of the graph shows a combination of low interest rate and a declining velocity of money. That is where the liquidity trap happens.

As of 2010, it seems that we are in a liquidity trap. We await the data from 2011 to see a complete picture.

You can download the full source code and descriptive details of the Demonstration above at the Wolfram Demonstrations Project, which is an excellent source of interactive knowledge apps on finance and economics.

It is not the amount of debt that is the concern for economic well-being, it is how we spend our borrowed money. If the borrowed funds are spent in a productive way, it is good for our economy. Otherwise, they are just wasted funds that our future generations will have to pay for.

How can we tell if the funds are used wisely? One way is to calculate the stimulus multiplier. Another way to get a quick gauge is to look at our nation’s labor productivity and inventiveness. Let’s save that for another post.

]]>