
Splitting a Point with Mathematica and MathTensor: A Mathematica Memoir

In the past few years, there have been many significant anniversaries in the Mathematica world. This has made me think about my long personal history working with all things Mathematica. Here I present an account of how I got involved with this world, developed my part of it and continue to use it. I show what I think is a unique application that differs from the thousands of other applications in Mathematica or the Wolfram Language presented on the various Wolfram Research websites, Wolfram Community and elsewhere. Finally, I attempt to describe the physics of what I do. The historical part at the beginning, with its considerable name-dropping, can be skipped by those who want to read only about the technical or physics issues.

History and Physics

Autobiographically, this begins with me in high school in 1965 and a book by Peter Bergmann, one of Einstein’s former assistants at the Institute for Advanced Study in Princeton.

Introduction to the Theory of Relativity


I read the book line by line and cover to cover one summer and found a few typos. Naively and not knowing who Bergmann was exactly, I wrote a letter to him, pointing out the errors I had found. Weeks later, a kind but definitive letter came from him, pointing out that the book had been published years before and that he surely would have known about any problems by that time.

The section on finding solutions to Einstein’s gravitational field equations was particularly inspiring.

Einstein’s gravitational field equations
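For reference, in their most common form (with the cosmological constant omitted), the field equations read

R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} = 8 \pi G\, T_{\mu\nu} ,

where R_{\mu\nu} is the Ricci tensor, R the curvature scalar, g_{\mu\nu} the metric and T_{\mu\nu} the stress tensor.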


Anyone who has tried to find exact solutions to such very complicated, coupled, nonlinear partial differential equations knows that even the straightforward, static, spherically symmetric and empty space case requires some complex tensor algebra to get to the differential equations themselves. Although this has always been difficult, Schwarzschild, Kerr, Gödel, Kaluza–Klein, Robertson–Walker, Taub–NUT and others did find now-famous solutions.

My initial fascination with such solutions led me to contact John Archibald Wheeler and his student Brendan Godfrey during an APS conference in Chicago in the late 1960s. Godfrey was doing his dissertation on exact solutions. These men encouraged me to follow my passion for studying such solutions.

It quickly became apparent that solutions were rare. Finding even the well-known ones was challenging and error prone. This led me to think of my other obsession: computer programming. Back in the mid-60s, access to any computer for a high-school student was very unusual. However, I was lucky. My hometown, Moline, Illinois, was the headquarters of the farm implement company John Deere. One of its main engineering groups worked there. At the time, the engineers wanted to convince their boss to fund new IBM computers to be used—not for business, but for engineering simulations. They had been doing some of this on analog computers, but felt digital was the future. The boss was concerned that the time and cost needed for educating his employees in Fortran and using punch cards, not to mention buying the IBM hardware, would take them away from pressing design work. They got the idea to bring in high-school students, give them a Fortran book and access to a computer, and then see if they could learn to code relatively quickly. If teenagers could understand it, engineers would be able to do it even more easily.

The engineers called up the science and math departments at Moline High School and asked if MHS had any students who might be interested. The school passed the question along, and about 10 students said yes. I and the others showed up for an evening meeting at Deere, and the engineers presented their proposal to give some programming tutorials. If I remember correctly, perhaps five came for the first tutorial. Only one appeared for the second session—me.

The engineers wanted me to simulate the problem of a tractor going over a bumpy field. The tractor had only one spring and shock absorber to protect the driver above. I was to solve the damped harmonic oscillator differential equation and plot the motion of the driver. Today this is a straightforward task. For example:

Manipulate
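The original interactive example is not reproduced here, but a minimal sketch along the same lines, solving a damped harmonic oscillator with NDSolve inside Manipulate (the parameter names, ranges and initial conditions are my own choices, not those of the original), would be:

(* sketch: driver height x[t] for a damped oscillator, with interactive
   damping and stiffness controls *)
Manipulate[
 Module[{sol},
  sol = NDSolve[{x''[t] + 2 c x'[t] + k x[t] == 0, x[0] == 1, x'[0] == 0},
    x, {t, 0, 20}];
  Plot[Evaluate[x[t] /. sol], {t, 0, 20}, PlotRange -> {-1.1, 1.1},
   AxesLabel -> {"t", "x(t)"}]],
 {{c, 0.3, "damping"}, 0, 2},
 {{k, 1, "stiffness"}, 0.1, 5}]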


We would have killed for such miraculous futuristic software then. In any case, I first learned about what kind of solution was needed by programming their analog computer, which was similar to this one, connected to a pen plotter:

Analog computer


This proved to be a lot of fun and relatively simple to do, but how could I do it digitally? I had to figure out how to solve a differential equation numerically on the IBM and print out a plot on a line printer. After some weeks of work, my study of numerical methods and Fortran programming finally led to plots similar to the analog output results. The engineers were happy. I was thanked and given the Fortran book. Eventually I attended a nice “honors” lunch at the Deere headquarters building. I have no idea how the engineers used computer programming afterward.

When I went to Augustana College in Illinois, my experiences at Deere turned out to be very valuable. For the first time in 1967, the college offered a course in Fortran programming for students and permitted the faculty to use the college’s small IBM computer for their research in the evenings. I was hired as the first student to run the computer, taking in decks of punch cards, running them through and returning the printout. It was the best-paying job on campus and gave me free access to any programs I wanted to run. I tried to find a way to do symbolic computing to help find solutions to Einstein’s equations, but nothing I tried in Fortran would help. I wrote some game programs, composed some computer-generated music and did some planetary orbit simulations, but no relativity. The physics and math faculties at Augustana were remarkably supportive, allowing me to teach a seminar course in relativity theory and give guest lectures in the group theory and topology courses. I also taught a programming course for them.

A turning point came in the summer of 1970 when I was invited to an undergraduate summer program at Iowa State University under the sponsorship of the then-Atomic Energy Commission (AEC). I was working on trying to understand Misner and Wheeler’s notion of an “already unified field theory.” I wrote my first-ever paper on the subject and submitted it to a journal. It was rejected with a very kind and encouraging referee report, so I was not unhappy. What I got most out of that summer came from one of the administrators in the computer center there. Somehow, he had heard about my interest in relativity and using computers. He told me about FORMAC, which was released in late 1964 and was available on IBM mainframes. I got a copy of the manual for this system, one of the first “computer algebra” programs, an extension of Fortran. When I got back to college, I was able to get it installed on the small IBM machine we had. (I think it was an 1130 with a modem connected to a 360 system at the University of Iowa.) I was fascinated by the prospects of finding a way to do tensor calculus by computer, but FORMAC would not allow that.

After that, I applied to graduate school, emphasizing a desire to do research in gravitation using computers. It turned out that my timing was perfect. Bryce DeWitt, then the renowned head of the Institute of Field Physics at the University of North Carolina at Chapel Hill, was looking for three new research graduate students to do the work on computer simulations of black hole collisions. He hired Larry Smarr, Elaine Tsiang and me to work on this from day one. In January of 1972, we moved from Chapel Hill to the Center for Relativity Theory at the University of Texas at Austin, where DeWitt took over as director. We worked hard for the first two years to discover numerical ways to model the spacetime around a black hole. We started programming with punch cards fed from a terminal to the campus CDC mainframe. Taking the cards and printouts up and down various floors in Moore Hall got tiring. Funds were found to buy a terminal with a screen and keyboard, which were connected to the mainframe to do the programming and get the printouts. It was my job to write, but mainly debug, the code—since my years of helping students and faculty fix their programs gave me a good eye for noticing errors.

This effort contributed to some of the very early theoretical ideas and computer algorithms for what became the LIGO gravitational wave discoveries many decades later. However, after taking DeWitt’s “theory of everything” course on quantum field theory in curved spacetimes and quantum gravity, I decided to do my Ph.D. work in these areas. I chose to find ways to expand on the work in chapter 17 of his famous book, Dynamical Theory of Groups and Fields. The book was based on his lectures at the Les Houches Summer School in 1963, founded and run by the remarkable Cécile DeWitt-Morette, also a professor at Texas. In chapter 17 and other parts of the book, he outlined what is now known as the Schwinger–DeWitt proper time algorithm for doing regularization of infinite quantities in quantum field theory in curved spacetimes.

Dynamical Theory of Groups and Fields (Documents on Modern Physics)


What is regularization? Suppose there is a quantum scalar field (or any other field, but for simplicity, choose a scalar field) in curved spacetime. We might want to see how that field interacts with the spacetime, say, of a black hole. Once we define a vacuum state, we can try to find the vacuum expectation value (VEV) of the stress tensor for the field in that vacuum. The VEV can then be put into the right side of the Einstein field equations to see how that changes the background gravitational field. This is called the back reaction problem. What typically happens is that the expectation value is infinite. We want to get rid of the infinities somehow in order to extract finite results. The process of extracting the infinities is called regularization. The VEV is broken up into an infinite and a finite part. The next step is to absorb or throw away the infinite part in some physically meaningful way. This process is called renormalization.
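Symbolically, the back reaction problem amounts to solving the semiclassical field equations

G_{\mu\nu} = 8 \pi G\, \langle T_{\mu\nu} \rangle_{\mathrm{ren}} ,

with the renormalized VEV of the stress tensor acting as the source.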

There are numerous ways to do this that go back decades. The regularization method that the Schwinger–DeWitt algorithm uses is “point-splitting” or “point-separation.” What this does is write the VEV of the stress tensor in terms of something called the Green’s function, which abstractly satisfies

Schwinger–DeWitt algorithm
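Schematically (the overall sign and the covariant normalization of the delta function depend on one's conventions), the equation is

F\, G(x, x') = - \delta(x, x') .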


where the right side is the Dirac delta function and G(x, x′) is the two-point Green’s function. Here we take F to be the differential operator for the coupled massive scalar field. (We have a so-called conformal field when the coupling parameter ξ is 1/6 and the mass is zero.)

F to be the differential operator
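For the coupled massive scalar field, F has the standard form (again, up to sign conventions)

F = \Box - m^{2} - \xi R ,

with \Box the covariant d'Alembertian, m the mass and \xi the curvature coupling.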


This operator comes from the basic scalar field equation, derived from an action S:

Basic scalar field equations from an action S
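Up to overall sign and metric-signature conventions, this is the usual nonminimally coupled scalar action,

S = -\frac{1}{2} \int d^{4}x \, \sqrt{-g} \left( g^{\mu\nu} \partial_{\mu}\phi \, \partial_{\nu}\phi + m^{2}\phi^{2} + \xi R \phi^{2} \right) ,

whose variation with respect to \phi gives F\phi = 0.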


DeWitt, following the flat-space work of Schwinger, was able to show that the Green’s function in a curved spacetime could be written as an integral of a sum of terms:

Green’s function in a curved spacetime
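Written schematically in Euclidean signature, with convention-dependent factors suppressed, the representation is

G(x, x') \sim \frac{\Delta^{1/2}(x, x')}{(4\pi)^{2}} \int_{0}^{\infty} \frac{ds}{s^{2}} \, e^{-m^{2} s - \sigma(x, x')/(2s)} \sum_{n=0}^{\infty} a_{n}(x, x') \, s^{n} .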


The two key ingredients in this equation are σ(x, x′), the so-called bi-scalar of the geodetic interval, and the a-coefficients, derived from recursion relations with one boundary condition for the zeroth one. To maintain manifest covariance in a general curved spacetime, σ is used. It measures the separation of the points x and x′ along the geodesic between them. In the flat-space limit, σ is just half the square of the straight-line distance between the two points. As the two points come together, σ goes to zero. The quantity Δ is the van Vleck–Morette determinant, which is related to σ. These are the recursion relations that come from F acting on the expansion:

Recursion relations


Recursion relations
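In DeWitt's notation, and up to sign and convention differences, the boundary condition and recursion read

\sigma^{;\mu} a_{0;\mu} = 0 , \qquad [a_{0}] = 1 ,

\sigma^{;\mu} a_{n+1;\mu} + (n+1)\, a_{n+1} = \Delta^{-1/2} \left( \Delta^{1/2} a_{n} \right)^{;\mu}{}_{;\mu} - \xi R\, a_{n} .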


For example, if we set n = 0 and look at the terms that remain when the two points coincide, we find the first term vanishes and the second term is just [a₁], where the brackets indicate the coincidence limit. The final term is the ξR piece. The third term shows that we will need the summed second covariant derivative of Δ^(1/2). We have to take two derivatives of the recursion relation to get that. This leads to more complicated derivatives.

DeWitt and I were able to show that to get the first (so-called one-loop) divergences in the VEV of the stress tensor, we would need at least the coincidence limits of the a-coefficients and their derivatives. This requires the coincidence limit of six derivatives of σ. As I will show later, such calculations generate hundreds of terms with many indices to keep track of.

DeWitt was admired for his incredible depth of knowledge in physics and also for his ability to execute by hand enormously long and complex calculations with no errors. I heard stories from some of his colleagues about what he could do over a weekend that might take them weeks or months. I learned many of his techniques for accurate symbolic calculation, but I convinced him that we should try to use a computer to automate it all. In particular, if we wanted to extend to higher-spin fields or more loops, we would be generating thousands, if not tens of thousands, of tensor structures—too many for even DeWitt to do perfectly, and perfection was required.

I told him about FORMAC and how it would not work. He told me he knew of someone who might be writing software to do such calculations. This potential contact was Martinus Veltman, a Dutch physicist then at the University of Michigan-Ann Arbor. Bryce offered to introduce me. Veltman was very kind and willing to help by sending me a copy of his Schoonschip computer algebra system that would run on the mainframe computers in Austin. I got it installed, but even after learning how to code with it, I could not get it to do what we needed for the a-coefficient work. I gave up on computerizing the calculations and spent six months, eighteen hours a day, seven days a week doing the calculations by hand. I did them independently five separate times to ensure I got the same answers.

I finished my dissertation, finding the divergences and finite terms in the scalar VEV of the stress tensor. After leaving Austin, I went to King’s College in London for a year and published the dissertation results in the following paper:

The covariant point-separation method


The ultimate result in this paper was the following elaborate set of equations. The first term shows what is called a “quartic divergence”—that is, as the two points come together, that term diverges as the inverse of the distance along the geodesic to the fourth power. We also see quadratic, logarithmic and linear divergences. All of these will need to be renormalized away in some fashion:

Quadratic, logarithmic and linear divergences


Note the many subtle coefficients. If even one of these is not computed correctly, the physics results can be utterly wrong. One of the most important results of this work was to confirm the existence of something called a trace anomaly. It was assumed that the trace of the stress tensor (roughly, the sum of its diagonal elements) would be zero for a massless, conformally invariant scalar field. But Capper and Duff in the UK had shown that the trace was not zero—that is, it was anomalous in the dimensional regularization scheme. My calculations had shown the same thing. The finite term in the previously shown equation gave the same result as Capper and Duff’s work did, but in an entirely new way. Soon after finding this fact, Steve Fulling and I showed that the trace anomaly of the stress tensor was precisely equivalent to the just-discovered Hawking radiation idea in two dimensions and also contributed to it in four dimensions. In the last years of the 1970s, Duff and I showed how the a-coefficient coincidence limits were related to other anomalies and index theorems in supergravity theories.

One perhaps-amusing anecdote. I had the rare opportunity to discuss these calculations with the famous mathematician Paul Erdős. I had dinner with him at a friend’s home in York, England, and a few days later I happened to encounter him again walking on the campus of Durham University in the UK during a break from a conference there. I told him about the Green’s function expansions and the numerical ratios I was getting in the results. The number 2880 appeared in some of the denominators along the way. He immediately understood why this was and suggested some historical series structures I should consider. Such an event stays in one’s mind.

For the next few years, I continued to watch the development of symbolic manipulation software but saw nothing that would help. The main issue was comparing one complicated tensor term with another and combining them when possible. When I became a physics professor in 1980 at UNC-Chapel Hill, I decided it was time to start programming a system to do the quantum field theory work. Computer development had finally reached the personal computer level, and I hoped I might learn how to use one for my efforts. I contacted Veltman again, and he said that he had not done anything new that might help, but a young student at Caltech was doing something that sounded like what I might use. This was Stephen Wolfram. Veltman was to be awarded the Nobel Prize in Physics in 1999 along with Gerard ’t Hooft.

I contacted Stephen, and he strongly suggested that a Unix system with a good C compiler would be best. He was working on a system with robust pattern-matching functionality—which he knew would become something I could use. So, I started looking for such a computer in 1982. After much research, I finally found a new startup company in a small office area in Mountain View, California, called Sun Microsystems. They were the only Silicon Valley company that seemed very enthusiastic about university-level scientific research software development. They offered me what I think was their first academic discount. After applying to the National Science Foundation and getting an equipment grant, I obtained one of the first Sun-1 systems, the first in North Carolina as far as I know. Stephen also got some Sun machines and soon developed his SMP system on them.

While I waited for SMP, I started writing my code to do the coincidence limit calculations. I kept in contact with Stephen; eventually, we both ended up at the University of Illinois Urbana-Champaign. I was there to help set up the Sun computer network and collect relevant Unix software for the scientists using the NCSA Cray supercomputer. I continued to write C code and did manage to make some progress on generating coincidence limits, but not enough to combine and simplify terms. I still needed sophisticated pattern matching. The following printout image (hanging from a tree in my yard) shows one equation about 30 pages long. Each term has many tensor indices. Summing two tensor indices, for example, would require finding a term on, say, page seven and combining it with a term on page 29; on top of that, the rules would generate curvature terms, increasing the length and complexity of some of the equations. Higher-order calculations could create tens of thousands of intermediate terms or more:

Printed equation


One day, Stephen contacted me and asked if I would like to try his new system with a more advanced pattern-matching scheme. He gave me what may have been one of the first alpha tests of what eventually became Mathematica. It ran on my Sun workstation. Within two weeks, I had written code that did my a-coefficient calculations far better and faster than the C code I had spent years trying to write. I was hooked. In 1988 I was asked to present my work at the public introduction of Mathematica at a press conference in Silicon Valley. I sat next to Steve Jobs at the event. He was there to show the running of Mathematica on his new NeXT computer. I was a beta tester for a NeXT machine with Mathematica—so secret at the time that it was hidden in my home office. I was encouraged to create a tensor analysis system to distribute to other researchers.

Later I found out that my longtime colleague, Leonard Parker at the University of Wisconsin-Milwaukee and a pioneer in quantum field theory in curved spacetime development, was also working on his version of a tensor analysis system for Mathematica. We each had our ideas on what was needed and decided to join forces and write a new system. After a few years of development and the writing of a book on how to use this system, we started a two-person S corporation in 1994 to sell it to support the work. We expected that maybe a few hundred researchers in gravitation and relativity might buy a copy. We ended up, over two decades, selling a few thousand copies. Supporting that many users became significant work. We had not guessed that engineers, physicists in particle physics and elsewhere and even an eye doctor working on eye curvature might need tensor analysis and Riemann tensor computations:

MathTensor


Example Calculation in MathTensor

At this point, I will show one of the first and simplest calculations I had to do by hand in my research in graduate school, but now do with MathTensor. The calculations of the a-coefficients, their derivatives and the stress tensor divergent parts are much longer. The details of these calculations are far too long to show here.

Over the years since the first release, the Mathematica developers added functions like Symmetrize, RiemannR and others that conflicted with some of our function names. Rather than redo a couple hundred files of MathTensor code, we now just overwrite the Mathematica functions in an initialization file:

Get
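As a sketch of the idea only (the symbol list and mechanics of the real initialization file may differ), the overwriting amounts to something like:

(* sketch: release and clear the later built-in symbols that collide with
   MathTensor names before the package defines its own versions *)
Unprotect[Symmetrize, RiemannR];
ClearAll[Symmetrize, RiemannR];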


The main loader file adds in about two hundred files. These are encrypted and require a machine-specific user password file. Some source code is available, but most is not. Basic information about the software and loading is printed out.

Load MathTensor:

Mathtensor.m
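Loading is a single Get call (the file name and path here are illustrative; the real installation uses its own loader and password file):

(* load the MathTensor package from the current directory or $Path *)
<< MathTensor.m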


Set basic definitions and properties. I work in four dimensions and set a few constants to 1:

Dimension = 4; G = 1; Lightc = 1;


The DefineTensor function can take three arguments. The first is the tensor, the second is the symbol we want it to be printed as and a third gives the symmetries of the tensor’s indices. σ is a scalar with no indices:

DefineTensor
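As a sketch only (the symmetry specification shown in the third argument is a placeholder, not necessarily what the original notebook used):

(* sketch: define sigma as a MathTensor object printed as the symbol σ *)
DefineTensor[sigma, "\[Sigma]", {{1}, 1}]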


Next are the definitions of σ’s properties and the first three known derivative coincidence limits:

SigmaEqn
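The defining property of the geodetic interval is the identity σ^{;μ} σ_{;μ} = 2σ. A sketch of how it might be encoded as an expression that is later set to zero (the name SigmaEqn comes from the caption above; the exact form in the original code is not reproduced):

(* sketch: the world-function identity, written so that SigmaEqn == 0;
   the paired indices la and ua denote a contraction in MathTensor *)
SigmaEqn = CD[sigma, la] CD[sigma, ua] - 2 sigma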


The RuleUnique function defines a rule name, the object the rule searches for and the value substituted when the object is found. Here, we know that σ goes to zero when the two points coincide, as do its first and third derivatives. The covariant (lower) indices in MathTensor are labeled with an “l” in front, while contravariant (upper) indices start with “u.” The coincidence limit of the two-derivative case is the metric tensor, Metricg[la,lb], already defined in MathTensor:

RuleUnique


RuleUnique


RuleUnique
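A sketch of what the three rules might look like, following the argument pattern just described (rule name, pattern searched for, replacement); the exact forms in the original notebook may differ:

(* sketch: coincidence-limit rules for one, two and three derivatives of sigma *)
RuleUnique[sigmaonerule, CD[sigma, la], 0]
RuleUnique[sigmatworule, CD[sigma, la, lb], Metricg[la, lb]]
RuleUnique[sigmathreerule, CD[sigma, la, lb, lc], 0]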


A list of rules, sigmarules, is created, and newly derived rules are added to the list as we take more derivatives:

sigmarules := {sigmaonerule, sigmatworule, sigmathreerule}


Take the previously shown definition equation, which we will set to zero eventually. Note that indices are printed in their correct up and down form in the output cells:

SigmaEqn


The Canonicalize function looks at all the tensor index summations in an equation and renames the summed indices: indices in the range la to lo (or ua to uo) are replaced by later letters (lp or beyond), which are normally reserved for summations only. If two terms happen to end up with the same summations, they will be combined:

Canonicalize


Take the first covariant derivative, with the MathTensor CD function, of the definition and continue until we have four derivatives. First derivative:

CD


The Expand function can help to show all individual terms:

Expand



Second covariant derivative:

CD



Expand



Third covariant derivative:

CD



Expand



And finally, the fourth derivative:

CD



Expand
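Schematically, the chain of derivatives above might be written as follows (the names d1 through d4 are introduced here for the sketch only):

(* sketch: successive covariant derivatives of the defining equation,
   expanded so that individual terms are visible *)
d1 = Expand[CD[SigmaEqn, lb]];
d2 = Expand[CD[d1, lc]];
d3 = Expand[CD[d2, ld]];
d4 = Expand[CD[d3, le]];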



The MathTensor ApplyRules function knows how to apply rules we define to each term in a tensorial equation. After using the first set of rules, we get the next result. The first and third derivatives of σ are zero, and the second-derivative rule lowers indices. We want to reorder the derivative indices alphabetically so we can combine the four-derivative terms:

ApplyRules



The OrderCD function looks at the derivatives, reorders them alphabetically and generates the needed Riemann tensor terms. We apply sigmarules again and then Solve for the four-derivative limit:

OrderCD



ApplyRules


This gives us the four-derivative coincidence limit rule we want:

Solve
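Putting the last three steps together, a sketch of the pattern (with r4 and sol4 as my own names, and the argument handed to Solve only indicative of the four-derivative term being solved for) is:

(* sketch: apply the known rules, alphabetize derivative indices
   (generating Riemann terms), apply the rules again, then solve for the
   remaining four-derivative term *)
r4 = ApplyRules[d4, sigmarules];
r4 = ApplyRules[OrderCD[r4], sigmarules];
sol4 = Solve[r4 == 0, CD[sigma, lb, lc, ld, le]]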



Next, we define the new four-derivative rule and add it to the total sigmarules list. The RuleUnique function in MathTensor takes care of any summed indices:

RuleUnique


sigmarules := {sigmaonerule, sigmatworule, sigmathreerule,  sigmafourrule}


Move on to the five-derivative case and apply the same set of operations as before. Take the new derivative of the four-derivative term. We begin to see that the equations are getting complicated:

CD



Again, apply the most recent rules, order the indices, apply rules again and solve for the coincidence limit of five covariant derivatives:

ApplyRules



OrderCD



ApplyRules



We now have the five-derivative coincidence limit, which we add to the full rules list:

Solve



RuleUnique


sigmarules := {sigmaonerule, sigmatworule, sigmathreerule,  sigmafourrule, sigmafiverule}
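Since the five- and six-derivative passes repeat the same sequence of operations, one could, purely as an illustration (this is not how the original notebook was organized), wrap a single pass in a small helper:

(* illustrative helper: differentiate, apply the known coincidence-limit
   rules, alphabetize derivative indices, apply the rules again, and
   canonicalize the summation indices *)
nextStep[expr_, rules_, newIndex_] := Module[{d},
  d = Expand[CD[expr, newIndex]];
  d = ApplyRules[d, rules];
  d = ApplyRules[OrderCD[d], rules];
  Canonicalize[d]]

Each new coincidence-limit rule found from such a pass is then added to sigmarules before the next one, exactly as in the steps above.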


Finally, take the sixth derivative and apply all the same operations. This shows just how careful we have to be to make sure all indices are in exactly the right places. As I have said, doing this by hand can take weeks. One index out of place can invalidate the entire result:

CD



ApplyRules




OrderCD





ApplyRules




Expand




Canonicalize will rename indices so that terms can combine:

Canonicalize




We end up with this long expression for the six-derivative limit. Add it to the full list of rules for σ:

Solve



We have the final set of rules we need to calculate the a-coefficient coincidence limit:

RuleUnique


sigmarules := {sigmaonerule, sigmatworule, sigmathreerule,  sigmafourrule, sigmafiverule, sigmasixrule}


One of the objects we need is obtained by summing the pairs of indices:

%%% /. {lb -> ua, ld -> uc, lf -> ue}



Canonicalize



The Tsimplify function is powerful. It will find ways to combine terms by renaming indices:

Tsimplify



MathTensor has a large set of rules—the RiemannRules—that recognize the symmetries of the Riemann tensor, its products and its derivatives. The ApplyRules function tells you which rules were used. Here are some examples of the rules built into MathTensor for two covariant derivatives of the Riemann tensor. If ApplyRules “sees” the structure on the left side of one of the rules, it substitutes the right side and renames the summation indices appropriately:

The RiemannRules


ApplyRules


All that work takes just seconds with MathTensor and gives this correct intermediate result:

Intermediate result with RiemannRules


We can carry out all of the coincidence limits for σ, Δ, the a-coefficients and all their covariant derivatives. We then plug those into the Green’s function expression for the stress tensor VEV and get the divergence structure shown earlier. From there, we can plug in a given spacetime metric and create a renormalized finite stress tensor. We put this into the right side of Einstein’s equations and investigate the back reaction problem. An example of this is discussed in the paper with Fulling. Since then, hundreds of papers have been published showing how point-splitting can be used.

Postscript

Going back to computing exact solutions, Parker and I added the Components function to the MathTensor system. With it, an arbitrary metric structure can be proposed, and all the components of the Riemann tensor, Ricci tensor and Riemann scalar can be computed quickly, along with the affine connections. The Einstein differential equations can then be obtained and potentially solved via Mathematica’s equation-solving routines.

Over the years, other researchers have found clever ways to compute the a-coefficients to much higher order, but as far as I know, no one has extended the detailed structures of the stress tensor to such high levels.

Part of the problem is that the higher we go in products of the Riemann tensor, Ricci tensor, Riemann scalar and their derivatives, the harder it is to find the related rules. We want to create a basic set of products in terms of which all possible terms can be written as a linear combination. In 1980 I wondered what would happen if we tried to add torsion to our spacetimes. I wrote a paper on this while at the ITP at UCSB. It was clear that things would get out of hand very quickly. We needed more sophisticated math to figure this out. This involves very detailed group-theoretical arguments. Wybourne wrote software called Schur to help do the Lie group and symmetric group calculations that might be useful. I helped Wybourne build and sell the software in the 1990s until his passing in 2003. Schur is now open source. I hoped to create a Mathematica version of Schur, but I have never gotten to it. Maybe others already have.

This work led to projects and consulting with Wolfram Research and Sun Microsystems (and eventually Oracle, which bought Sun), lasting four decades to date. In 1988 I started what ultimately became MathGroup for online discussions of Mathematica. I wrote about this in June of 2009. In addition, the open-source work for Sun and its Solaris operating system became www.sunfreeware.com, which is now www.unixpackages.com.

To finish, I want to express my deep gratitude to Stephen and all the Wolfram Research people who have made this work highly rewarding and fun.

Bibliography

Christensen, S. M. 1975. “Covariant Coordinate Space Methods for Calculations in the Quantum Theory of Gravity.” Ph.D. diss., University of Texas at Austin.

Christensen, S. M. 1976. “Vacuum Expectation Value of the Stress Tensor in an Arbitrary Curved Background: The Covariant Point-Separation Method.” Physical Review D 14, no. 10: 2490. https://doi.org/10.1103/PhysRevD.14.2490.

Christensen, S. M., and S. A. Fulling. 1977. “Trace Anomalies and the Hawking Effect.” Physical Review D 15, no. 8: 2088. https://doi.org/10.1103/PhysRevD.15.2088.

Christensen, S. M. 1978. “Regularization, Renormalization, and Covariant Geodesic Point Separation.” Physical Review D 17, no. 4: 946. https://doi.org/10.1103/PhysRevD.17.946.

Christensen, S. M., and M. J. Duff. 1979. “New Gravitational Index Theorems and Super Theorems.” Nuclear Physics B 154, no. 2: 301–342. https://doi.org/10.1016/0550-3213(79)90516-9.

Christensen, S. M. 1980. “Second- and Fourth-Order Invariants on Curved Manifolds with Torsion.” Journal of Physics A: Mathematical and General 13, no. 9: 3001. https://doi.org/10.1088/0305-4470/13/9/027.

Christensen, S. M. 1984. “The World of the Schwinger–DeWitt Algorithm.” In Quantum Theory of Gravity, Essays in Honor of the 60th Birthday of Bryce S. DeWitt, edited by S. M. Christensen. Boca Raton: CRC Press.

Christensen, S. M. 2019. “The Schwinger–DeWitt Proper Time Algorithm: A History.” In Proceedings of the Julian Schwinger Centennial Conference, edited by Berthold-Georg Englert. Singapore: World Scientific. https://doi.org/10.1142/11602.

DeWitt, B. S. 1965. Dynamical Theory of Groups and Fields. Philadelphia: Gordon & Breach.

Duff, M. J. 1994. “Twenty Years of the Weyl Anomaly.” Classical and Quantum Gravity 11, no. 6: 1387. https://doi.org/10.1088/0264-9381/11/6/004.

Fulling, S. A., R. C. King, B. G. Wybourne and C. J. Cummins. 1992. “Normal Forms for Tensor Polynomials. I. The Riemann Tensor.” Classical and Quantum Gravity 9, no. 5: 1151. https://doi.org/10.1088/0264-9381/9/5/003.

Parker, L., and S. M. Christensen. 1994. MathTensor: A System for Doing Tensor Analysis by Computer. Boston: Addison-Wesley Professional.

Editor’s Note: Information on the full functionality of the MathTensor package, and on how to obtain it, is available by emailing the author at sunfreeware@gmail.com. The MathTensor book is available on Amazon.

