The upcoming August 21, 2017, total solar eclipse is a fascinating event in its own right. It’s also interesting to note that on April 8, 2024, there will be another total solar eclipse whose path will cut nearly perpendicular to the one this year.
With a bit of styling work and zooming in, you can see that the city of Carbondale, Illinois, is very close to this crossing point. If you live there, you will be able to see a total solar eclipse twice in only seven years.
Let’s aim for additional precision in our results and compute where the intersection of these two lines is. First, we request the two paths and remove the GeoPosition heads.
Then we can use RegionIntersection to find the intersection of the paths and convert the result to a GeoPosition.
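A hedged sketch of that computation follows. The property name "TotalPhaseCenterLine" and the date arguments reflect the two eclipses discussed; the exact shape of RegionIntersection's output can vary, so treat the final conversion step as an assumption to verify.

```wolfram
(* Center lines of the two eclipses *)
line2017 = SolarEclipse[DateObject[{2017, 8, 21}], "TotalPhaseCenterLine", EclipseType -> "Total"];
line2024 = SolarEclipse[DateObject[{2024, 4, 8}], "TotalPhaseCenterLine", EclipseType -> "Total"];

(* Remove the GeoPosition heads to get plain {lat, lon} point lists *)
{pts2017, pts2024} = {line2017, line2024} /. GeoPosition[p_] :> p;

(* Intersect the two paths; the result should be a point near Carbondale, IL *)
crossing = RegionIntersection[Line[pts2017], Line[pts2024]];
GeoPosition[First @@ crossing]
```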
By zooming in even further and getting rid of some of the style elements that are not useful at high zoom levels, we can try to pin down the crossing point of the two eclipse center paths.
It appears that the optimal place to be to see the longest eclipse for both locations would be just southwest of Carbondale, Illinois, near the east side of Cedar Lake along South Poplar Curve Road where it meets with Salem Road—although anywhere in the above image would see a total solar eclipse for both of the eclipses.
Because of this crossing point, I expect a lot of people will be planning to observe both the 2017 and 2024 solar eclipses. Don’t wait until the last minute to plan your trip!
So how often do eclipse paths intersect? We can find out with a little bit of data exploration. First, we need to get the dates, lines and types of all eclipses within a range of dates. Let’s limit ourselves to 40 years in the past to 40 years in the future for a total of about 80 years.
Keep only the eclipses for which path data is available.
Generate a set of all pairs of eclipses.
Now we define a function to test if a given pair of eclipse paths intersect.
Finally, we apply that function to all pairs of eclipses and keep only those for which an intersection occurs.
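Sketched in code, the whole pipeline might look like the following. The date range and the property name are assumptions based on the steps described above; SolarEclipse accepts a `{start, end, All}` specification for enumerating eclipses in a range.

```wolfram
(* All solar eclipses roughly 40 years back to 40 years ahead *)
dates = SolarEclipse[{DateObject[{1977, 1, 1}], DateObject[{2057, 1, 1}], All}];

(* Pair each date with its center-line path, keeping only those with path data *)
paths = Select[
   {#, SolarEclipse[#, "TotalPhaseCenterLine"]} & /@ dates,
   FreeQ[#[[2]], _Missing] &];

(* All unordered pairs of eclipses *)
pairs = Subsets[paths, {2}];

(* Test whether the two paths in a pair cross *)
intersectingQ[{{_, path1_}, {_, path2_}}] :=
  ! MatchQ[
    RegionIntersection[Line[path1 /. GeoPosition[p_] :> p],
      Line[path2 /. GeoPosition[p_] :> p]],
    _EmptyRegion];

(* Keep only the intersecting pairs and count them *)
crossingPairs = Select[pairs, intersectingQ];
Length[crossingPairs]
```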
It turns out that a little over one hundred pairs of eclipse paths intersect during this time span.
We can visualize the distribution of these eclipse paths to see that many occur over the oceans. The chances of an intersection being within driving distance of a given land location is far lower.
So take advantage of the 2017 and 2024 eclipses if you live near Carbondale, Illinois! It’s unlikely you will see such an occurrence anytime soon without making a lot more effort.
On August 21, 2017, an event will happen across parts of the Western Hemisphere that has not been seen by most people in their lifetimes. A total eclipse of the Sun will sweep across the face of the United States and nearby oceans. Although eclipses of this type are not uncommon across the world, the chance of one happening near you is quite small and is often a once-in-a-lifetime event unless you happen to travel the world regularly. This year, the total eclipse will be within driving distance of most people in the lower 48 states.
Total eclipses of the Sun are a result of the Moon moving in front of the Sun, from the point of view of an observer on the Earth. The Moon’s shadow is quite small and only makes contact with the Earth’s surface in a small region, as shown in the following illustration.
We can make use of 3D graphics in the Wolfram Language to create a more realistic visualization of this event. First, we will want to make use of a texture to make the Earth look more realistic.
We can apply the texture to a rotated spherical surface as follows.
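One way to sketch this step (the filename is illustrative; any equirectangular, longitude-latitude map image of the Earth will do):

```wolfram
(* Assumption: "earth-map.jpg" is an equirectangular map image on disk *)
earthImage = Import["earth-map.jpg"];

(* Wrap the texture around a parametric sphere; slots #4 and #5 are the
   u, v surface parameters, which map directly to longitude and latitude *)
earth = ParametricPlot3D[
  {Cos[u] Cos[v], Sin[u] Cos[v], Sin[v]}, {u, 0, 2 Pi}, {v, -Pi/2, Pi/2},
  Mesh -> None, Lighting -> "Neutral",
  PlotStyle -> Texture[earthImage],
  TextureCoordinateFunction -> ({#4, #5} &)]
```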
We represent the Earth’s shadow as a cone.
The Moon can be represented by a simple Sphere that is offset from the center of the scene, while its orbit is a simple dashed 3D path. Both are parameterized since the Moon’s orbit will precess in time. It’s useful to be able to supply values to these functions to get the shadow to line up where we want.
As with the Earth’s shadow, we represent the Moon’s shadow as a cone.
Finally, we create some additional scene elements for use as annotations.
Now we just need to assemble the scene. We want the Moon to be directly in line with the Sun, so we use 0° as one of the parameters to achieve that. To precess the orbit in such a way that the shadow falls on North America, we use 70°. The rest is just styling information.
The Moon’s orbit is both eccentric and inclined. Due to the eccentric orbit, sometimes the Moon is farther away from the Earth than at other times; due to the orbital inclination, it may be above or below the plane of the Earth’s orbit around the Sun. Usually when the Moon passes “between” the Earth and the Sun, it is either “above” or “below” the Sun from the point of view of an observer on the Earth’s surface. The geometry is affected by other effects as well but, from time to time, everything lines up and the Moon actually blocks part or all of the Sun’s disk. On August 21, 2017, the geometry will be “just right,” and from some places on Earth the Moon will cover at least part of the Sun.
Besides illustrating eclipse geometry, we can also make use of the Wolfram Language via GeoGraphics to create various maps showing where the eclipse is visible. With very little code, you can get elaborate results. For example, we can combine the functionality of SolarEclipse with GeoGraphics to show where the path of the 2017 total solar eclipse can be seen. Totality will be visible in a narrow strip that cuts right across the central United States.
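A minimal sketch of such a map might look like this (the "TotalPhasePolygon" property name is stated to the best of my knowledge and is worth checking against the SolarEclipse documentation):

```wolfram
(* The band of totality overlaid on a map of the United States *)
band = SolarEclipse[DateObject[{2017, 8, 21}], "TotalPhasePolygon", EclipseType -> "Total"];
GeoGraphics[{GeoStyling[Opacity[0.4]], Red, band},
  GeoRange -> Entity["Country", "UnitedStates"]]
```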
So which states are going to be able to see the total solar eclipse? The following example can be used to determine that. First, we retrieve the polygon corresponding to the total phase for the upcoming eclipse.
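GeoEntities can then select the states that intersect that polygon. A hedged sketch:

```wolfram
(* Retrieve the polygon of totality, then the states it touches *)
band = SolarEclipse[DateObject[{2017, 8, 21}], "TotalPhasePolygon", EclipseType -> "Total"];
states = GeoEntities[band, "USState"]
```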
Suppose you want to zoom in on a particular state to see a bit more detail. At this level, we are only interested in the path of totality and the center line. Once again, we use SolarEclipse to obtain the necessary elements.
Then we just use GeoGraphics to easily generate a map of the state in question—Wyoming, in this case.
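A sketch of the state-level map (the GeoRange entity specification and property names are assumptions in the spirit of the text):

```wolfram
(* Totality band and center line drawn over Wyoming *)
{band, center} = SolarEclipse[DateObject[{2017, 8, 21}], #, EclipseType -> "Total"] & /@
   {"TotalPhasePolygon", "TotalPhaseCenterLine"};
GeoGraphics[{GeoStyling[Opacity[0.3]], Red, band, Black, Dashed, center},
  GeoRange -> Entity["AdministrativeDivision", {"Wyoming", "UnitedStates"}]]
```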
We can make use of the Wolfram Data Repository to obtain additional eclipse information, such as timing of the eclipse at various locations.
We can use that data to construct annotated time marks for various points along the eclipse path.
Then we just combine the elements.
Of course, even if the eclipse is happening, there is no guarantee that you will be able to witness it. If the weather doesn’t cooperate, you will simply notice that it will get dark in the middle of the day. Using WeatherData, we can try to predict which areas are likely to have cloud cover on August 21. The following example is based on a similar Wolfram Community post.
The following retrieves all of the counties that intersect with the eclipse’s polygon bounds.
Most of the work is involved in looking at the "CloudCoverFraction" values for each county on August 21 for each year from 2001 to 2016 and finding the mean value for each county.
We can then use GeoRegionValuePlot to plot these values. In general, it appears that most areas along this path have relatively low cloud cover on August 21, based on historical data.
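The two steps above might be sketched as follows. This is a hedged outline, not the post's exact code: the WeatherData date specification and the use of county entities as WeatherData locations should be checked, and missing station values are simply dropped.

```wolfram
(* Counties touched by the polygon of totality *)
band = SolarEclipse[DateObject[{2017, 8, 21}], "TotalPhasePolygon", EclipseType -> "Total"];
counties = GeoEntities[band, "USCounty"];

(* Mean historical cloud cover for each county on August 21, 2001-2016 *)
meanCover[county_] := Mean @ DeleteMissing @ Table[
    WeatherData[county, "CloudCoverFraction", {y, 8, 21}], {y, 2001, 2016}];

(* Color each county by its mean cloud cover *)
GeoRegionValuePlot[# -> meanCover[#] & /@ counties]
```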
The total solar eclipse on August 21, 2017, is a big deal due to the fact that the path carries it across a large area of the United States. Make every effort to see it! Take the necessary safety precautions and wear eclipse-viewing glasses. If your kids are in school already, see if they are planning any viewing events. Just make sure to plan ahead, since traffic may be very heavy in areas near totality. Enjoy it!
We’re fascinated by artificial intelligence and machine learning, and Achim Zielesny’s second edition of From Curve Fitting to Machine Learning: An Illustrative Guide to Scientific Data Analysis and Computational Intelligence provides a great introduction to the increasingly necessary field of computational intelligence. This is an interactive and illustrative guide with all concepts and ideas outlined in a clear-cut manner, with graphically depicted plausibility arguments and a little elementary mathematics. Exploring topics such as two-dimensional curve fitting, multidimensional clustering and machine learning with neural networks or support vector machines, the subject-specific demonstrations are complemented with specific sections that address more fundamental questions like the relation between machine learning and human intelligence. Zielesny makes extensive use of Computational Intelligence Packages (CIP), a high-level function library developed with Mathematica’s programming language on top of Mathematica’s algorithms. Readers with programming skills may easily port or customize the provided code, so this book is particularly valuable to computer science students and scientific practitioners in industry and academia.
The Art of Programming in the Mathematica Software, third edition
Another gem for programmers and scientists who need to fine-tune and otherwise customize their Wolfram Language applications is the third edition of The Art of Programming in the Mathematica Software, by Victor Aladjev, Valery Boiko and Michael Shishakov. This text concentrates on procedural and functional programming. Experienced Wolfram Language programmers know the value of creating user tools: they can extend the most frequently used standard tools of the system, eliminate their shortcomings, add new features, and much more. Scientists and data analysts can then conduct even the most sophisticated work efficiently using the Wolfram Language. Likewise, professional programmers can use these techniques to develop more valuable products for their clients and employers. Included is the MathToolBox package with more than 930 tools; their freeware license is attached to the book.
Introduction to Mathematica with Applications
For a more basic introduction to Mathematica, readers may turn to Marian Mureşan’s Introduction to Mathematica with Applications. First exploring the numerous features within Mathematica, the book continues with more complex material. Chapters include topics such as sorting algorithms, functions—both planar and solid—with many interesting examples and ordinary differential equations. Mureşan explores the advantages of using the Wolfram Language when dealing with the number pi and describes the power of Mathematica when working with optimal control problems. The target audience for this text includes researchers, professors and students—really anyone who needs a state-of-the-art computational tool.
Geographical Models with Mathematica
The Wolfram Language’s powerful combination of extensive map data and computational agility is on display in André Dauphiné’s Geographical Models with Mathematica. This book gives a comprehensive overview of the types of models necessary for the development of new geographical knowledge, including stochastic models, models for data analysis, geostatistics, networks, dynamic systems, cellular automata and multi-agent systems, all discussed in their theoretical context. Dauphiné then provides over 65 programs that formalize these models, written in the Wolfram Language. He also includes case studies to help the reader apply these programs in their own work.
Our tour of new Wolfram Language books moves from terra firma to the stars in Geometric Optics: Theory and Design of Astronomical Optical Systems Using Mathematica. This book by Antonio Romano and Roberto Caveliere provides readers with the mathematical background needed to design many of the optical combinations that are used in astronomical telescopes and cameras. The results presented in the work were obtained through a different approach to third-order aberration theory as well as the extensive use of Mathematica. Replete with worked examples and exercises, Geometric Optics is an excellent reference for advanced graduate students, researchers and practitioners in applied mathematics, engineering, astronomy and astronomical optics. The work may be used as a supplementary textbook for graduate-level courses in astronomical optics, optical design, optical engineering, programming with Mathematica or geometric optics.
Don’t forget to check out Stephen Wolfram’s An Elementary Introduction to the Wolfram Language, now in its second edition. It is available in print, as an ebook and free on the web—as well as in Wolfram Programming Lab in the Wolfram Open Cloud. There’s also now a free online hands-on course based on the book. Read Stephen Wolfram’s recent blog post about machine learning for middle schoolers to learn more about the new edition.
Getting Started with Wolfram Language and Mathematica for Raspberry Pi, Kindle Edition
If you’re interested in the Raspberry Pi and how the Wolfram Language can empower the device, then you ought to check out this ebook by Agus Kurniawan. The author takes you through the essentials of coding with the Wolfram Language in the Raspberry Pi environment. Pretty soon you’ll be ready to try out computational mathematics, GPIO programming and serial communication with Kurniawan’s step-by-step approach.
Essentials of Programming in Mathematica
Whether you are already familiar with programming or completely new to it, Essentials of Programming in Mathematica provides an excellent example-driven introduction for both self-study and a course in programming. Paul Wellin, an established authority on Mathematica and the Wolfram Language, covers the language from first principles to applications in natural language processing, bioinformatics, graphs and networks, signal analysis, geometry, computer science and much more. With tips and insight from a Wolfram Language veteran and more than 350 exercises, this volume is invaluable for both the novice and advanced Wolfram Language user.
Geospatial Algebraic Computations, Theory and Applications, Third Edition
Advances in geospatial instrumentation and technology such as laser scanning have resulted in tons of data—and this huge amount of data requires robust mathematical solutions. Joseph Awange and Béla Paláncz have written this enhanced third edition to respond to these new advancements by including robust parameter estimation, multi-objective optimization, symbolic regression and nonlinear homotopy. The authors cover these disciplines with both theoretical explorations and numerous applications. The included electronic supplement contains these theoretical and practical topics with corresponding Mathematica code to support the computations.
Boundary Integral Equation Methods and Numerical Solutions: Thin Plates on an Elastic Foundation
For graduate students and researchers, authors Christian Constanda, Dale Doty and William Hamill present a general, efficient and elegant method for solving the Dirichlet, Neumann and Robin boundary value problems for the extensional deformation of a thin plate on an elastic foundation. Utilizing Mathematica’s computational and graphics capabilities, the authors discuss both analytical and highly accurate numerical solutions for these sorts of problems, and both describe the methodology and derive properties with full mathematical rigor.
Micromechanics with Mathematica
Seiichi Nomura demonstrates the simplicity and effectiveness of Mathematica as the solution to practical problems in composite materials, requiring no prior programming background. Using Mathematica’s computer algebra system to facilitate mathematical analysis, Nomura makes it practical to learn micromechanical approaches to the behavior of bodies with voids, inclusions and defects. With lots of exercises and their solutions on the companion website, students will be taken from the essentials, such as kinematics and stress, to applications involving Eshelby’s method, infinite and finite matrix media, thermal stresses and much more.
Tendências Tecnológicas em Computação e Informática (Portuguese)
For Portuguese students and researchers interested in technological trends in computation and informatics, this book is a real treat. The authors—Leandro Augusto Da Silva, Valéria Farinazzo Martins and João Soares De Oliviera Neto—gathered studies from both research and the commercial sector to examine the topics that mark current technological development. Read about how challenges in contemporary society encourage new theories and their applications in software like Mathematica. Topics include the semantic web, biometry, neural networks, satellite networks in logistics, parallel computing, geoprocessing and computation in forensics.
The End of Error: Unum Computing
Written with Mathematica by John L. Gustafson, one of the foremost experts in high-performance computing and the inventor of Gustafson’s law, The End of Error: Unum Computing explains a new approach to computer arithmetic: the universal number (unum). The book discusses this new number type, which encompasses all IEEE floating-point formats, obtains more accurate answers, uses fewer bits and solves problems that have vexed engineers and scientists for decades. With rich illustrations and friendly explanations, it takes no more than high-school math to learn about Gustafson’s novel and groundbreaking unum.
Want to find even more Wolfram technologies books? Visit Wolfram Books to discover books ranging across both topics and languages.
You may have heard that on March 20 there was a solar eclipse. Depending on where you are geographically, a solar eclipse may or may not be visible. If it is visible, local media make a small hype of the event, telling people how and when to observe the event, what the weather conditions will be, and other relevant details. If the eclipse is not visible in your area, there is a high chance it will draw very little attention. But people on Wolfram Community come from all around the world, and all—novices and experienced users and developers—take part in these conversations. And it is a pleasure to witness how knowledge of the subject and of Wolfram technologies and data from different parts of the world are shared.
Five discussions arose recently on Wolfram Community that are related to the latest solar eclipse. They are arranged below in the order they appeared on Wolfram Community. The posts roughly reflect on anticipation, observation, and data analysis of the recent eclipse, as well as computations for future and extraterrestrial eclipses.
I will take almost everything here from the Wolfram Community discussions, summarizing important and interesting points, and sometimes changing the code or visuals slightly. For complete details, I encourage you to read the original posts.
First, before the total solar eclipse happened on March 20, 2015, Wolfram’s own Jeff Bryant and Francisco Rodríguez explained how to see where geographically the eclipse is totally or partially visible. Using GeoEntities, Francisco was able to also highlight with green the countries from which at least the partial solar eclipse would be visible:
Jeff Bryant is in the US and Francisco Rodríguez is in Peru, so as you can see above, neither was able to see even the partial solar eclipse. The intense red area shows the visibility of the total eclipse, and the lighter red is the partial eclipse. I consoled them by telling them that quite soon—in the next decade—almost all countries in the world, including the US and Peru, will be able to observe at least a partial phase of a total solar eclipse:
Another great way to visualize chronological events is with a new Wolfram Language function, TimelinePlot. I’ve considered the last few years and the next few years, and have plotted the countries and territories (according to the ISO 3166-1 standard) where a total solar eclipse will be visible, as well as when:
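A minimal TimelinePlot sketch, using an illustrative subset of the eclipses in question rather than the full ISO 3166-1 computation:

```wolfram
(* A few total-eclipse dates, labeled by the countries and territories in the path *)
TimelinePlot[<|
   "Faroe Islands, Svalbard" -> DateObject[{2015, 3, 20}],
   "United States" -> DateObject[{2017, 8, 21}],
   "Chile, Argentina" -> {DateObject[{2019, 7, 2}], DateObject[{2020, 12, 14}]}|>]
```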
The image above shows the incredible powers of computational infographics. You see right away that a spectacular total solar eclipse will span the US from coast to coast on August 21, 2017 (see a related discussion below). You can also see that Argentina and Chile will get lucky, viewing a total eclipse twice in a row. Most subtly and curiously, the recent solar eclipse is unique in the sense that it covered two territories almost completely: the Faroe Islands and Svalbard. This means any inhabitant of these territories could have seen the total eclipse from any geo location, cloudiness permitting. Usually it’s quite the opposite: the observational area of a total eclipse is much smaller than the territory area it spans, and most of the inhabitants would have to travel to observe the total eclipse (fortunately, no visas needed). The behavior of the Solar System is very complex. The NASA data on solar eclipses goes just several thousand years into the past and future, losing precision drastically due to the chaos phenomenon.
At the time of the eclipse, I was in Odesa, Ukraine, which was in the partial eclipse zone. I made a separate post showing my position relative to the eclipse zone and grabbing a few photos of the eclipse. Using the orthographic GeoProjection, it’s easy to show that the total eclipse zone did not really cover any majorly populated places, passing mostly above ocean water. The black line shows the boundary of the partial eclipse visibility, which covered many populated territories:
The Faroe Islands were in the zone of the total solar eclipse, and above I show the shortest path, or geodesic, between the islands and my location. In a separate post (discussed further below), Marco Thiel posted a link to mesmerizing footage of the total solar eclipse, shot from an airplane (to avoid any cloudiness) by a BBC crew while flying above the Faroe Islands. Francisco actually showed in a comment how to compute the distance from Odesa to the partial eclipse border:
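A related distance computation can be sketched with GeoDistance. To avoid guessing Entity canonical names, the sketch below uses approximate coordinates for both endpoints; these numbers are assumptions, not values from the original post.

```wolfram
(* Approximate coordinates: Odesa, Ukraine and the Faroe Islands *)
odesa = GeoPosition[{46.48, 30.73}];
faroe = GeoPosition[{62.0, -6.8}];

(* Geodesic distance, converted to kilometers *)
UnitConvert[GeoDistance[odesa, faroe], "Kilometers"]
```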
My photos, shot with a toy camera, were of course nothing like the BBC footage. Dense cloud coverage above Ukraine permitted only a few glimpses of the chipped-off Sun. Most images were very foggy, but ImageAdjust did a perfect job of removing the pall. A sample unedited photo is available for download in my Wolfram Community post:
By the way, can you guess why you see the candy below? As I said in my post, the kids in my neighborhood in Ukraine observed the eclipse through the wrapper of this and other similar types of Ukrainian candy. The candy is cheap, and the wrapper is opaque enough to keep eyes safe when the Sun brightens in the patches between the clouds. Do you remember using floppy disks? Many people will remember that it was once common to look at the Sun through floppy disk film.
And this is where the conversation got picked up by our users. Sander Huisman, a physicist from the University of Twente in the Netherlands, asked a great question: “Wouldn’t it be cool if you could find your location just from the photos? We can calculate the coverage of the Sun for each of your photos, and inside the photo we can also find the time when it was taken. Using those two pieces of information, we should be able to identify the location of your photo, right?” I did not know how to go about such calculations, but Marco Thiel, an applied mathematician from the University of Aberdeen, UK, posted another discussion, Aftermath of the solar eclipse. Marco and Henrik Schachner, a physicist from the Radiation Therapy Center in Weilheim, Germany, tried to at least estimate the percentage of the Sun coverage using image processing and computational geometry functionality. This is the first part of the problem. If you have an idea of how to solve the second part, finding a location from a photo timestamp and percentage of the Sun cover, please join the discussion and post on Wolfram Community. Marco and Henrik used photos from Aberdeen, which was very close to the total eclipse zone.
Even though he was so close, Marco did not have a chance to capture the partial eclipse due to heavy cloudiness. Ironically, the photos he used came from Tanvi Chheda, a US student from Cornell University who was spending a semester abroad at Marco’s university. She grabbed the shots with her iPad, and what wonderful images they are, with the eclipse and birds together. Thank you, Tanvi, for sharing them on Wolfram Community! Here is one:
Well, that’s the turbulent nature of Wolfram Community—something interesting is always happening, and happening quite fast. I’ll summarize the main subject of Marco’s post in a moment (see the original Community post for more images and eclipse coverage estimation), but as Marco wrote: “Even before today’s eclipse, there were reports warning that Europe might face large-scale blackouts because the power grids would be strained by a lack of solar power. This is why I decided to use Mathematica to analyze some data about the effects on the power grid in the UK. I also used data from the Netatmo Weather Station to analyze variations in the temperature in Europe due to the eclipse.”
Marco owns a Netatmo Weather Station, and had written about its usage in an earlier post. He used an API to get data from many stations throughout Europe, and also tapped into the public data from the power grid. One of his interesting findings was a strong correlation between the eclipse period and a sharp rise in the hydroelectric power production:
For more observations, code, data, and analysis, I encourage you to read through the original post. There, Marco also touched on the subject of global warming and the relevance of high-resolution crowd-sourced data. To visualize the diversity of the discussion, I imported the whole text and used the new Wolfram Language function WordCloud:
It’s nice that the Wolfram Language code, as well as the text, is getting parsed, and you can see the most frequently used functions. In the code above, there are three handy tricks. First is that the option WordOrientation has diverse settings for words’ directions. Second is that the option ScalingFunctions can give the layout a good visual appeal, and the simple power law I’ve chosen is often more flexible than the logarithmic one. The third trick is subtler. It is the choice of background color to be the “bottom” color of the ColorFunction used. Then not only do the sizes of the words stress their weights, but they also fade into the background.
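Those three tricks might be sketched like this. The color scheme name is illustrative, and `text` is assumed to hold the imported discussion text from the previous step.

```wolfram
(* Assumption: "text" holds the imported discussion text *)
cf = ColorData["DeepSeaColors"];
WordCloud[DeleteStopwords[text],
  WordOrientation -> {{-Pi/4, Pi/4}},  (* trick 1: a range of word angles *)
  ScalingFunctions -> (#^0.7 &),       (* trick 2: a simple power law *)
  ColorFunction -> cf,
  Background -> cf[0]]                 (* trick 3: background = "bottom" color *)
```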
From the TimelinePlot infographics above, you can see that a total eclipse will span the US from northwest to southeast on August 21, 2017. I made yet another Wolfram Community post showcasing some computations with this eclipse. You should take a look at the original for all the details, but here is an image of all US counties that will be spanned during the total eclipse. Each county is colored according to the history of cloud cover above it from 2000 to 2015. This serves as an estimate for the probability of clear visibility of the eclipse. The colder the colors, the higher the chance of clear skies. That’s very approximate, though, especially taking into account the unreliability of weather stations. GeoEntities is a very nice function that selects only those geographical objects that intersect with the polygon of the total eclipse. Below is quite a cool graphic that I think only the Wolfram Language can build in a few lines of code:
And now that we’ve looked into the past and the future of the total solar eclipses, is there anything left to ponder? As it turns out, yes—the extraterrestrial solar eclipses! We live in unique times and on a unique planet with the angular diameter of its only Moon and its only Sun pretty much identical. I mentioned above a documentary where a BBC crew shot a video of the total solar eclipse from an airplane above the Faroe Islands. Quoting the show host, Liz Bonnin, right from the airplane: “There is no other planet in the Solar System that experiences the eclipse like this one… even though the Sun is 400 times bigger than the Moon, at this moment in our Solar System’s history, the Moon happens to be 400 times closer to the Earth than the Sun, and so they appear the same size…”
So can we verify that our planet is unique? In a recent Wolfram Community post, Jeff Bryant addressed this question. He made some computations using PlanetData and PlanetaryMoonData to investigate the solar eclipses on other planets. The main goal is to compare the angular diameter of the Sun to the angular diameter of the Moon in question, when observed from the surface of the planet in question. He used the semimajor axis of the Moon’s orbit as an estimate of the Moon’s distance from its host planet. Please see the complete code in the original post. Here I mention the final results. For Earth, we have an almost perfect ratio of 1, meaning that the Moon exactly covers the Sun in a total eclipse:
Now here is Mars’ data. The largest Moon, Phobos, is only .6 the diameter of the Sun viewed from the surface of Mars, so it can’t completely cover the Sun:
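The core comparison can be sketched as a small function. This is not Jeff's exact code (see the original post for that); the property names are standard PlanetData/PlanetaryMoonData properties, and the unit handling is hedged.

```wolfram
(* Ratio of a moon's angular diameter to the Sun's, seen from its host planet *)
angularRatio[planet_, moon_] := Module[{dSun, dMoon, rSun, rMoon},
   dSun = PlanetData[planet, "SemimajorAxis"];        (* planet-Sun distance *)
   dMoon = PlanetaryMoonData[moon, "SemimajorAxis"];  (* moon-planet distance *)
   rSun = Entity["Star", "Sun"]["Radius"];
   rMoon = PlanetaryMoonData[moon, "Radius"];
   N @ UnitConvert[(rMoon/dMoon)/(rSun/dSun)]];

angularRatio[Entity["Planet", "Earth"], Entity["PlanetaryMoon", "Moon"]]   (* ~1, per the text *)
angularRatio[Entity["Planet", "Mars"], Entity["PlanetaryMoon", "Phobos"]]  (* ~0.6, per the text *)
```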
With human missions to Mars becoming more realistic, would you not be curious how a solar eclipse looks over there? Here are some spectacular shots captured by NASA’s Mars rover Curiosity of Phobos, passing right in front of the Sun:
NASA/JPL-Caltech/Malin Space Science Systems/Texas A&M Univ.
These are the sharpest images of a solar eclipse ever taken from Mars. As you can see, Phobos covers the Sun only partially (60%, according to our calculations), as seen from the surface of Mars. Such a solar eclipse is called a ring, or annular, type. Jupiter’s data seems more promising:
Jupiter’s moon Amalthea comes closest, with a ratio of 0.9. But even though its orbit allows it to cover 90% of the Sun’s disk, the spectacular coronas of an Earth eclipse are probably not visible. During a total solar eclipse on Earth, the solar corona can be seen by the naked eye:
Do you have a few ideas of your own to share or a few questions to ask? Join Wolfram Community—we would love to see your contributions!
Download this post as a Computable Document Format (CDF) file.
With Mathematica, he’s making real progress. Nettleton says, “The rapid development environment that Mathematica provides, the ability to do things so concisely and with so much power out of the functional programming and pattern matching and all the things that are the great advantage of Mathematica—that allowed a very rapid development process, so something a panel of experts told me would take a number of people a number of years, one person managed to do most of it in six months.”
In this video, Nettleton describes Mathematica’s crucial role in both developing his model and communicating his findings.
You can find more on Nettleton’s research and see other Mathematica success stories on our Customer Stories pages.
Scientific understanding and modeling of complicated physical phenomena, and engineering based on such analysis, is imperative to prevent unnecessary loss of life from natural disasters. In this post, we’ll explore the science behind earthquakes to better understand why they happen and how we prepare for them.
Note: The dynamic examples in this post were built using Mathematica. Download the Computable Document Format (CDF) file provided to interact with the simulations and further explore the topics.
First, let’s start with locations. The following visualization is created from the U.S. Geological Survey (USGS) database of earthquakes that occurred between 1973 and early 2011 whose magnitudes were over 5. As you can clearly see, the epicenters are concentrated in narrow areas, usually on the boundaries of tectonic plates. In particular, there are severe seismic activities around the Pacific, namely the “Ring of Fire”. Unfortunately, Japan is sitting right in the middle of this highly active area.
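A hedged sketch of such a visualization, assuming a USGS catalog export with Latitude, Longitude and Magnitude columns (the filename and column names are illustrative):

```wolfram
(* Assumption: a USGS CSV export of 1973-2011 earthquakes *)
quakes = Import["usgs-1973-2011.csv", "Dataset", HeaderLines -> 1];
strong = Normal @ quakes[Select[#Magnitude > 5 &], {"Latitude", "Longitude"}];

(* Plot every epicenter as a small red point on a world map *)
GeoGraphics[{Red, PointSize[0.002],
   Point[GeoPosition[Values /@ strong]]}]
```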
Earthquakes often occur at the boundaries of tectonic plates, which form huge faults on the surface. When large enough forces are applied in two different directions, they overcome the friction along the boundary and cause a sudden movement. This phenomenon, known as strike-slip, is one of many mechanisms that cause earthquakes. The following animation simulates strike-slip at a fault and the seismic waves caused by it.
The scale of an earthquake is measured by the amount of energy released during the seismic activity. The magnitude scale is logarithmic. The following chart shows the relation between the scale and the energy released by the event, with a few recent major earthquakes indicated.
Mouse over the image to see the logarithmic graph.
PJ, or petajoule, may not be a familiar unit of energy to many people. We can use Wolfram|Alpha to convert it to more familiar measures, such as megatons of TNT explosive. The Sendai earthquake was measured at magnitude 9, and the energy released was approximately 2000 petajoules:
To put it into some context, the largest nuclear explosive ever tested (Tsar Bomba) was roughly 58 megatons. We are talking about an order of magnitude more energy than that of the largest nuclear bomb.
The magnitude scale is a base-10 logarithmic scale. For instance, the ratio of energy released between the 2011 Sendai earthquake (magnitude 9) and the 2010 Haiti earthquake (magnitude 7) is:
Thus, the Sendai earthquake released 1000 times more energy than the Haiti earthquake.
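We can check these numbers directly in Mathematica. The sketch below assumes the standard Gutenberg–Richter energy relation, log10(E/joule) = 1.5 M + 4.8, and uses the unit framework of more recent Mathematica versions rather than the Wolfram|Alpha query used in the original post:

```mathematica
(* Energy released by an earthquake of magnitude m, via the standard
   Gutenberg-Richter energy relation (an assumption of this sketch,
   not stated in the post): log10(E/joule) = 1.5 m + 4.8 *)
energy[m_] := Quantity[10.^(1.5 m + 4.8), "Joules"]

UnitConvert[energy[9], "Petajoules"]  (* roughly 2000 PJ *)

energy[9]/energy[7]  (* 10^3: a thousand times more energy *)
```

The factor of 1000 falls out immediately: each whole step in magnitude corresponds to a factor of 10^1.5 ≈ 31.6 in energy, so two steps give 10^3.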
So, how is all the energy from earthquakes transferred? Most of the energy is converted into heat generated by friction, but some of it is transformed and radiated as seismic waves. There are two types of seismic waves: body waves and surface waves.
Body waves travel through the earth and can be differentiated by the direction of oscillation during propagation. In P-waves, the particles vibrate in the same direction as the wave propagation, similar to sound waves. In contrast, particles move perpendicular to the direction of propagation in S-waves.
Surface waves travel only through the crust, the surface layer of the Earth. However, they are responsible for much of the destruction, due to their properties. Surface waves, in turn, come in two types: Love waves and Rayleigh waves.
Love waves create motions similar to S-waves, but horizontally. This horizontal movement is particularly damaging to the foundations of buildings.
Rayleigh waves travel the way waves roll across lakes or oceans: during propagation, particles move in elliptical paths. Sometimes called ground roll, Rayleigh waves are low frequency (less than 20 Hz) and are often detected by animals, but not humans.
Understanding seismic waves is essential for minimizing or preventing structural damage to buildings during earthquakes. So the question is, what can you do?
If external forces are applied, buildings will sway. The structural dynamics are quite complex and depend on a lot of parameters. But as an example, I used a simple damped harmonic oscillator equation to simulate the sway of a building in the following animation (the gray cylinders represent the foundations of the building):
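A minimal sketch of the kind of model behind such an animation is a damped harmonic oscillator driven by sinusoidal ground motion. The coefficients below are illustrative choices, not the parameters used for the animation in the post:

```mathematica
(* Sway x[t] of a building modeled as a damped harmonic oscillator
   driven by sinusoidal ground acceleration; all coefficients here
   are illustrative. *)
sol = NDSolve[{x''[t] + 0.4 x'[t] + 4 x[t] == 0.5 Sin[3 t],
    x[0] == 0, x'[0] == 0}, x, {t, 0, 30}];

Plot[Evaluate[x[t] /. sol], {t, 0, 30}, AxesLabel -> {"t", "sway"}]
```

Varying the damping and stiffness coefficients shows how different structures respond to the same ground motion, which is exactly the trade-off the engineering techniques below are designed around.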
The sway helps to dissipate energy from the seismic waves. Through structural design, material selection, and construction techniques, engineers have tried to reduce the effect of earthquakes on buildings. One such example is a tuned mass damper, a device installed in buildings to reduce harmonic resonance.
Another option that is widely used today is a technique called base isolation, or seismic isolation. In a nutshell, you put devices such as lead rubber bearings at the base of a building to isolate the structure from earthquake vibrations.
This technique can help reduce structural stress significantly. It is also considered to be a good option for short buildings with high stiffness, or for retrofitting existing structures. In fact, many historical monuments in the U.S. have already been retrofitted with base isolation systems to reduce the damage from earthquakes.
We are providing a downloadable CDF file for this post. Most examples are dynamic, and you can interact with the simulations. Feel free to use Mathematica or Wolfram CDF Player to explore the content.
Because of the importance of groundwater recharge and discharge in the hydrological cycle, the U.S. National Research Council (NRC) deems research on them to be of critical priority. However, their rates and patterns are so complex that it takes years of study to estimate them.
With a team of colleagues, Lin is developing a suite of Mathematica interactive manipulations that utilize advanced matrix-computing and image-processing algorithms. It has several user interfaces, and offers the flexibility to apply expert-level knowledge to extract spatial patterns. It also provides a generic pattern-recognition approach that supports virtually any spatial decision support system (SDSS) used to assist in management applications such as water resources, land use, and agricultural development.
More details and other applications are on the Mathematica Solution for Geosciences page. You can find more on Lin’s work and other cutting-edge uses of Mathematica in our Portraits of Success pages.
Here’s an example from a recent trail run I did at a nearby park. The data is stored in a GPX (GPS exchange format) file, which is a specific type of XML. We can bring the data into Mathematica using Import.
Now we can extract all the track points (the XML elements named “trkpt”).
Each of these track points has a latitude, longitude, elevation, and time.
The first thing I’d like to do is make an elevation profile plot. This plot will show the distance on the x axis and the elevation on the y axis. The elevation values are measured quantities that come straight from the GPX file, while the distance values are quantities we can compute from the latitude/longitude pairs.
Let’s pull out all the latitudes, longitudes, and elevations.
Next, let’s partition the list of points into a list of adjacent pairs of points. Then we can compute the distance between the two points in each pair using the GeoDistance function. GPS latitude and longitude coordinates are measured in the WGS-84 reference datum, so we’ll pass that information along to GeoPosition and GeoDistance to make sure the computed distances are as accurate as possible.
Now we have a list of relative distances between each successive point. Let’s convert these relative distances to absolute distances from the beginning of the track to each point by summing up all the previous relative distances.
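The two steps above can be sketched as follows, assuming `latLons` holds the {lat, lon} pairs extracted from the track points (the variable names here are mine, not the post’s):

```mathematica
(* Pair up adjacent track points, measure each segment in the WGS-84
   datum, then accumulate segment lengths into distances from the
   start of the track. *)
pairs = Partition[latLons, 2, 1];
relative = GeoDistance[GeoPosition[#1, "WGS84"],
     GeoPosition[#2, "WGS84"]] & @@@ pairs;
distances = Prepend[Accumulate[relative], 0];
```

Prepending 0 keeps `distances` the same length as the point list, so each track point gets an absolute distance from the start.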
The resulting distance and elevation values are in meters, but in the United States we are accustomed to measuring distance in miles and elevation in feet, so let’s convert the units using the Convert function.
Now that the data is ready, we can finally make our elevation profile plot.
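A sketch of the plot itself, assuming `distances` (in miles) and `elevations` (in feet) hold the converted values from the previous steps:

```mathematica
(* Elevation profile: distance along the track vs. elevation. *)
ListLinePlot[Transpose[{distances, elevations}],
  AxesLabel -> {"miles", "feet"}, PlotRange -> All]
```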
Those three 100-foot hills in the last couple of miles were tough.
So that was interesting, but what I’m really more interested in is seeing my GPS track on a map. This will be a two-step process. First, we need to draw a suitable map. Second, we need to overlay the GPS track on top of it.
For the map I’m going to use USGS aerial photography from TerraServer, a site run by Microsoft that provides a web service to download USGS images. We’ll use InstallService from the WebServices` package to make it easy to get the images we need.
We have a rectangle (the bounding box of the GPS track) that we’d like to map, so let’s determine what that rectangle is and ask TerraServer for the AreaBoundingBox. This will tell us which images we need to download. For good measure we’ll go ahead and make sure the background is at least 10% larger than the track on each side.
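The padding step can be sketched like this, assuming `lats` and `lons` are lists of the track’s coordinates (again, hypothetical names for illustration):

```mathematica
(* Pad the track's bounding box by 10% on each side. *)
pad[{min_, max_}] := {min, max} + 0.1 (max - min) {-1, 1}
{latRange, lonRange} =
  pad /@ {{Min[lats], Max[lats]}, {Min[lons], Max[lons]}}
```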
Now we need to create a two-dimensional grid of map tile images. Let’s figure out which tiles we need to request.
We’ll get a sample TileId value for later modification.
Now we can download the tiles in the specified X and Y ranges using the GetTile web service method.
We can easily put these tiles together with ImageAssemble.
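In outline, assuming a helper `getTile[x, y]` that stands in for the GetTile web service call with the sample TileId modified to column x and row y:

```mathematica
(* Fetch the grid of tiles and stitch them into one image. Rows run
   north to south so the assembled map is oriented correctly. *)
tiles = Table[getTile[x, y], {y, yMax, yMin, -1}, {x, xMin, xMax}];
map = ImageAssemble[tiles]
```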
In addition to aerial photography, TerraServer also provides topographic maps.
Now comes the challenging part. We have GPS points given in latitude and longitude (3D spherical coordinates). We have an image drawn in some arbitrary 2D Cartesian coordinate system. How do we correctly combine these? Well, we need to make sure our latitude and longitude values are projected from 3D spherical to 2D Cartesian coordinates using the same method as the source images.
We can see from the TerraServer web service API documentation that it converts latitude and longitude values to 2D Cartesian coordinates using the Universal Transverse Mercator coordinate system. TerraServer also provides a function to do the conversion for you. This function would work quite well for a handful of points, but this GPS track has thousands of points. It would take an awfully long time and put an unnecessary burden on TerraServer’s resources to make this API request every time we want to convert a single point.
Fortunately, this conversion is well defined, so we can implement it directly in Mathematica. Unfortunately, the code is a little messy and beyond the scope of this blog post. Here’s the magical function:
I should note that the image tiles we downloaded from TerraServer cover an area equal to or greater than that which we requested. They almost certainly cover a greater area, so let’s figure out the area they do cover.
Now we’ll convert these to UTM.
The image itself isn’t in UTM coordinates; it’s in its own pixel coordinate system. So we will need to scale all UTM values into the image’s coordinate system. The next step is to figure out what scaling factors to use.
With this information we can now define a single function that will convert latitude and longitude values into the coordinate space of our image.
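A sketch of such a function, composing the UTM projection with a rescaling into pixel coordinates. Here `toUTM` stands for the projection function defined above, and `utmWest`, `utmEast`, `utmSouth`, `utmNorth`, `width`, and `height` are assumed to hold the image’s UTM bounds and pixel dimensions:

```mathematica
(* Project lat/lon to UTM, then rescale into the image's pixel
   coordinate space. *)
toImageCoords[latLon_] := Module[{x, y},
  {x, y} = toUTM[latLon];
  {Rescale[x, {utmWest, utmEast}, {0, width}],
   Rescale[y, {utmSouth, utmNorth}, {0, height}]}]
```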
Now we can map this function across all of our GPS points.
Finally, we can show the track on top of the map image.
The map is good, but not perfect. Let’s trim the image down to the original bounding rectangle we requested from TerraServer.
We can even make it interactive.
Here’s the track on a topographic map.
Let’s try a more elaborate example to see if we can take this mapmaking to the next level.
Last November my wife ran the Indianapolis Monumental Marathon. I rode my bike to several different locations along the course to cheer her on and take photos. We both wore GPS devices to record our tracks. The funny thing about this was that she ran faster than either of us expected and I missed her several times because she arrived at a location on the course before I did. Let’s find out what this looked like.
For this example we’ll be dealing with two GPS tracks: Rob’s and Melissa’s. Let’s begin by importing the GPX files and extracting the latitudes and longitudes from the track points.
Next, we’ll get the map. Let’s take all the code we used earlier and stick it into a function to make it easier to reuse.
This function returns the map image and a pure function that can be used to translate latitude/longitude pairs into the coordinate space of the map.
Let’s take a look.
Now we have a map of the race course (in blue) and my path on the bike (in red). This would make a neat interactive map, but we can’t use the same trick as before, where we added another track point to the line for each step of the animation, because the two tracks wouldn’t be properly synchronized.
To solve the synchronization problem, we’ll need to get the original time stamp of each track point from the GPX data.
Next we will build an InterpolatingFunction that when given a time will return the approximate location for the track. Since the track points are spaced one or more seconds apart, any time between track points will return a value interpolated between the two nearest points.
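The idea can be sketched in a couple of lines, assuming `times` is a track’s list of time stamps (in seconds) and `latLons` the matching {lat, lon} pairs:

```mathematica
(* Position-vs.-time interpolation for one track; order 1 keeps the
   interpolated positions on the straight segments between GPS fixes. *)
track = Interpolation[Transpose[{times, latLons}],
   InterpolationOrder -> 1];

track[t]  (* approximate {lat, lon} at time t *)
```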
We need to determine the earliest and latest times of either track.
Now we can make the interactive map.
Because the course is almost entirely north/south, this map has a very inconvenient aspect ratio. It’s much too tall and not very wide. We could improve it by focusing on a small section (with a reasonable aspect ratio) at any given time. So let’s zoom in a bit and crop the image.
Here we have the cropped PlotRange and ImageSize, as well as the larger, tile-aligned PlotRange and ImageSize. Let’s keep track of the larger size so we have a larger work area.
Let’s choose an arbitrary size for the final image.
As we want the focus at any given time to be the latest points in the two tracks, let’s center the image at the midpoint between the latest track points from each track.
We can just as easily export this as a sequence of images and assemble them into a movie using QuickTime Player (with QuickTime Pro features enabled).
I counted at least four times when I (red dot) arrived at a location on the marathon course after my wife (blue dot) had already passed. Next time I may have to tell her to slow down.
These are just a few ways to analyze GPS data with Mathematica. We could just as easily plot speed or pace. We could request map images from other services. We could zoom in or out on the map. Mathematica’s integration of symbolic computation, graphics, import/export, web connectivity, and computable data (e.g. GeodesyData) makes it the ideal tool for these types of data explorations.
The main phenomenon is the propagation of so-called shallow-water waves: water waves whose wavelength is large compared to the depth of the ocean. Those waves satisfy a partial differential equation (PDE) that was worked out in the 1800s. The equation is a nasty nonlinear one that can’t be solved exactly.
I’ve been working on the numerical differential equation capabilities of Mathematica for more than a decade. Our goal is to automate the solution of all types of equations, so users just have to enter their equation, and Mathematica then does all the analysis to select and apply the best algorithm.
The shallow-water equations are a good test, and I’m happy to say Mathematica passes it with flying colors. One essentially just has to type the equations in and get the solution, which is then easy to visualize, especially using the new visualization capabilities of Mathematica 6. (Click the image below to see the Mathematica animation.)
Let me explain a little about what’s involved in getting this.
Here’s an image of a Mathematica notebook that produces the main picture (you can download the notebook here):
The first thing is just the equations, given in Mathematica StandardForm (one could use TraditionalForm so they look exactly as they would in a traditional textbook, but that way is slightly harder to understand). Then we just use NDSolve to solve the equations. Then we use Plot3D to create a 3D visualization, complete with specular surfaces and everything. And finally, we use Animate to create an animation. (We can immediately export it for the web using Export.)
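The same workflow can be illustrated on a smaller example. This sketch substitutes the linear (2+1)-dimensional wave equation for the nonlinear shallow-water system of the notebook, with a Gaussian initial bump and periodic boundary conditions, but the NDSolve-then-Plot3D pattern is identical:

```mathematica
(* Solve a (2+1)-dimensional initial-value PDE, then plot a time
   slice of the resulting InterpolatingFunction. *)
sol = First[NDSolve[
    {D[u[t, x, y], t, t] == D[u[t, x, y], x, x] + D[u[t, x, y], y, y],
     u[0, x, y] == Exp[-(x^2 + y^2)],
     Derivative[1, 0, 0][u][0, x, y] == 0,
     u[t, -10, y] == u[t, 10, y],   (* periodic in x *)
     u[t, x, -10] == u[t, x, 10]},  (* periodic in y *)
    u, {t, 0, 5}, {x, -10, 10}, {y, -10, 10}]];

Plot3D[u[4, x, y] /. sol, {x, -10, 10}, {y, -10, 10}, PlotRange -> All]
```

Wrapping the Plot3D in Animate over t produces the kind of animation shown above.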
What’s going on inside NDSolve? That one function is doing some pretty complicated things; it’s almost a microcosm of a century or two of applied mathematics.
NDSolve is doing a lot of things that can really only be done in Mathematica, by combining Mathematica’s strengths not only in numerical computation, but also in symbolic computation and discrete mathematics.
That you can just type an equation into NDSolve relies on Mathematica’s general symbolic architecture. Once NDSolve gets an equation, it immediately determines its structure. In this case, it finds out that it’s been given a (2+1)-dimensional initial-value PDE.
For that type of equation, it forms a discrete grid in space (with the grid structure determined automatically to meet a certain accuracy criterion), then generates a large system of first-order ODEs on that grid, automatically incorporating all the necessary boundary conditions.
NDSolve has about 20 different families of methods for solving ODEs. In this case, it actually switches automatically between explicit and implicit methods depending on local stiffness. (The implicit methods get to use some of our very fast sparse matrix solving capabilities.)
In the end, NDSolve packages its solution as an InterpolatingFunction, a very convenient Mathematica construct that directly represents an approximate function, which in this case we chose to plot using Plot3D.
(Even though NDSolve can figure out what to do automatically, you can always give it hints, or even tell it exactly what method to use. In this case, you can improve the quality of the solution by choosing to use a pseudospectral method with a specified number of grid points.)
As one of the people responsible for NDSolve, I know how complicated all the things it does inside are. But when you use NDSolve, all you have to do is type your equation in, and let Mathematica do the rest. We’ve done the work (and it’s been a lot of work, I might add) to have everything run efficiently and automatically from there.
It might take a while to work out the physics of the shallow-water-wave approximation to a tsunami. But I think I can say that Mathematica’s part of solving the equations could be accomplished before a tsunami has propagated very far.